
International Scientists Call for More Transparency in AI Research

A group of international scientists from institutions including the Princess Margaret Cancer Centre, the University of Toronto, Stanford University, Johns Hopkins, the Harvard School of Public Health, and the Massachusetts Institute of Technology is calling for more transparency in artificial intelligence (AI) research. The driving force behind the call is to free up important findings that could help accelerate cancer treatment.

In the article, published in Nature on October 14, 2020, the scientists called on scientific journals to raise their standards for transparency among computational researchers. The group also advocated that their colleagues release their code, models, and computational environments alongside their publications.

The paper was titled “Transparency and reproducibility in artificial intelligence.” 
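The recommendation to release code, models, and computational environments is concrete: a reader of a paper should be able to obtain and verify the exact artifacts behind a result. As a minimal illustration of what that can look like in practice (an assumption on our part, not a format prescribed by the paper), the Python sketch below writes a release manifest recording a trained model file's checksum and the environment it was produced in. The file names, manifest fields, and helper functions are hypothetical.

```python
# Illustrative sketch only: the paper calls for releasing code, models, and
# computational environments, but does not prescribe this format. File names,
# manifest fields, and helper functions below are hypothetical.

import hashlib
import json
import platform
import sys
from pathlib import Path

MODEL_PATH = Path("model_weights.bin")  # hypothetical trained-model artifact


def sha256_of(path: Path) -> str:
    """A checksum lets other labs verify they hold the exact published model."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def write_release_manifest(out: Path = Path("release_manifest.json")) -> None:
    """Record the model checksum and the environment it was produced in."""
    manifest = {
        "model_file": MODEL_PATH.name,
        "model_sha256": sha256_of(MODEL_PATH),
        "python_version": sys.version,
        "platform": platform.platform(),
        # A real release would also pin every library version
        # (e.g. the output of `pip freeze`) so the environment
        # can be rebuilt exactly.
    }
    out.write_text(json.dumps(manifest, indent=2))


if __name__ == "__main__":
    write_release_manifest()
```

The point of the checksum is that reproduction can start from verifiable artifacts rather than from a prose description of the methods.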

Releasing AI Study Details

Dr. Benjamin Haibe-Kains is a Senior Scientist at Princess Margaret Cancer Centre and first author of the publication. 

“Scientific progress depends on the ability of researchers to scrutinize the results of a study and reproduce the main finding to learn from,” Dr. Haibe-Kains says. “But in computational research, it’s not yet a widespread criterion for the details of an AI study to be fully accessible. This is detrimental to our progress.” 

The concerns arose following a Google Health study published by McKinney et al. in a major scientific journal in 2020, which claimed that an AI system could outperform human radiologists in robustness and speed at breast cancer screening. The study received widespread attention across various top media publications.

Inability to Reproduce Models

One of the major concerns that arose following the study was that it did not thoroughly describe the methods used, nor release the underlying code and models. This lack of transparency meant researchers could not learn how the model operates, and that other institutions could not use it.

“On paper and in theory, the McKinney et al. study is beautiful,” Dr. Haibe-Kains says. “But if we can’t learn from it then it has little to no scientific value.”

Dr. Haibe-Kains is jointly appointed as Associate Professor in Medical Biophysics at the University of Toronto and is an affiliate of the Vector Institute for Artificial Intelligence.

“Researchers are more incentivized to publish their findings rather than spend time and resources ensuring their study can be replicated,” Dr. Haibe-Kains continues. “Journals are vulnerable to the ‘hype’ of AI and may lower the standards for accepting papers that don’t include all the materials required to make the study reproducible — often in contradiction to their own guidelines.”

This environment means AI models can take longer to reach clinical settings, and in the meantime researchers can neither replicate the models nor learn from them.

To remedy this, the group of researchers proposed various frameworks and platforms that allow methods, models, and computational environments to be shared.
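As an illustrative counterpart (again an assumption on our part, not one of the platforms the group names), the sketch below shows how a second institution could verify such a shared release before attempting to replicate a study. It reuses the hypothetical manifest layout from the earlier example.

```python
# Companion sketch, assuming the hypothetical release_manifest.json layout
# from the earlier example.

import hashlib
import json
from pathlib import Path


def verify_release(manifest_path: Path = Path("release_manifest.json")) -> bool:
    """Check a downloaded model against its published checksum."""
    manifest = json.loads(manifest_path.read_text())
    model = Path(manifest["model_file"])
    digest = hashlib.sha256(model.read_bytes()).hexdigest()
    if digest != manifest["model_sha256"]:
        print("Checksum mismatch: this is not the published model.")
        return False
    print(f"Verified {model} against the published checksum.")
    print(f"Reported training environment: {manifest['platform']}")
    return True


if __name__ == "__main__":
    verify_release()
```

Only after a check like this passes does it make sense to compare results, which is exactly the kind of scrutiny the authors argue current publication practices often make impossible.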

“We have high hopes for the utility of AI for our cancer patients,” Dr. Haibe-Kains says. “Sharing and building upon our discoveries — that’s real scientific impact.”

 

Alex McFarland is a tech writer who covers the latest developments in artificial intelligence. He has worked with AI startups and publications across the globe.