

Paper Examines How To Reduce Risk Of Using AI in Medicine


Artificial intelligence programs are capable of improving healthcare in a variety of ways. For instance, AI applications can use computer vision to help doctors diagnose conditions from X-rays and fMRIs. Machine learning algorithms can also help reduce false-positive rates by extracting subtle patterns from medical data that humans may not be able to find. However, with these possibilities come new challenges, and a new article recently published in Science examines possible risks and regulatory strategies for medical machine learning techniques, in an effort to minimize any negative side effects of employing AI in a medical context.

Expanding Applications For AI In Healthcare

AI is seeing its applications in the medical field expand rapidly. Recent developments in healthcare driven by AI include the creation of a new pharmaceutical company that aims to use AI to create new drugs, AI-driven remote health sensors, and computer vision apps that analyze CT scans and X-rays.

To be more precise, Genesis Therapeutics is a startup aiming to use AI to speed up the process of drug discovery, hoping to create drugs that can reduce the severity of debilitating diseases. Genesis Therapeutics is just one of almost 170 firms using AI to research new drug formulations. Meanwhile, in terms of health-monitoring devices, iRhythm and the French AI startup Cardiologs are making use of AI algorithms to analyze ECG data and monitor the health of people who have heart conditions or are at risk of complications. The software designed by these companies can detect cardiac arrhythmias, irregularities in the heart's rhythm.

Finally, a recent study investigating how computer vision can be applied to medical images found that computer vision systems perform at least as well as, or better than, expert radiologists when examining CT scans for small hemorrhages. The algorithms used in the study were able to render predictions after examining a CT scan for just one second, and they were also able to localize the hemorrhage within the brain.

So while the potential benefits of using AI in healthcare are clear, what is less clear is what new challenges and risks will arise as a side effect of employing AI within the healthcare field.

Regulating An Expanding Field

As TechXplore reported, in order to assess the potential drawbacks of using AI in healthcare, a group of researchers recently published a paper in Science that aims to anticipate potential problems with medical AI and explore possible solutions to them. Problems that may arise from using AI in healthcare include inappropriate treatment recommendations that result in injury, privacy concerns, and algorithmic bias/inequality.

To date, the FDA has approved only medical AI that uses “locked algorithms”: algorithms that reliably produce the same result every time they are run. However, much of AI’s potential lies in its ability to learn from and respond to new types of inputs. To enable “adaptive algorithms” to see wider use and gain FDA approval, the authors of the paper took an in-depth look at how the risks of updating algorithms can be mitigated.
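
To make the distinction concrete, here is a minimal, hypothetical sketch in Python, using scikit-learn with synthetic data invented for the example: a locked model is trained once and then frozen, while an adaptive model continues to update its parameters as new cases arrive after deployment.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# Synthetic stand-in data; a real system would use clinical features.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 5))
y_train = (X_train[:, 0] > 0).astype(int)

# A "locked" algorithm: trained once, then frozen. After approval it
# always maps the same input to the same output.
locked_model = SGDClassifier(loss="log_loss", random_state=0)
locked_model.fit(X_train, y_train)

# An "adaptive" algorithm: keeps updating its parameters as new cases
# arrive after deployment, so its behavior can drift over time.
adaptive_model = SGDClassifier(loss="log_loss", random_state=0)
adaptive_model.fit(X_train, y_train)

X_new = rng.normal(size=(20, 5))   # new post-deployment cases
y_new = (X_new[:, 0] > 0).astype(int)
adaptive_model.partial_fit(X_new, y_new)  # incremental update; a locked
                                          # model never takes this step
```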

The authors advocate that machine learning engineers and researchers focus on continuous monitoring of models over the lifetime of their deployment. Among the suggested tools for monitoring AI systems is AI itself, which could generate automated reports on how a system is behaving. It’s also possible that multiple AI devices could monitor one another.
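
As a rough illustration of what such continuous monitoring could look like in code, here is a minimal Python sketch; the class name, window size, and alert threshold are assumptions made for illustration, not values from the paper. It tracks a deployed model's rolling accuracy against confirmed outcomes and raises an automated alert when performance drifts below a threshold.

```python
from collections import deque

class DeploymentMonitor:
    """Tracks a deployed model's rolling accuracy against confirmed
    outcomes and flags possible drift. Window size and threshold are
    illustrative assumptions, not values from the paper."""

    def __init__(self, window_size=500, alert_threshold=0.90):
        self.outcomes = deque(maxlen=window_size)  # 1 = correct, 0 = incorrect
        self.alert_threshold = alert_threshold

    def record(self, prediction, ground_truth):
        # Log one case once its true outcome has been confirmed.
        self.outcomes.append(1 if prediction == ground_truth else 0)

    def rolling_accuracy(self):
        if not self.outcomes:
            return None
        return sum(self.outcomes) / len(self.outcomes)

    def check(self):
        # Produce an automated status report; in practice this might be
        # sent to the manufacturer and the regulator.
        acc = self.rolling_accuracy()
        if acc is not None and acc < self.alert_threshold:
            return f"ALERT: rolling accuracy {acc:.1%} is below threshold"
        return "OK"
```

A real monitoring pipeline would likely track more than raw accuracy, for example calibration and performance across patient subgroups, but the basic loop of logging confirmed outcomes and reporting automatically is the same.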

“To manage the risks, regulators should focus particularly on continuous monitoring and risk assessment, and less on planning for future algorithm changes,” said the authors of the paper.

The authors of the paper also recommend that regulators focus on developing new methods of identifying, monitoring, assessing, and managing risks. The paper applies many of the techniques that the FDA has used to regulate other forms of medical technology.

As the paper’s authors explained:

“Our goal is to emphasise the risks that can arise from unanticipated changes in how medical AI/ML systems react or adapt to their environments. Subtle, often unrecognised parametric updates or new types of data can cause large and costly mistakes.”