Artificial intelligence programs are capable of improving healthcare in a variety of ways. For instance, AI applications can use computer vision to help doctors diagnose conditions from X-rays and MRIs, and machine learning algorithms can reduce false-positive rates by extracting subtle patterns from medical data that humans may not be able to find. However, with these possibilities come new challenges. A new article published in Science examines possible risks and regulatory strategies for medical machine learning techniques, in an effort to minimize the negative side effects of employing AI in a medical context.
Expanding Applications For AI In Healthcare
AI is seeing its applications in the medical field expand rapidly. Recent AI-driven developments in healthcare include the creation of a new pharmaceutical company that aims to use AI to discover drugs, the creation of AI-driven remote health sensors, and computer vision apps that analyze CT scans and X-rays.
To be more precise, Genesis Therapeutics is a startup aiming to use AI to speed up drug discovery, hoping to create drugs that can reduce the severity of debilitating diseases. Genesis Therapeutics is just one of almost 170 firms using AI to research new drug formulations. Meanwhile, in terms of health monitoring devices, iRhythm and the French AI startup Cardiologs are making use of AI algorithms to analyze ECG data and monitor the health of those who have heart conditions or are at risk of complications. The software designed by the companies can detect heart murmurs, a condition caused by turbulent blood flow.
Finally, a recent study investigating how computer vision can be applied to medical images found that computer vision systems perform at least as well as, and sometimes better than, expert radiologists when examining CT scans to find small hemorrhages. The algorithms used in the study were able to render predictions after examining CT scans for just one second, and they were also able to localize the hemorrhage within the brain.
So while the potential benefits of using AI in healthcare are clear, what is less clear is which new challenges and risks will arise as a side effect of employing AI within the healthcare field.
Regulating An Expanding Field
As TechXplore reported, a group of researchers recently published a paper in Science assessing the potential drawbacks of using AI in healthcare, aiming to anticipate potential problems and explore possible solutions to them. Problems that may arise from using AI in the healthcare field include inappropriate treatment recommendations that result in injury, privacy concerns, and algorithmic bias and inequality.
The FDA has only approved medical AI that uses “locked algorithms”, algorithms that reliably produce the same result every time they are run. However, much of AI’s potential lies in its ability to learn and respond to new types of inputs. In order to enable “adaptive algorithms” to see more use and get approval from the FDA, the authors of the paper took an in-depth look at how the risks related to updating algorithms can be mitigated.
The authors advocate that machine learning engineers and researchers should focus on continuous monitoring of models over the lifetime of their deployment. Among the suggested tools to monitor AI systems was AI itself, which could help give automated reports on how an AI is behaving. It’s also possible that multiple AI devices could monitor each other.
“To manage the risks, regulators should focus particularly on continuous monitoring and risk assessment, and less on planning for future algorithm changes,” said the authors of the paper.
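As an illustration of the continuous-monitoring idea, here is a hedged, minimal sketch (not taken from the paper): an automated check that compares a deployed model's recent positive-prediction rate against the rate observed at approval time and raises a flag when the drift exceeds a tolerance. The function name and threshold are invented for illustration.

```python
# Hedged sketch of automated monitoring for a deployed medical model (not the
# paper's method): flag drift in the model's positive-prediction rate.

def drift_alert(baseline_rate: float, recent_preds: list,
                tolerance: float = 0.10) -> bool:
    """True if the recent positive-prediction rate drifts from the
    approval-time baseline by more than the tolerance."""
    recent_rate = sum(recent_preds) / len(recent_preds)
    return abs(recent_rate - baseline_rate) > tolerance

# Model approved with a 20% positive rate; lately it flags 60% of cases.
print(drift_alert(0.20, [1, 0, 1, 0, 1, 0, 1, 1, 0, 1]))  # True
```

A production system would of course use proper statistical tests and track many signals, but the core loop, comparing live behavior against an approved baseline, is the same.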
The authors of the paper also recommend that regulators focus on developing new methods of identifying, monitoring, assessing, and managing risks. The paper applies many of the techniques that the FDA has used to regulate other forms of medical tech.
As the paper’s authors explained:
“Our goal is to emphasise the risks that can arise from unanticipated changes in how medical AI/ML systems react or adapt to their environments. Subtle, often unrecognised parametric updates or new types of data can cause large and costly mistakes.”
Scientists Detect Loneliness Through The Use Of AI And NLP
Researchers from the University of California San Diego School of Medicine have made use of artificial intelligence algorithms to quantify loneliness in older adults and determine how older adults might express loneliness in their speech.
Over the past twenty years or so, social scientists have described a trend of rising loneliness in the population. Studies done over the past decade in particular have documented rising loneliness rates across large swaths of society, which has impacts on depression rates, suicide rates, drug use, and general health. These problems are only exacerbated by the Covid-19 pandemic, as people are unable to safely meet up and socialize in person. Certain groups are more vulnerable to extreme loneliness, such as marginalized groups and older adults. As MedicalXpress reported, one study done by UC San Diego found that senior housing communities had loneliness rates approaching 85% when counting those who reported experiencing moderate or severe loneliness.
In order to determine solutions to this problem, social scientists need to get an accurate view of the situation, determining both the depth and breadth of the issue. Unfortunately, most methods of gathering data on loneliness are limited in notable respects. Self-reporting, for instance, can be biased towards the more extreme cases of loneliness. In addition, questions that directly ask study participants to quantify how “lonely” they feel can sometimes be inaccurate due to social stigmas surrounding loneliness.
In an effort to design a better metric for quantifying loneliness, the authors of the study turned to natural language processing and machine learning. The NLP methods were used alongside traditional loneliness measurement tools, and it's hoped that analyzing the natural ways people use language will lead to a less biased, more honest representation of people's loneliness.
The new study’s senior author was Ellen Lee, assistant professor of psychiatry at the UC San Diego School of Medicine. Lee and the other researchers focused their study on 80 participants between the ages of 66 and 94. Participants were encouraged to answer questions in a way that was more natural and unstructured than in most other studies; the researchers weren’t just asking questions and classifying answers. As first author Varsha Badal, Ph.D., explained, using machine learning and NLP allowed the research team to take these long-form interview answers and find how subtle word choices and speech patterns, taken together, could be indicative of loneliness:
“NLP and machine learning allow us to systematically examine long interviews from many individuals and explore how subtle speech features like emotions may indicate loneliness. Similar emotion analyses by humans would be open to bias, lack consistency, and require extensive training to standardize.”
According to the research team, individuals who were lonely had noticeable differences in the ways they responded to the questions compared to non-lonely respondents. Lonely respondents would express more sadness when asked questions regarding loneliness and had longer responses in general. Men were less likely to admit feeling lonely than women. In addition, men were more likely to use words expressing joy or fear than women were.
The researchers of the study explained that the results helped elucidate the differences between typical research metrics for loneliness and the way individuals subjectively experience and describe loneliness. The results of the study imply that loneliness could be detected through the analysis of speech patterns, and if these patterns prove to be reliable they could help diagnose and treat loneliness in older adults. The machine learning models designed by the researchers were able to predict qualitative loneliness with approximately 94% accuracy. More research will need to be conducted to see if the model is robust and if its success can be replicated. In the meantime, members of the research team are hoping to explore how NLP features might be correlated with wisdom and loneliness, which have an inverse correlation in older adults.
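The study's actual NLP pipeline is not detailed here, but the kinds of features it describes (response length, emotion-word usage) can be sketched in a toy form. The lexicons and function below are invented for illustration only.

```python
# Illustrative sketch (not the study's actual pipeline): turn one free-form
# interview answer into simple numeric features the article describes as
# informative -- answer length and emotion-word usage. Toy lexicons.

SADNESS_WORDS = {"lonely", "sad", "alone", "empty", "miss"}
JOY_WORDS = {"happy", "glad", "joy", "enjoy", "love"}

def extract_features(response: str) -> dict:
    """Extract a small numeric feature vector from a free-form answer."""
    tokens = [t.strip(".,!?").lower() for t in response.split()]
    n = len(tokens)
    return {
        "length": n,  # lonelier respondents gave longer answers in the study
        "sadness_ratio": sum(t in SADNESS_WORDS for t in tokens) / max(n, 1),
        "joy_ratio": sum(t in JOY_WORDS for t in tokens) / max(n, 1),
    }

feats = extract_features("I feel alone most evenings and I miss my friends.")
print(feats)
```

In a real system, features like these would feed a trained classifier rather than being read off directly, and the emotion lexicons would come from validated instruments rather than a hand-written set.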
Updesh Dosanjh, Practice Leader, Technology Solutions, IQVIA – Interview Series
Updesh Dosanjh is Practice Leader of Technology Solutions at IQVIA, a world leader in using data, technology, advanced analytics, and expertise to help customers drive healthcare – and human health – forward.
What is it that drew you initially to life sciences?
I’ve worked in multiple industries over the last 30 years, including the life sciences industry at the start of my career. When I chose to come back to the life sciences industry 15 years ago, it was to achieve three ambitions: work in an industry that contributed to the well-being of people; work in an area of industry that could be significantly helped by technology; and work in an industry that gave me the chance to work with nice people. Working with a pharmacovigilance team in life sciences has helped me to meet all three of these goals.
Can you discuss what human data science is and its importance to IQVIA?
The volume of human health data is growing rapidly: up more than 878 percent since 2016. Increasingly, advanced analytics are needed to bring the insights it contains to light. Data science and technology are progressing rapidly; however, there continue to be challenges with the collection and analysis of structured and unstructured data, especially when it comes from disparate and siloed data sources.
The emerging discipline of human data science integrates the study of human science with breakthroughs in data technology to tap into the potential value big data can provide in advancing the understanding of human health. In essence, the human data scientist serves as a translator between the world of the clinician and the world of the data specialist. This new paradigm is helping to tackle the challenges facing 21st-century health care.
IQVIA is uniquely positioned to collect, protect, classify and study the data that helps us answer questions about human health. As a leader in human data science, IQVIA has a deep level of life sciences expertise as well as sophisticated analytical capabilities to glean insights from a plethora of data points that can help life science customers bring new medications to market faster and drive toward better health outcomes. By understanding today’s challenges and being creative about how new innovations can accelerate new answers, IQVIA has leaned into the concept of human data science—transforming the way the life sciences industry finds patients, diagnoses illness, and treats conditions.
How can AI best assist drug researchers in narrowing down which specific drugs deserve more industry resources?
Bringing new medications to market is incredibly costly and time-consuming—on average, it takes about 10 years and costs $2.6 billion to do so. When drug developers explore a molecule’s potential to treat or prevent a disease, they analyze any available data relevant to that molecule, which requires significant time and resources. Furthermore, once a drug is introduced and brought to market, companies are responsible for pharmacovigilance in which they need to leverage technology to monitor adverse events (AEs)—any undesirable experiences associated with the use of a given medication—thus helping to ensure patient safety.
Artificial intelligence (AI) tools can help life sciences organizations automate manual data processing tasks to look for and track patterns within data. Rather than having to manually sift through hundreds or thousands of data points to uncover the most relevant insights pertaining to a particular treatment, AI can help life sciences teams effectively uncover the most important information and bring it to the forefront for further exploration and actionable insights. This ensures more time and resources from life science teams are reserved for strategic analysis and decision-making rather than for data reporting.
Life sciences companies are under more pressure than ever to innovate, as they strive to advance global health and stay competitive in a highly saturated marketplace. Natural language processing (NLP) is currently being leveraged by life science companies to help mine and “read” unstructured, text-based documents. However, there is still significant untapped potential for leveraging NLP in pharmacovigilance to further protect patient safety, as well as assure regulatory compliance. NLP has the potential to meet evolving compliance requirements, understand new data sources, and elevate new opportunities to drive innovation. It does so by combining and comparing AEs from decades of statistical legacy data and new incoming patient data–which can be processed in real-time—giving an unprecedented amount of visibility and clarity around information being mined from critical data sources.
Pharmacovigilance (the detection, collection, assessment, monitoring, and prevention of adverse effects with pharmaceutical products) is increasingly reliant on AI. Can you discuss some of the efforts being applied by IQVIA towards this?
As mentioned, one of the primary roles of pharmacovigilance (PV) departments is collecting and analyzing information on AEs. Today, approximately 80 percent of healthcare data resides in unstructured formats, like emails and paper documents, and AEs need to be aggregated and correlated from disparate and expansive data sources, including social media, online communities and other digital formats. What is more, language is subjective, and definitions are fluid. Although two patients taking the same medication may describe similar AE reactions, each patient may experience, measure, and describe pain or discomfort levels on a dynamic scale based on various factors. PV and safety professionals working at life sciences organizations that still rely on manual data reporting and processing need to review these extensive, varied, and complex data sets via inefficient processes. This not only slows down clinical trials but also potentially delays the introduction of new drugs to the marketplace, preventing patients from getting access to potentially life-saving medications.
The life sciences industry is highly data-driven, and there is no better ally for data analysis and pattern detection than AI. These tools are especially useful in processing and extrapolating large, complex PV data sets to help automate manual workloads and make the best use of the human assets on safety teams. Indeed, the adoption of AI and NLP tools within the life sciences industry is making it possible to take these large, unstructured data sets and turn them into actionable insights at unprecedented speed. Here are a few of the ways AI can improve operational efficiency for PV teams, which IQVIA actively delivers to its customers today:
- Speed literature searches for relevant information
- Scan social media across the globe to pinpoint AEs
- Listen to audio calls (e.g. into a call center) for mentions of a company or drug
- Translate large amounts of information from one language into another
- Transform scanned documents on AEs into actionable information
- Read and interpret case narratives with minimal human guidance
- Determine whether any patterns in adverse reaction data are providing new, previously unrealized information that could improve patient safety
- Automate case follow-ups to verify information and capture any missing data
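One of the items above, scanning text sources for adverse-event mentions, can be sketched in a toy form. A real pharmacovigilance pipeline such as IQVIA's would use trained NLP models and medical terminologies; the lexicon and function below are hypothetical and purely illustrative.

```python
# Toy sketch of scanning posts for adverse-event (AE) mentions. A production
# system would use trained NLP models and standardized medical vocabularies;
# this version just flags posts containing terms from a hand-written lexicon.

AE_TERMS = {"nausea", "headache", "dizziness", "rash"}  # hypothetical lexicon

def flag_ae_mentions(posts: list) -> list:
    """Return (post index, matched AE terms) for posts mentioning any AE."""
    hits = []
    for i, post in enumerate(posts):
        tokens = {t.strip(".,!?").lower() for t in post.split()}
        matched = tokens & AE_TERMS
        if matched:
            hits.append((i, matched))
    return hits

posts = [
    "Loving the new phone!",
    "Started DrugX last week and now I have constant nausea.",
    "DrugX gave me a mild headache and some dizziness.",
]
print(flag_ae_mentions(posts))
```

Even this crude matching shows why automation matters: the same loop runs unchanged over thousands of posts, leaving human reviewers only the flagged cases.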
Is there anything else you would like to share about IQVIA?
IQVIA leverages its large data sets, advanced technology, and deep domain expertise to deliver AI tools that are specifically built and trained for the life sciences industry. This unique combination of attributes has contributed to the successful implementation of IQVIA technology across a wide array of industry players, supporting integrated global compliance efforts for the industry as well as improved patient safety.
Thank you for the great interview. Readers who wish to learn more should visit IQVIA.
AI Algorithms Can Enhance the Creation of Bioscaffold Materials and Help Heal Wounds
Artificial intelligence and machine learning could help heal injuries by boosting the development speed of 3D printed bioscaffolds. Bioscaffolds are materials that allow organic objects, like skin and organs, to grow on them. Recent work done by researchers at Rice University applied AI algorithms to the development of bioscaffold materials, with the goal of predicting the quality of printed materials. The researchers found that controlling the speed of the printing is crucial to the development of useful bioscaffold implants.
As reported by ScienceDaily, a team of researchers from Rice University collaborated to use machine learning to identify possible improvements to bioscaffold materials. Computer scientist Lydia Kavraki, from the Brown School of Engineering at Rice, led a research team that applied machine learning algorithms to predict scaffold material quality. The study was co-authored by Rice bioengineer Antonios Mikos, who works on bone-like bioscaffolds that serve as tissue replacements, intended to support the growth of blood vessels and cells and enable wounded tissue to heal more quickly. The bioscaffolds Mikos works on are intended to heal musculoskeletal and craniofacial wounds, and they are produced with the assistance of 3D printing techniques that fit the scaffold to the perimeter of a given wound.
The process of 3D printing bioscaffold material requires a lot of trial and error to get the printed batch just right. Various parameters like material composition, structure, and spacing must be taken into account. The application of machine learning techniques can reduce much of this trial and error, giving the engineers useful guidelines that reduce the need to fiddle around with parameters. Kavraki and other researchers were able to give the bioengineering team feedback on which parameters were most important, those most likely to impact the quality of the printed material.
The research team started by analyzing data on printing scaffolds from a 2016 study on biodegradable polypropylene fumarate. Beyond this data, the researchers came up with a set of variables that would help them design a machine learning classifier. Once all the necessary data was collected, the researchers were able to design models, test them, and get the results published in just over half a year.
In terms of the machine learning models used by the research team, the team experimented with two approaches, both based on random forest algorithms, which aggregate many decision trees to achieve a more robust and accurate model. One model was a binary classifier that predicted whether a particular set of parameters would result in a low- or high-quality product, while the second used a regression method to estimate which parameter values would give a high-quality result.
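To make the binary-classification idea concrete: the paper's models were random forests, which aggregate many decision trees, and the building block of such a tree is a single threshold split on one parameter. The sketch below finds the best such split ("stump") on invented toy data; the parameter values and quality labels are not from the study.

```python
# Toy sketch of the binary-classification idea behind the study's random
# forests: a single decision "stump" (one threshold on one parameter). A
# forest aggregates many such trees. All data values here are invented.

# (print_speed_mm_s, material_fraction) -> 1 = high quality, 0 = low quality
samples = [((2.0, 0.30), 1), ((2.5, 0.40), 1), ((3.0, 0.35), 1),
           ((6.0, 0.30), 0), ((7.0, 0.50), 0), ((8.0, 0.45), 0)]

def best_stump(data):
    """Find the (feature, threshold) split that best separates the classes,
    predicting 'high quality' when the feature value is <= the threshold."""
    best = None
    for f in range(len(data[0][0])):
        for x, _ in data:
            t = x[f]
            errs = sum((xi[f] <= t) != bool(y) for xi, y in data)
            if best is None or errs < best[0]:
                best = (errs, f, t)
    return best  # (misclassifications, feature index, threshold)

errs, feat, thresh = best_stump(samples)
print(feat, thresh, errs)  # splits on print speed with zero errors
```

On this toy data the stump picks print speed as the splitting feature, loosely echoing the study's finding that print speed was the most informative parameter; a random forest would repeat this search over many bootstrapped samples and feature subsets.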
According to the results of the research, the most important parameters for high-quality bioscaffolds were spacing, layering, pressure, material composition, and print speed. Print speed was the most important variable overall, followed by material composition. It's hoped that the results of the study will lead to better, faster printing of bioscaffolds, thereby enhancing the reliability of 3D printing body parts like cartilage, kneecaps, and jawbones.
According to Kavraki, the methods used by the research team have the potential to be used at other labs. As Kavraki was quoted by ScienceDaily:
“In the long run, labs should be able to understand which of their materials can give them different kinds of printed scaffolds, and in the very long run, even predict results for materials they have not tried. We don’t have enough data to do that right now, but at some point we think we should be able to generate such models.”