Jeffrey Eyestone is CognitiveScale’s Healthcare AI Advisor. In this role, Jeff works with healthcare organizations (primarily providers/health systems, payers, and technology vendors) on their AI journey—from strategic insight into how to develop AI competencies and centers of excellence, to more tactical development of AI roadmaps and delivery of AI solutions.
Can you discuss why you believe AI is so important for the healthcare industry?
Massive amounts of complex, disparate and distributed data form the foundation for the underlying clinical, administrative and financial processes that drive healthcare. So, AI is now a core capability across the entire healthcare information technology value chain. Knowledge workers like clinicians and researchers are realizing significant improvements from AI — especially the “augmented intelligence” subset of AI.
AI in healthcare is a huge topic and there are many valuable use cases from sourcing data to deriving insights, from improving or automating processes to better patient engagement and the delivery of personalized care. Clinical outcomes are improving, efficiencies are driving cost savings, and there are many more use cases in the works that promise to drive value. The current global pandemic has put many more healthcare AI use cases in sharp focus, combining community and individual risk scores and insights with intelligent interventions, for example – all of which will improve healthcare systems all over the world well after we have beaten COVID-19.
How would you define augmented intelligence in a healthcare setting?
CognitiveScale focuses primarily on augmented intelligence. Some AI technologies, like robotic process automation (RPA) or chatbots, seek to replace people with machines (much of the time, anyway). We are focused, primarily, on AI solutions that help healthcare organizations and staff work smarter and more efficiently. We are also focused on cognitive solutions—the more advanced subset of AI that has a learning or feedback loop—and often it is the “humans in the loop” who provide the feedback for models to learn from. There are numerous examples of augmented intelligence in healthcare that are starting to deliver impressive results. For example, in radiology and pathology, clinicians are augmenting their ability to read images and make diagnoses with machine learning models, enabling earlier detection and more accurate diagnoses and lab results.
You’ve worked with both startups and large organizations that have implemented different AI strategies. What are some common mistakes that you’ve seen?
Just as cognitive AI solutions learn over time and mature, so has our understanding of the power of AI, and the pitfalls. The time required to get access to data, to prep the data, and then to train models on a representative body of data has usually been longer than originally thought. Other mistakes relate to the operationalization and scale of AI solutions—assuming that a good model can easily be deployed and managed across a healthcare IT ecosystem, when the reality is that many AI models remain unused. But one of the most significant challenges in healthcare AI relates to trust: can models be trusted—are they biased, fair, explainable, and accurate? Numerous headlines have shown that AI solutions can be biased in ways that will get the attention of regulators—or they might be black box solutions that earn skepticism from the providers whose intelligence they are supposed to be augmenting. This is the biggest challenge—and mistake—with healthcare AI that I am seeing lately.
What are your views on genomic profiling?
Genomic profiling is a subset of promising technologies designed to deliver personalized insight into a person, usually for the purpose of their healthcare (vs. genealogy, or the occasional stories about paternity disputes or even crime investigations leveraging this technology). Personalization is a major topic of healthcare AI use cases—how to better engage patients, or augment the intelligence of providers with more personalized and directed insights. Insofar as genomic profiling can help deliver more personalization—and as long as the data and its use are trusted (unbiased, fair, explainable, accurate, etc.)—it will be an important component of personalized medicine, and a foundational element of hyper-personalized AI solutions that leverage genetic information.
Personalizing healthcare seems to be the wave of the future, in what ways do you see this having the most positive impact?
At CognitiveScale we are delivering personalized, predictive, prescriptive healthcare solutions. A couple of examples include intelligent interventions for care managers (a clinical use case) and predicting service inquiries (an administrative/operational use case). Intelligent intervention solutions deliver personalized inferences, predictions, and risk scores (among other model outputs) that augment the work of those managing patients through a care management program. We are also leveraging these capabilities for public health authorities, providers, and health plans trying to manage citizens/patients/members through the COVID-19 crisis. By predicting service inquiries, we are helping healthcare organizations know, the moment a member or provider calls about claims, benefits, etc., the specific reason for the call and how to resolve it far more efficiently, thereby driving cost savings and impacting satisfaction and retention. There are many more healthcare AI use cases focused on personalized solutions. We could write a book on this topic alone.
Can you talk about the challenges of aggregating data from disparate sources such as EMRs, ERPs, patient data, external data sources, etc., into one coherent data system?
Healthcare Information Technology (HCIT) is almost always an ecosystem: a distributed network of disparate systems. A common example is the personal health record (PHR)—the complete data set of a patient’s medical record. Even when a large healthcare system is on one homogenous hospital information system, its patients will likely have other caregivers, they may have insurance that is another source of data, and their lab and pharmacy data may well be spread across several clinics and companies. While there are standard transaction sets for healthcare data exchange and common data models for storing clinical data (along with member, patient, customer, and provider data schemas), healthcare AI solution vendors often need to be able to demonstrate how solutions can leverage several of these at once—internal and external data, data connectivity, and data schemas. Obviously, the foundation of healthcare AI solutions is data. So, data aggregation capabilities must be a core competence of any healthcare AI provider.
What are some of the considerations that are needed with data traceability?
Data traceability is a component of some larger, pressing issues in healthcare AI. For one, data traceability is one of several issues related to privacy, data use, and data exchange. For instance, where is clinical data or personal health information (PHI) going and how is it being used? These issues relate to regulatory and legal aspects of healthcare data security and privacy. These issues, then, are a subset of ethical and trusted AI. Ethical AI would need to account for data use, privacy, regulations and legal aspects, etc., specifically addressing ethical use of data. Trusted AI includes aspects of explainability and data use as well.
You are an advisor with CognitiveScale, can you explain what CognitiveScale does and how you advise them?
CognitiveScale is a provider of AI software that helps organizations build, operationalize, and scale cognitive AI solutions; realize the value of AI across their organizations; and manage trust. In healthcare, we work for some of the largest payer and provider organizations in the country on a wide range of AI use cases, including more recent work in areas like intelligent interventions related to the COVID-19 pandemic and how these solutions will then improve care management, service experience, and more, once we are past this crisis. As our lead healthcare subject matter expert, I help clients and partners more strategically in areas like building out robust AI roadmaps, and more tactically in areas like value realization and optimization. I am also working to help in areas such as product development (healthcare-specific features and capabilities of our platform, for example) and thought leadership, with a focus on the highest-value healthcare AI solutions (given the size of the opportunity).
Could you define for us what the biggest issues are with how AI sometimes operates as a black box, and potential solutions for the healthcare industry?
As I mentioned, trusted, ethical AI is a big challenge—and the trust challenge is largely due to the “black box” problem: a lack of explainability or visibility, and skepticism about issues like bias, fairness, accuracy, and robustness. At CognitiveScale, our Certifai solution specifically addresses this challenge and helps clients with an AI Trust Index and its component parts (each with its own score and insights): bias, fairness, explainability, robustness, and accuracy. Healthcare has had examples of biased models, and of clinician skepticism toward model output due to a lack of transparency or explainability. There are also regulatory requirements around privacy and data use, and around the use of models to deliver fair or unbiased results—and these have made it into the news. We are working with a number of technology and risk management organizations to develop trusted ways to provide visibility and improve confidence in “black box” AI solutions.
What are some ways that we can reduce ER oversaturation through predictive AI?
ER avoidance is really a subset of care optimization and personalized healthcare—the right care at the right time. This may well involve emergency care, but many times it does not. The recent COVID-19 crisis offers a useful example of care optimization: the right care for a high-risk patient in a high-risk community might include clinician outreach, access to a testing center, or in some cases, emergency care. Patients, members, providers, and payers all want the right level of care at the right time in this crisis, so a combination of AI solutions is helping deliver insights such as community and patient risk scores, spread analysis, hospital utilization predictions, and personalized guidance for specific people, among other solutions. We rate the performance of our care management solutions against a number of metrics, including improved outcomes such as ER avoidance when appropriate.
Thank you for the interview. Readers who wish to learn more may visit CognitiveScale.
AI Used To Identify Gene Activation Sequences and Find Disease-Causing Genes
Artificial intelligence is playing a larger role in the science of genomics every day. Recently, a team of researchers from UC San Diego utilized AI to discover a DNA code that could pave the way for controlling gene activation. In addition, researchers from Australia’s national science agency, CSIRO, employed AI algorithms to analyze over one trillion genetic data points, advancing our understanding of the human genome through the localization of specific disease-causing genes.
The human genome, like all DNA, comprises four chemical bases: adenine, guanine, thymine, and cytosine, abbreviated as A, G, T, and C respectively. These four bases are joined together in various combinations that code for different genes. Around one-quarter of all human genes are activated by genetic sequences that read roughly TATAAA, with slight variations. These TATAAA derivatives make up the “TATA box,” a non-coding DNA sequence that plays a role in initiating transcription of the genes it precedes. How the remaining roughly 75% of human genes are activated has been unknown, however, thanks to the overwhelming number of possible base sequence combinations.
As reported by ScienceDaily, researchers from UCSD have managed to identify a DNA activation code that is employed as often as the TATA box, thanks to their use of artificial intelligence. The researchers refer to the DNA activation code as the “downstream core promoter region” (DPR). According to the senior author of the paper detailing the findings, UCSD Biological Sciences professor James Kadonaga, the discovery of the DPR reveals how somewhere between one-quarter and one-third of our genes are activated.
Kadonaga initially discovered a gene activation sequence corresponding to portions of DPR when working with fruit flies in 1996. Since that time, Kadonaga and colleagues have been working on determining which DNA sequences were correlated with DPR activity. The research team began by creating half a million different DNA sequences and determining which sequences displayed DPR activity. Around 200,000 DNA sequences were used to train an AI model that could predict whether or not DPR activity would be witnessed within chunks of human DNA. The model was reportedly highly accurate. Kadonaga described the model’s performance as “absurdly good” and its predictive power “incredible”. The process used to create the model proved so reliable that the researchers ended up creating a similar AI focused on discovering new TATA box occurrences.
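The workflow described above—measure activity for a large library of sequences, then train a model to predict activity for unseen DNA—can be sketched in miniature. This is an illustrative toy, not the UCSD team’s actual model: the sequences, the planted “activation motif,” and the simple logistic regression classifier are all assumptions standing in for the real half-million measured sequences and their model.

```python
# Toy sketch: one-hot encode short DNA sequences and train a classifier
# to predict whether a sequence is "active". Synthetic data only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
BASES = "ACGT"

def one_hot(seq):
    # Map a DNA string to a flat len(seq) x 4 binary vector.
    idx = [BASES.index(b) for b in seq]
    vec = np.zeros((len(seq), 4))
    vec[np.arange(len(seq)), idx] = 1.0
    return vec.ravel()

def make_example(active, length=20):
    seq = "".join(rng.choice(list(BASES), size=length))
    if active:
        # Hypothetical activation motif planted at a fixed position.
        seq = seq[:7] + "TATAAA" + seq[13:]
    return seq

labels = rng.integers(0, 2, size=2000)
X = np.stack([one_hot(make_example(y)) for y in labels])
model = LogisticRegression(max_iter=1000).fit(X, labels)
print("training accuracy:", model.score(X, labels))
```

On this deliberately easy synthetic task the classifier learns the motif almost perfectly; the real problem is far harder, which is why the researchers needed hundreds of thousands of measured sequences.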
In the future, artificial intelligence could be leveraged to analyze DNA sequence patterns and give researchers more insight into how gene activation happens in human cells. Kadonaga believes that, much like how AI was able to help his team of researchers identify the DPR, AI will also assist other scientists in discovering important DNA sequences and structures.
In another use of AI to explore the human genome, as Medical Xpress reports, researchers from Australia’s CSIRO national science agency have used an AI platform called VariantSpark to analyze over one trillion points of genomic data. It’s hoped that the AI-based research will help scientists determine the location of certain disease-related genes.
Traditional methods of analyzing genetic traits can take years to complete, but as CSIRO bioinformatics leader Dr. Denis Bauer explained, AI has the potential to dramatically accelerate this process. VariantSpark is an AI platform that can analyze traits such as susceptibility to certain diseases and determine which genes may influence them. Bauer and other researchers made use of VariantSpark to analyze a synthetic dataset of around 100,000 individuals in just 15 hours. VariantSpark analyzed over ten million variants—more than one trillion genomic data points in total—a task that would take even the fastest competitors using traditional methods thousands of years to complete.
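The core idea—rank a huge number of genetic variants by how strongly they predict a trait—can be illustrated with a toy analogue. VariantSpark itself runs random-forest analysis at population scale on distributed infrastructure; the snippet below is a small, single-machine sketch on synthetic data, with the causal variant position chosen arbitrarily for illustration.

```python
# Toy analogue of variant ranking: fit a random forest on genotype data
# (0/1/2 alternate-allele counts) and rank variants by feature importance.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n_samples, n_variants = 500, 200
genotypes = rng.integers(0, 3, size=(n_samples, n_variants))

# Hypothetical: variant 42 truly influences the trait; the rest are noise.
causal = 42
trait = (genotypes[:, causal] + rng.normal(0, 0.5, n_samples) > 1.5).astype(int)

forest = RandomForestClassifier(n_estimators=200, random_state=0)
forest.fit(genotypes, trait)

# The causal variant should dominate the importance ranking.
top = int(np.argmax(forest.feature_importances_))
print("top-ranked variant:", top)
```

Scaling this idea from 200 variants to tens of millions is exactly where the engineering challenge lies, which is why a distributed platform was needed.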
As Dr. David Hansen, CEO of CSIRO’s Australian e-Health Research Centre, explained via Medical Xpress:
“Despite recent technology breakthroughs with whole-genome sequencing studies, the molecular and genetic origins of complex diseases are still poorly understood which makes prediction, application of appropriate preventive measures and personalized treatment difficult.”
Bauer believes that VariantSpark can be scaled up to population-level datasets and help determine the role genes play in the development of cardiovascular and neurological diseases. Such work could lead to early intervention, personalized treatments, and better health outcomes generally.
Research Shows How AI Can Help Reduce Opioid Use After Surgery
Research coming out of the University of Pennsylvania School of Medicine last month demonstrated how artificial intelligence (AI) can be utilized to fight opioid abuse. It focused on a chatbot that sent reminders to patients who underwent surgery to fix major bone fractures.
The research was published in the Journal of Medical Internet Research.
Christopher Anthony, MD, is the study’s lead author and the associate director of Hip Preservation at Penn Medicine. He is also an assistant professor of Orthopaedic Surgery.
“We showed that opioid medication utilization could be decreased by more than a third in an at-risk patient population by delivering psychotherapy via a chatbot,” he said. “While it must be tested with future investigations, we believe our findings are likely transferable to other patient populations.”
Opioid Use After Surgery
Opioids are an effective treatment for pain following a severe injury, such as a broken arm or leg, but large prescriptions of these drugs can lead to addiction and dependence for many users. This overprescription is a major driver of the opioid epidemic throughout the United States.
The team of researchers believes that a patient-centered approach using the AI chatbot can help reduce the number of opioids taken after such surgeries, making it a potential tool against the epidemic.
Those researchers also included Edward Octavio Rojas, MD, who is a resident in Orthopaedic Surgery at the University of Iowa Hospitals & Clinics. The co-authors included: Valerie Keffala, PhD; Natalie Ann Glass, PhD; Benjamin J. Miller, MD; Mathew Hogue, MD; Michael Wiley, MD; Matthew Karam, MD; John Lawrence Marsh, MD, and Apurva Shah, MD.
The research involved 76 patients who visited a Level 1 Trauma Center at the University of Iowa Hospitals & Clinics. They were there to receive treatment for fractures that required surgery, and those patients were separated into two groups. Both groups received the same prescription for opioids to treat pain, but only one of the groups received daily text messages from the automated chatbot.
The group that received text messages could expect two per day for a period of two weeks following their procedure, starting the day after surgery. The automated chatbot relied on artificial intelligence to send the messages, which were constructed to help patients focus on coping with pain rather than on the medication.
The text messages, which were created by a pain psychologist specializing in acceptance and commitment therapy (ACT), did not directly argue against the use of the medication, but they attempted to help the patients think of something other than taking a pill.
Six Core Principles
The text messages could be broken down into six “core principles”: Values, Acceptance, Present Moment Awareness, Self-As-Context, Committed Action, and Defusion.
One message under the Acceptance principle was: “feelings of pain and feelings about your experience of pain are normal after surgery. Acknowledge and accept these feelings as part of the recovery process. Remember how you feel now is temporary and your healing process will continue. Call to mind pleasant feelings or thoughts you experienced today.”
The results showed that the patients who did not receive the automated messages took, on average, 41 opioid pills following the surgeries, while the group who did receive the messages averaged 26. The 37 percent difference was impressive, and those who received messages also reported less overall pain two weeks after the surgery.
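The reported figure is easy to verify from the averages given: a drop from 41 pills to 26 is a reduction of roughly 37 percent.

```python
# Sanity check on the reported reduction: 41 vs. 26 opioid pills on average.
control_avg, chatbot_avg = 41, 26
reduction = (control_avg - chatbot_avg) / control_avg
print(f"{reduction:.0%}")  # → 37%
```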
The automated messages were not personalized for each individual, suggesting the approach can succeed even without personalization.
“A realistic goal for this type of work is to decrease opioid utilization to as few tablets as possible, with the ultimate goal to eliminate the need for opioid medication in the setting of fracture care,” Anthony said.
The study was funded by a grant from the Orthopaedic Trauma Association.
Samsung Medison & Intel Collaborate to Improve Fetal Safety
According to the World Health Organization, approximately 295,000 women died during and following pregnancy and childbirth in 2017, even as maternal mortality rates have been decreasing. While every pregnancy and birth is unique, most maternal deaths are preventable. Research from the Perinatal Institute found that tracking fetal growth is essential for good prenatal care and can help prevent stillbirths when physicians are able to recognize growth restrictions.
Samsung Medison and Intel are collaborating on new smart workflow solutions to improve obstetric measurements that contribute to maternal and fetal safety and can help save lives. Using an Intel® Core™ i3 processor, the Intel® Distribution of OpenVINO™ toolkit and OpenCV toolkit, Samsung Medison’s BiometryAssist™ automates and simplifies fetal measurements, while LaborAssist™ automatically estimates the fetal angle of progression (AoP) during labor for a complete understanding of a patient’s birthing progress, without the need for invasive digital vaginal exams.
According to Professor Jayoung Kwon, MD, PhD, Division of Maternal Fetal Medicine, Department of Obstetrics and Gynecology, Yonsei University College of Medicine, Yonsei University Health System in Seoul, Korea: “Samsung Medison’s BiometryAssist is a semi-automated fetal biometry measurement system that automatically locates the region of interest and places a caliper for fetal biometry, demonstrating a success rate of 97% to 99% for each parameter. Such high efficacy enables its use in the current clinical practice with high precision.”
“At Intel, we are focused on creating and enabling world-changing technology that enriches the lives of every person on Earth,” said Claire Celeste Carnes, strategic marketing director, Health and Life Sciences, Intel. “We are working with companies like Samsung Medison to adopt the latest technologies in ways that enhance patient safety and improve clinical workflows, in this case for the important and time-sensitive care provided during pregnancy and delivery.”
How It Works
BiometryAssist automates and standardizes fetal measurements in approximately 85 milliseconds with a single click, providing over 97% accuracy. This enables doctors to allocate more time to talking with their patients while also standardizing fetal measurements, which have historically proved challenging to provide with accuracy. With BiometryAssist, physicians can quickly verify consistent measurements for high volumes of patients.
“Samsung is working to improve the efficiency of new diagnostic features, as well as healthcare services, and the Intel Distribution of OpenVINO toolkit and OpenCV toolkit have been a great ally in reaching these goals,” said Won-Chul Bang, corporate vice president and head of Product Strategy, Samsung Medison.
During labor, LaborAssist helps physicians estimate the fetal AoP and head direction. This enables both the physician and patient to understand fetal descent and the labor process and determine the best method for delivery. Delivery always carries risk, and slowed progress can lead to complications for the baby. More accurate, real-time tracking of labor progression can help physicians determine the best mode of delivery and potentially reduce the number of unnecessary cesarean sections.
“LaborAssist provides automatic measurement of the angle of progression as well as information pertaining to fetal head direction and estimated head station. So it is useful for explaining to the patient and her family how the labor is progressing, using ultrasound images which show the change of head station during labor. It is expected to be of great assistance in the assessment of labor progression and decision-making for delivery,” said Professor Min Jeong Oh, MD, PhD, Department of Obstetrics and Gynecology, Korea University Guro Hospital in Seoul, Korea.
BiometryAssist and LaborAssist are already in use in 80 countries, including the United States, Korea, Italy, France, Brazil and Russia. The solutions received Class 2 clearance by the FDA in 2020.
Intel and Samsung Medison will continue to collaborate to advance the state of the art in ultrasounds by accelerating AI and leveraging advanced technology in Samsung Medison’s next-generation ultrasound solutions, including Nerve Tracking, SW Beamforming and AI Module.