

Jeffrey Eyestone, Healthcare AI Advisor at CognitiveScale – Interview Series



Jeffrey Eyestone is CognitiveScale’s Healthcare AI Advisor. In this role, Jeff works with healthcare organizations (primarily providers and health systems, payers, and technology vendors) on their AI journey—from strategic insight into how to develop AI competencies and centers of excellence to more tactical development of AI roadmaps and delivery of AI solutions.

Can you discuss why you believe AI is so important for the healthcare industry? 

Massive amounts of complex, disparate and distributed data form the foundation for the underlying clinical, administrative and financial processes that drive healthcare. So, AI is now a core capability across the entire healthcare information technology value chain. Knowledge workers like clinicians and researchers are realizing significant improvements from AI — especially the “augmented intelligence” subset of AI.

AI in healthcare is a huge topic and there are many valuable use cases from sourcing data to deriving insights, from improving or automating processes to better patient engagement and the delivery of personalized care. Clinical outcomes are improving, efficiencies are driving cost savings, and there are many more use cases in the works that promise to drive value. The current global pandemic has put many more healthcare AI use cases in sharp focus, combining community and individual risk scores and insights with intelligent interventions, for example – all of which will improve healthcare systems all over the world well after we have beaten COVID-19.

 

How would you define augmented intelligence in a healthcare setting?

CognitiveScale focuses primarily on augmented intelligence. Some AI technologies, like robotic process automation (RPA) and chatbots, seek to replace people with machines (much of the time, anyway). We are focused, primarily, on AI solutions that help healthcare organizations and staff work smarter and more efficiently. We are also focused on cognitive solutions—the more advanced subset of AI that has a learning or feedback loop—and often it is the “humans in the loop” who provide the feedback for models to learn from. There are numerous examples of augmented intelligence in healthcare that are starting to deliver impressive results. For example, in radiology and pathology, clinicians are augmenting their ability to read images and make diagnoses with machine learning models, enabling earlier detection and more accurate diagnoses and lab results.

 

You’ve worked with both startups and large organizations that have implemented different AI strategies. What are some common mistakes that you’ve seen?

Just as cognitive AI solutions learn over time and mature, so has our understanding of the power of AI, and of its pitfalls. Getting access to data, prepping it, and then training models on a full census of data has usually taken longer than originally thought. Other mistakes relate to the operationalization and scaling of AI solutions—assuming that a good model can easily be deployed and managed across a healthcare IT ecosystem, when the reality is that many AI models go unused. But one of the most significant challenges in healthcare AI relates to trust: can models be trusted? Are they biased, fair, explainable, and accurate? Numerous headlines have shown that AI solutions can be biased in ways that will get the attention of regulators—or they may be black-box solutions that earn skepticism from the providers whose intelligence they are supposed to be augmenting. This is the biggest challenge, and mistake, that I am seeing in healthcare AI lately.

 

What are your views on genomic profiling?

Genomic profiling is one of a set of promising technologies designed to deliver personalized insight into a person, usually for the purpose of their healthcare (as opposed to genealogy, though we occasionally see stories about paternity disputes or even criminal investigations leveraging this technology). Personalization is a major theme of healthcare AI use cases—how to better engage patients, or how to augment the intelligence of providers with more personalized and directed insights. Insofar as genomic profiling can deliver more personalization, and as long as the data and its use are trusted (unbiased, fair, explainable, accurate, etc.), it will be an important component of personalized medicine—and a foundational element of hyper-personalized AI solutions that leverage genetic information.

 

Personalizing healthcare seems to be the wave of the future. In what ways do you see this having the most positive impact?

At CognitiveScale we are delivering personalized, predictive, prescriptive healthcare solutions. A couple of examples include intelligent interventions for care managers (a clinical use case) and predicting service inquiries (an administrative and operational use case). Intelligent intervention solutions deliver personalized inferences, predictions, and risk scores (among other model outputs) that augment the work of those managing patients through a care management program. We are also leveraging these capabilities for public health authorities, providers, and health plans trying to manage citizens, patients, and members through the COVID-19 crisis. By predicting service inquiries, we help healthcare organizations know, the moment a member or provider calls about claims, benefits, etc., the specific reason for the call and how to resolve it far more efficiently, thereby driving cost savings and improving satisfaction and retention. There are many more healthcare AI use cases focused on personalized solutions; we could write a book on this topic alone.
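To make the idea of an intelligent intervention concrete, here is a minimal, purely illustrative sketch of how a personalized risk score might be mapped to a next-best-action for a care manager. The features, weights, and thresholds below are hypothetical assumptions for illustration, not CognitiveScale's actual models.

```python
import math

# Hypothetical sketch: a toy logistic-style risk score mapped to a suggested
# intervention. Features, weights, and thresholds are illustrative only.

def readmission_risk(age, chronic_conditions, recent_er_visits):
    """Return a risk score in [0, 1] from a toy logistic model."""
    z = -4.0 + 0.03 * age + 0.6 * chronic_conditions + 0.9 * recent_er_visits
    return 1.0 / (1.0 + math.exp(-z))

def suggest_intervention(risk):
    """Map a risk score to a next-best-action for a care manager."""
    if risk >= 0.7:
        return "schedule clinician outreach within 24 hours"
    if risk >= 0.4:
        return "enroll in remote monitoring program"
    return "send educational self-care materials"

risk = readmission_risk(age=67, chronic_conditions=3, recent_er_visits=2)
print(f"risk={risk:.2f} -> {suggest_intervention(risk)}")  # risk=0.83 -> outreach
```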

 

Can you talk about the challenges of aggregating data from disparate sources such as EMRs, ERPs, patient data, external data sources, etc., into one coherent data system?

Healthcare Information Technology (HCIT) is almost always an ecosystem: a distributed network of disparate systems. A common example is the personal health record (PHR)—the complete data set of a patient’s medical record. Even when a large healthcare system runs on one homogenous hospital information system, its patients will likely have other caregivers, they may have insurance that is another source of data, and their lab and pharmacy data may well be spread across several clinics and companies. While there are standard transaction sets for healthcare data exchange and common data models for storing clinical data (along with member, patient, customer, and provider data schemas), healthcare AI vendors often need to demonstrate that their solutions can leverage several of these at once—internal and external data, data connectivity, and data schemas. Obviously, the foundation of healthcare AI solutions is data, so data aggregation must be a core competence of any healthcare AI provider.
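As a minimal sketch of what such aggregation involves, the snippet below merges hypothetical EMR, claims, and pharmacy feeds into one unified record per patient. The source structures and field names are assumptions for illustration; real systems would rely on standards such as HL7 or FHIR and far richer schemas.

```python
from collections import defaultdict

# Hypothetical feeds standing in for EMR, claims, and pharmacy systems.
emr      = [{"patient_id": "p1", "diagnosis": "type 2 diabetes"}]
claims   = [{"patient_id": "p1", "payer": "Acme Health", "claim_total": 1280.50}]
pharmacy = [{"patient_id": "p1", "rx": "metformin 500mg"}]

def aggregate(*sources):
    """Merge records from each source into one dict per patient."""
    unified = defaultdict(dict)
    for source in sources:
        for record in source:
            pid = record["patient_id"]
            unified[pid].update(
                {k: v for k, v in record.items() if k != "patient_id"}
            )
    return dict(unified)

print(aggregate(emr, claims, pharmacy))
# {'p1': {'diagnosis': 'type 2 diabetes', 'payer': 'Acme Health',
#         'claim_total': 1280.5, 'rx': 'metformin 500mg'}}
```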

 

What are some of the considerations that are needed with data traceability?

Data traceability is a component of some larger, pressing issues in healthcare AI. For one, it is among several concerns related to privacy, data use, and data exchange: where is clinical data or personal health information (PHI) going, and how is it being used? These concerns touch the regulatory and legal aspects of healthcare data security and privacy, and they are in turn a subset of ethical and trusted AI. Ethical AI needs to account for data use, privacy, regulations, and legal considerations, specifically addressing the ethical use of data; trusted AI includes aspects of explainability and data use as well.
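One common pattern for traceability is to attach a provenance log to each data element, recording where it came from and how it has been used. The sketch below is a simplified assumption of how such a lineage record might look, not any particular product's implementation.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class TracedValue:
    """A data element that carries its own audit trail."""
    value: object
    lineage: list = field(default_factory=list)

    def record(self, source, action):
        """Append an auditable provenance entry."""
        self.lineage.append({
            "source": source,
            "action": action,
            "at": datetime.now(timezone.utc).isoformat(),
        })

a1c = TracedValue(value=7.2)  # e.g. a lab result such as an HbA1c reading
a1c.record(source="lab_feed", action="ingested from hospital lab system")
a1c.record(source="risk_model_v3", action="used as a model feature")
for entry in a1c.lineage:
    print(entry)
```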

 

You are an advisor with CognitiveScale. Can you explain what CognitiveScale does and how you advise them?

CognitiveScale is a provider of AI software that helps organizations build, operationalize, and scale cognitive AI solutions; realize the value of AI across their organizations; and manage trust. In healthcare, we work with some of the largest payer and provider organizations in the country on a wide range of AI use cases, including more recent work in areas like intelligent interventions related to the COVID-19 pandemic, and how these solutions will then improve care management, service experience, and more once we are past this crisis. As our lead healthcare subject matter expert, I help clients and partners strategically, in areas like building out robust AI roadmaps, and tactically, in areas like value realization and optimization. I am also working to help in areas such as product development (healthcare-specific features and capabilities of our platform, for example) and thought leadership, with a focus on the highest-value healthcare AI solutions, given the size of the opportunity.

 

Could you define for us what the biggest issues are with how AI sometimes operates as a black box, and potential solutions for the healthcare industry?

As I mentioned, trusted, ethical AI is a big challenge—and the trust challenge is largely due to the “black box” problem: a lack of explainability or visibility, and skepticism about issues like bias, fairness, accuracy, and robustness. At CognitiveScale, our Certifai product specifically addresses this challenge and gives clients an AI Trust Index along with its component parts (each with its own score and insights): bias, fairness, explainability, robustness, and accuracy. Healthcare has had examples of biased models, and of clinician skepticism toward model output due to a lack of transparency or explainability. There are also regulatory requirements around privacy and data use, and around the use of models to deliver fair, unbiased results—and these issues have made it into the news. We are working with a number of technology and risk management organizations to develop trusted ways to provide visibility into, and improve confidence in, “black box” AI solutions.
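As a rough illustration of how component scores might roll up into a single index, here is a simplified sketch. The components mirror the five named above, but the 0-100 scale, equal weighting, and numbers are assumptions for illustration, not Certifai's actual methodology.

```python
# Toy composite score: a weighted average of component trust scores.
def trust_index(scores, weights=None):
    """Return a weighted average of component scores (each 0-100)."""
    weights = weights or {name: 1.0 for name in scores}
    total = sum(weights[name] for name in scores)
    return sum(scores[name] * weights[name] for name in scores) / total

components = {
    "bias": 82,
    "fairness": 78,
    "explainability": 65,
    "robustness": 88,
    "accuracy": 91,
}
print(f"AI Trust Index: {trust_index(components):.1f}")  # 80.8
```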

 

What are some ways that we can reduce ER oversaturation through predictive AI?

ER avoidance is really a subset of care optimization and personalized healthcare—the right care at the right time. That may well involve emergency care, but many times it does not. The recent COVID-19 crisis highlights a useful example of care optimization: the right care for a high-risk patient in a high-risk community might include clinician outreach, access to a testing center, or, in some cases, emergency care. Patients, members, providers, and payers all want the right level of care at the right time in this crisis, so a combination of AI solutions is helping deliver insights such as community and patient risk scores, spread analysis, hospital utilization predictions, and personalized guidance for specific people, among other solutions. We rate the performance of our care management solutions against a number of performance metrics, including improved outcomes such as ER avoidance when appropriate.

Thank you for the interview. Readers who wish to learn more may visit CognitiveScale.




Cognoa Seeks FDA Clearance for Digital Autism Diagnostic Device After Successful Study



 Cognoa, the leading pediatric behavioral health company developing diagnostic and therapeutic solutions for children living with autism and other behavioral health conditions, announced today that after surpassing all FDA targets in the pivotal study, the company will be submitting its autism spectrum disorder (ASD) diagnostic to the FDA for clearance. Cognoa’s diagnostic was previously granted Breakthrough Device Designation by the FDA in October 2018.

Cognoa seeks to introduce a new, efficient, and accurate approach to diagnosing ASD in the primary care setting, using artificial intelligence (AI) to provide a new paradigm of care that empowers pediatricians. Currently, pediatricians refer most children with suspected developmental delay to specialists to diagnose and prescribe treatment. This often results in children and families facing an arduous process, forcing families to wait months or even years before their child receives an initial diagnosis of ASD and can start life-changing therapy. Cognoa’s solution is positioned to fundamentally change this standard of care by reducing wait times to diagnosis, thereby allowing early intervention to begin during critical neurodevelopmental windows. Early intervention has been shown to improve lifelong outcomes for children living with autism and their families.

“The data from our pivotal study was strong, and we are incredibly excited to submit a de novo request for FDA clearance of Cognoa’s ASD Diagnostic,” said David Happel, CEO of Cognoa. “The accuracy of our autism diagnostic solution is unparalleled, exceeding all pre-specified endpoints, and we are looking forward to a priority review. Cognoa’s mission is to improve the lives of children and families living with autism and helping pediatricians diagnose autism within the primary care setting is a vital first step.”

If cleared by the FDA, Cognoa’s ASD Diagnostic will be crucial in helping the approximately 64,000 general pediatricians across the U.S. rule out or diagnose autism – enabling early intervention and supporting improved lifelong outcomes for children, in line with the American Academy of Pediatrics (AAP) updated ASD guidelines as of January 2020. This will also streamline the autism care journey for children and families, as specialists will be able to focus on children with more complex diagnoses.

“There is a significant unmet need for early ASD diagnosis in the pediatric primary care setting,” said Dr. Colleen Kraft, former AAP President and Senior Medical Director of Clinical Adoption at Cognoa. “A clinically validated, FDA-cleared digital assessment platform would empower pediatricians to take definitive action on parental concerns. They would be able to diagnose ASD much more efficiently, with actionable information to drive the clinical management of the 1 in every 54 children with ASD and ensure that these children receive access to the appropriate care and treatment.”

 

The Pivotal Study

Cognoa’s ASD Diagnostic surpassed its targeted benchmarks in a trial involving 425 participants – aged 18 to 72 months – whose caregivers or pediatricians had expressed concern about their development but who had never been formally evaluated for or diagnosed with autism.

The pivotal study ran from July 2019 through May 2020 and was a multi-site, prospective, double-blinded, active comparator cohort study conducted at 14 sites across the U.S. The study evaluated the ability of Cognoa’s ASD Diagnostic device to aid in the diagnosis of ASD by comparing its diagnostic output with the clinical reference standard: a diagnosis made by a specialist clinician based on DSM-5 criteria and validated by one or more reviewing specialist clinicians. This approach was taken to evaluate the accuracy of Cognoa’s investigational device as measured by how often, in the study population, it correctly identifies a patient with ASD, and how often it correctly determines that a patient does not have ASD.
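In other words, the two accuracy measures described are sensitivity and specificity. The short example below computes both from a hypothetical confusion matrix; the counts are invented for illustration and are not Cognoa's study results.

```python
def sensitivity(tp, fn):
    """Share of children with ASD the device correctly identifies."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Share of children without ASD the device correctly rules out."""
    return tn / (tn + fp)

# Illustrative counts only: true/false positives and negatives.
tp, fn, tn, fp = 90, 10, 160, 40
print(f"sensitivity = {sensitivity(tp, fn):.0%}")  # 90%
print(f"specificity = {specificity(tn, fp):.0%}")  # 80%
```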

As part of the study, caregivers provided information about their child’s behavior by completing a questionnaire and uploading two short videos using Cognoa’s mobile app. In addition, participating children and their caregivers completed two doctor’s appointments (one with a primary care physician and one with a pediatric specialist). A number of the primary care appointments were completed via telemedicine, and the study found that the investigational device performed equally well when administered remotely. The trial also showed that Cognoa’s diagnostic device is highly accurate across males and females as well as ethnic and racial backgrounds, addressing a longstanding issue of disparities in autism diagnoses.

The pivotal study results are being prepared for publication in a peer-reviewed journal.



AI Used To Identify Gene Activation Sequences and Find Disease-Causing Genes



Artificial intelligence is playing a larger role in the science of genomics every day. Recently, a team of researchers from UC San Diego utilized AI to discover a DNA code that could pave the way for controlling gene activation. In addition, researchers from Australia’s national science agency, CSIRO, employed AI algorithms to analyze over one trillion genetic data points, advancing our understanding of the human genome through the localization of specific disease-causing genes.

The human genome, and all DNA, comprises four different chemical bases: adenine, guanine, thymine, and cytosine, abbreviated as A, G, T, and C respectively. These four bases are joined together in various combinations that code for different genes. Around one-quarter of all human genes are activated by genetic sequences that are roughly TATAAA, with slight variations. These TATAAA derivatives make up the “TATA box,” a non-coding DNA sequence that plays a role in initiating the transcription of the genes it precedes. However, it remains unknown how the other roughly 75% of human genes are activated, thanks to the overwhelming number of possible base sequence combinations.

As reported by ScienceDaily, researchers from UCSD have managed to identify a DNA activation code that is used about as often as the TATA box, thanks to their use of artificial intelligence. The researchers refer to the DNA activation code as the “downstream core promoter region” (DPR). According to the senior author of the paper detailing the findings, UCSD Biological Sciences professor James Kadonaga, the discovery of the DPR reveals how somewhere between one-quarter and one-third of our genes are activated.

Kadonaga initially discovered a gene activation sequence corresponding to portions of the DPR when working with fruit flies in 1996. Since that time, Kadonaga and colleagues have been working to determine which DNA sequences correlate with DPR activity. The research team began by creating half a million different DNA sequences and determining which of them displayed DPR activity. Around 200,000 of these DNA sequences were then used to train an AI model to predict whether or not DPR activity would be observed within chunks of human DNA. The model was reportedly highly accurate: Kadonaga described its performance as “absurdly good” and its predictive power “incredible.” The process used to create the model proved so reliable that the researchers went on to create a similar AI focused on discovering new TATA box occurrences.
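The general recipe, predicting promoter activity from raw sequence, can be illustrated with a deliberately tiny sketch: one-hot encode fixed-length DNA strings and fit a classifier. The random data and simple model below are stand-ins; the article does not describe the UCSD team's actual architecture or training pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

BASES = {"A": 0, "C": 1, "G": 2, "T": 3}

def one_hot(seq):
    """One-hot encode a DNA string into a flat feature vector."""
    vec = np.zeros((len(seq), 4))
    for i, base in enumerate(seq):
        vec[i, BASES[base]] = 1.0
    return vec.ravel()

rng = np.random.default_rng(0)
seqs = ["".join(rng.choice(list("ACGT"), 20)) for _ in range(1000)]
X = np.array([one_hot(s) for s in seqs])
y = rng.integers(0, 2, size=1000)  # placeholder activity labels

model = LogisticRegression(max_iter=1000).fit(X, y)
print("training accuracy:", model.score(X, y))
```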

In the future, artificial intelligence could be leveraged to analyze DNA sequence patterns and give researchers more insight into how gene activation happens in human cells. Kadonaga believes that, much like how AI was able to help his team of researchers identify the DPR, AI will also assist other scientists in discovering important DNA sequences and structures.

In another use of AI to explore the human genome, as MedicalExpress reports, researchers from Australia’s CSIRO national science agency have used an AI platform called VariantSpark to analyze over one trillion points of genomic data. It is hoped that the AI-based research will help scientists determine the location of certain disease-related genes.

Traditional methods of analyzing genetic traits can take years to complete, but as CSIRO bioinformatics leader Dr. Denis Bauer explained, AI has the potential to dramatically accelerate this process. VariantSpark is an AI platform that can analyze traits such as susceptibility to certain diseases and determine which genes may influence them. Bauer and other researchers made use of VariantSpark to analyze a synthetic dataset of around 100,000 individuals in just 15 hours. VariantSpark analyzed over ten million variants across those individuals (more than one trillion genomic data points in total), a task that would take even the fastest competitors using traditional methods thousands of years to complete.
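VariantSpark is built around a random-forest approach: fit a forest over a very wide genotype matrix and rank variants by how strongly they predict a trait. The scaled-down sketch below illustrates that general idea with scikit-learn on synthetic data; it does not use VariantSpark's actual Spark-based API or CSIRO's dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
n_individuals, n_variants = 500, 200
X = rng.integers(0, 3, size=(n_individuals, n_variants))  # genotypes 0/1/2

# Synthetic trait driven by variant 17, so the forest has signal to find.
y = (X[:, 17] + rng.normal(0, 0.5, n_individuals) > 1.5).astype(int)

forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
top = np.argsort(forest.feature_importances_)[::-1][:5]
print("top-ranked variants:", top)  # variant 17 should rank first
```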

As Dr. David Hansen, CEO of the CSIRO Australian e-Health Research Centre, explained via MedicalExpress:

“Despite recent technology breakthroughs with whole-genome sequencing studies, the molecular and genetic origins of complex diseases are still poorly understood which makes prediction, application of appropriate preventive measures and personalized treatment difficult.”

Bauer believes that VariantSpark can be scaled up to population-level datasets and help determine the role genes play in the development of cardiovascular and neurological diseases. Such work could lead to early intervention, personalized treatments, and better health outcomes generally.



Research Shows How AI Can Help Reduce Opioid Use After Surgery


Research coming out of the University of Pennsylvania School of Medicine last month demonstrated how artificial intelligence (AI) can be utilized to fight against opioid abuse. It focused on a chatbot that sent reminders to patients who underwent surgery to fix major bone fractures.

The research was published in the Journal of Medical Internet Research.

Christopher Anthony, MD, is the study’s lead author and the associate director of Hip Preservation at Penn Medicine. He is also an assistant professor of Orthopaedic Surgery. 

“We showed that opioid medication utilization could be decreased by more than a third in an at-risk patient population by delivering psychotherapy via a chatbot,” he said. “While it must be tested with future investigations, we believe our findings are likely transferable to other patient populations.”

Opioid Use After Surgery

Opioids are an effective treatment for pain following a severe injury, such as a broken arm or leg, but large prescriptions of the drugs can lead to addiction and dependence for many users. This over-prescription has helped fuel the major opioid epidemic throughout the United States.

The team of researchers believes that a patient-centered approach using the AI chatbot can help reduce the number of opioids taken after such surgeries, making it a potential tool against the epidemic.

Those researchers also included Edward Octavio Rojas, MD, a resident in Orthopaedic Surgery at the University of Iowa Hospitals & Clinics. The co-authors included: Valerie Keffala, PhD; Natalie Ann Glass, PhD; Benjamin J. Miller, MD; Mathew Hogue, MD; Michael Wiley, MD; Matthew Karam, MD; John Lawrence Marsh, MD; and Apurva Shah, MD.

The Experiment

The research involved 76 patients who visited a Level 1 Trauma Center at the University of Iowa Hospitals & Clinics. They were there to receive treatment for fractures that required surgery, and those patients were separated into two groups. Both groups received the same prescription for opioids to treat pain, but only one of the groups received daily text messages from the automated chatbot. 

The group that received text messages could expect two per day for a period of two weeks following their procedure, with the first messages going out the day after surgery. The automated chatbot relied on artificial intelligence to send the messages, which were constructed to help patients focus on coping with pain rather than on the medication.

The text messages, which were created by a pain psychologist specializing in acceptance and commitment therapy (ACT), did not directly discourage use of the medication, but they attempted to help the patients think of something other than taking a pill.

Six Core Principles

The text messages could be broken down into six “core principles”: Values, Acceptance, Present Moment Awareness, Self-As-Context, Committed Action, and Defusion.

One message under the Acceptance principle was: “feelings of pain and feelings about your experience of pain are normal after surgery. Acknowledge and accept these feelings as part of the recovery process. Remember how you feel now is temporary and your healing process will continue. Call to mind pleasant feelings or thoughts you experienced today.” 

The results showed that the patients who did not receive the automated messages took, on average, 41 opioid pills following their surgeries, while the group that did receive the messages averaged 26, a reduction of roughly 37 percent ((41 − 26) / 41 ≈ 0.37). Those who received messages also reported less overall pain two weeks after the surgery.

The automated messages were not personalized for each individual, suggesting that the approach can succeed even without deep personalization.

“A realistic goal for this type of work is to decrease opioid utilization to as few tablets as possible, with the ultimate goal to eliminate the need for opioid medication in the setting of fracture care,” Anthony said. 

The study was funded by a grant from the Orthopaedic Trauma Association.
