Dave Ryan leads the Global Health & Life Sciences business unit at Intel, which focuses on digital transformation from edge to cloud in order to make precision, value-based care a reality. His customers are the manufacturers who build the life sciences instruments, medical equipment, clinical systems, compute appliances and devices used by research centers, hospitals, clinics, residential care settings and the home. Dave has served on the boards of the Consumer Technology Association's Health & Fitness Division, HIMSS' Personal Connected Health Alliance, the Global Coalition on Aging and the Alliance for Connected Care.
What is Intel’s Health & Life Sciences Business?
Intel’s Health & Life Sciences business helps customers create solutions in the areas of medical imaging, clinical systems, and lab and life sciences, enabling distributed, intelligent, and personalized care.
Intel’s Health business focuses on population health, medical imaging, clinical systems, and digital infrastructure.
- Population Health examines diverse patient data to give providers insights into risks for medical issues and improved treatments across cohorts. Optimized and tuned ML and AI models help "tier" groups, so payers and providers can prioritize the patients most at risk.
- Medical Imaging modalities (e.g., MRI, CT) generate enormous data sets requiring accurate evaluation with no room for error. HPC and AI help scan image data more quickly and identify critical factors to assist radiologists in diagnosis.
- Clinical Systems use computer vision, AI, HPC and edge computing for patient monitoring, robotic surgery, telehealth, and many other applications. These intelligent systems reconcile diverse source data for a complete patient view and better diagnosis, with flexibility and scalability to support changing organizational needs.
- Digital Infrastructure integrates many technologies to enable novel approaches to patient interaction including anywhere anytime care where clinicians collaborate across space and time for condition management, surgery, and analytics.
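The risk "tiering" mentioned above can be sketched as a toy scoring function. This is purely illustrative: the risk factors, weights, and thresholds below are invented for the sketch, not Intel's or any provider's actual model.

```python
# Toy illustration of population-health risk tiering.
# Feature names, weights, and thresholds are hypothetical.

def risk_score(patient):
    """Weighted sum of a few illustrative risk factors."""
    weights = {"age_over_65": 2.0, "diabetic": 1.5,
               "recent_admission": 2.5, "smoker": 1.0}
    return sum(w for k, w in weights.items() if patient.get(k))

def risk_tier(patient):
    """Bucket patients so care teams can prioritize outreach."""
    score = risk_score(patient)
    if score >= 4.0:
        return "high"
    if score >= 2.0:
        return "medium"
    return "low"

print(risk_tier({"age_over_65": True, "recent_admission": True}))  # high
```

In practice the scoring function would be a trained model rather than hand-set weights, but the output shape is the same: a tier label that drives outreach priority.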
Intel's Lab and Life Sciences business is focused on three primary areas: Data Analytics, 'Omics, and Pharma.
- Data Analytics uses AI to drive a cascade of discoveries and insights that help enable, among other things, precision medicine by ensuring that patients get the drugs that are most effective for them, thereby reducing the risk of side effects.
- 'Omics describes and quantifies groups of biological molecules, using bioinformatics and computational biology. The massive data sets involved require high-throughput processing to return results within reasonable timeframes. With this throughput, plus new databases, toolkits, libraries, and code optimizations, 'omics institutions can reduce time to results and development costs.
- Pharma is the study of drugs and how they interact with human biological systems, including at a molecular level where data science needs AI and ML to assist with lead generation and optimizations, target ID and preclinical research. This results in better clinical trials, smarter reaction insights and faster new drug discovery.
When did you first become personally interested in using AI for the benefit of healthcare?
The proliferation of AI across many industries has largely been about automating tasks routinely performed by humans. In healthcare, AI has become a tool through which we augment or assist, not replace, existing human expertise to deliver truly transformative approaches to diagnosis and treatment. Nowhere is this clearer than in medical imaging, where data volume and complexity are both barrier and opportunity. Today, AI, and inferencing in particular, can perform more rapid and detailed scans of vast arrays of information than any human can, and in so doing not only reveals insights previously hidden but also maximizes the valuable time of the radiologist, who can reach a better diagnostic conclusion for more patients. For example, AI solutions from customers help radiologists by analyzing data in X-rays which could indicate the presence of a collapsed lung (pneumothorax) or COVID. That is a truly remarkable achievement that is revolutionizing the efficacy of both medical imaging itself and how human expertise is applied. Witnessing that kind of transformation in this one field naturally motivates one to seek out the next great leap in other health and life sciences pursuits where man and machine combine to produce a new whole so much greater than the sum of its parts. Taking that a step further is the idea that AI can democratize knowledge across care disciplines and make scarce human expertise and experience-based nuance go even further, raising the level of quality.
How important is AI to analyzing big data in a clinical setting?
The Health and Life Sciences industries generate more data with greater complexity than any other single industry in the world today. And unlike other industries, effectively managing and analyzing that data is a matter of life and death. Given these magnitudes, AI is now an indispensable enabler of a range of needs, both mundane and breakthrough, in both the clinical and lab settings to address the industry’s Triple Aim: Improve care quality and access while lowering costs.
For example, electronic health records (EHR) have enabled a digital revolution in the quality and efficiency of care delivery. Unfortunately, within these records is a messy mix of both unstructured and structured data which AI can help digitize into more unified and useful data sets. Optical character recognition (OCR) and natural language processing (NLP) are just two AI-enabled models that can convert the analogs of handwriting and voice into EHR data. And once digitized, AI can be applied across these data sets in many exciting use cases.
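As a toy illustration of that unstructured-to-structured step, a few vitals can be pulled out of free-text notes with simple pattern matching. This is a stand-in for a real OCR/NLP pipeline; the note text and field names here are invented:

```python
import re

# Toy note text; a real pipeline would start from OCR or NLP output.
note = "Pt reports headache. BP 142/91, HR 78 bpm. Prescribed lisinopril 10 mg."

def structure_note(text):
    """Pull a few structured vitals out of free-text clinical prose."""
    fields = {}
    bp = re.search(r"BP\s+(\d{2,3})/(\d{2,3})", text)
    if bp:
        fields["systolic"], fields["diastolic"] = map(int, bp.groups())
    hr = re.search(r"HR\s+(\d{2,3})", text)
    if hr:
        fields["heart_rate"] = int(hr.group(1))
    return fields

print(structure_note(note))  # {'systolic': 142, 'diastolic': 91, 'heart_rate': 78}
```

Once fields like these are structured, they can join the unified data sets the answer describes; production systems use trained NLP models rather than hand-written patterns precisely because clinical prose is so variable.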
In other instances, data captured from medical devices and cameras is growing and, when combined with patient history data, analytics can help drive new insights to further personalize treatment. At a census level, many hospitals have already deployed algorithms that can predict sepsis onset for quicker intervention, and in ICUs, software can combine data across multiple isolated devices to create an impressively complete picture of that patient in near-real-time. Over time, all that captured and stored data can also be analyzed for better predictions in the future.
What are some of the more notable use cases that you are seeing for machine learning analyzing this data?
As mentioned above, NLP tools can help replace manual scribing or data entry to generate new documents, like patient visit summaries and detailed clinical notes. This enables clinicians to see more patients, and providers to improve documentation, workflow, and billing accuracy by entering orders and documentation sooner in the day.
More broadly, AI-enabled analytics help providers understand and manage a wide range of clinical applications that improve efficiency and lower costs. This allows hospitals to better manage resources and fine tune best practices, and care teams to collaborate on diagnoses and coordinate treatments and overall care they deliver to improve patient outcomes.
Clinicians can analyze for targeted abnormalities using appropriate ML approaches and filter structured information out of other raw data. This can lead to quicker, more accurate diagnoses and optimal treatments. For example, ML algorithms can turn the diagnostic reading of medical images into automated decision-making by converting images to machine-readable text. ML and pattern recognition techniques can also draw insights from massive volumes of clinical image data, unmanageable by humans alone, to transform the diagnosis, treatment and monitoring of patients.
To assess and manage population health, ML algorithms can help predict future risk trajectories, identify risk drivers, and provide solutions for best outcomes. Deep learning modules integrated with AI technologies allow researchers to interpret complex genomic data sets, predict specific types of cancer (based on gene expression profiles obtained from various large data sets), and identify multiple druggable targets.
Could you elaborate on how Intel is collaborating with the genomics community to transform large datasets into biomedical insights that accelerate personalized care?
Precision medicine supplies individual-level health data sources that enable better selection of disease targets and identification of patient populations that demonstrate improved clinical outcomes to novel preventative and therapeutic approaches.
Genomics is the cornerstone of this precision medicine. It provides the blueprint of who we are, and why and how we are unique which is critical for providers to understand as they combine this information with other data (images, clinical chemistry, medical history, cohort data, etc.). Clinicians use this information to develop and deliver patient-specific treatments that are lower risk and more effective.
Intel is collaborating with the genomics community by optimizing the industry's most commonly used genetic analysis tools to run best across Intel architecture-based platforms and the processors that power them. For example, the optimization of the Broad Institute's industry-leading genetic variant software, the Genome Analysis Toolkit (GATK), on Intel hardware, using OpenVINO to ease AI model development, debugging, and scalable deployment, highlights our impact and commitment to this industry. The GATK toolkit benefits biomedical research through components such as GenomicsDB, which efficiently stores files of around 200GB in size (typical for genomic datasets), and the Genomics Kernel Library, which uses AVX-512 instructions to take advantage of specific Intel architecture hardware features to accelerate genomic workloads and AI utilization.
Accelerating the speed and reducing the cost of genomic analysis while maintaining the accuracy of that analysis, continues to be compelling to biomedical and other life sciences researchers as they use Intel compute solutions to discover and harness new medical insights.
Could you discuss why you believe that remote healthcare is so important?
The Health industry has been working on various forms and aspects of remote care for many years. The rationale, up until recently, was an intuitive, hoped-for belief that remote care can be, for many care delivery situations, as good as or better than traditional in-clinic models. Now, spurred by the pandemic crisis and its impact, health care delivery systems around the world have been forced to adopt telehealth or collapse. This sudden rush to implement is proving those long-held beliefs true: care at a distance is both vital and highly viable.
Remote care has many benefits. Patient comfort and satisfaction with telehealth care delivery is rising rapidly. Patients are able to remain calmer and more at ease in their homes, with less disruption and time/schedule impact. Providers like it because it allows them to see more patients, better manage their own time, and better allocate scarce clinical resources. And of course, the clearest and most compelling reason for everyone these past few months is the inherent ability of remote care to limit contagion and the need for in-person contact, when a video chat augmented with device and compute telemetry can get most care delivery tasks done just as well.
Can you discuss some of the technologies that are currently being used for remote patient monitoring?
There are several critical technology elements. The most important is ease of use for the patient, quickly followed by security and privacy of the data, and the robustness of the application and the data it captures. For example, we need to prevent a user from accidentally deleting a monitoring app from her iPad.
Another critical aspect for a care provider deploying across multiple patients is fleet management and the ability to send updates or tech support down the wire, tailored to each user or cohort of users. This requires:
- standardization of the data exchange and privacy with industry standards such as FHIR and Continua;
- secure and power-efficient compute platform to orchestrate the data and communicate it back to the clinician including appropriate software and encryption;
- connectivity through a cellular network to make the user devices stand-alone and not dependent on Wi-Fi at home that may be unreliable or even non-existent;
- cloud storage and analytics on the backend.
In addition, the ability to gather and aggregate the data flowing in from users is fundamental to enabling clinicians to do patient monitoring and support, and for the software and analytics to inform care teams of a nominal state or initiate an alarm notification for results that are out of tolerance.
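As a rough sketch of the standards-based data exchange described above, a home heart-rate reading might be packaged as a FHIR-style Observation resource. This is an illustrative fragment only, not a validated FHIR payload; the patient reference is hypothetical:

```python
import json

# Minimal FHIR-style Observation for a home heart-rate reading.
# Illustrative sketch only; a production payload would be validated
# against the full FHIR Observation schema.
observation = {
    "resourceType": "Observation",
    "status": "final",
    "code": {
        "coding": [{"system": "http://loinc.org", "code": "8867-4",
                    "display": "Heart rate"}]
    },
    "subject": {"reference": "Patient/example-123"},  # hypothetical ID
    "valueQuantity": {"value": 72, "unit": "beats/minute",
                      "system": "http://unitsofmeasure.org", "code": "/min"},
}
print(json.dumps(observation, indent=2))
```

Standardizing on a resource shape like this is what lets the backend aggregate readings from many device vendors into one stream that clinician-facing analytics can monitor and alarm on.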
We believe that AI will play a much larger role in patient monitoring moving forward, improving the patient experience through natural voice surveys (“How are you feeling today?”, “Your blood pressure seems a bit high”) and allowing care teams to better understand a patient’s health and identify appropriate treatments. Through the use of AI models, population health management will also progress with all patient data folding into ever larger data sets which improve accuracy of an iterative learning model. This is essential for remote monitoring at scale.
What are some of the problems that need to be overcome to increase the success rate of remote healthcare?
Many of the same issues that plague our current system of traditional care delivery are also factors in enhancing or inhibiting the success of remote care. These include societal sub-segment beliefs and stigmas surrounding healthcare, or socio-economic barriers stemming from lack of insurance, technology fluency, required devices, and connectivity. Data silos prevent maximizing the value that larger shared data sets could produce, especially now that our ability to harness learning programs is truly emerging.
But there are challenges that are unique to remote care:
- policy and payment issues, though much improved of late, must continue their positive momentum, with relaxed restrictions expanding what is allowable and reimbursable via remote care modalities;
- financial challenges and a lack of capital to invest in technology in health care require a conversion from a CapEx model to an OpEx model. Rather than investing in facilities and capital equipment, providers can shift to a "pay as you go" model, foregoing the need for a lot of fixed infrastructure and, like phone service, paying for the minutes (or data) used;
- user experience, for both patient and provider, must continue to improve, ultimately to where the technology disappears into the background, and the capabilities are intuitive and seamless and the process compelling with equivalent or better outcomes and cost structures.
Ultimately, we want the technology to support the provision of care, not get in the way of it. If we are successful (and we believe we are and will continue to be), then the technology truly will allow a bridge to tomorrow’s better model of remote care delivery, making the best possible case for the normalization of remote care as standard of care delivery.
Thank you for the fantastic interview, I enjoyed learning more about Intel's health efforts. Readers who wish to learn more should visit Intel's Global Health & Life Sciences business.
AI Used To Identify Gene Activation Sequences and Find Disease-Causing Genes
Artificial intelligence is playing a larger role in the science of genomics every day. Recently, a team of researchers from UC San Diego utilized AI to discover a DNA code that could pave the way for controlling gene activation. In addition, researchers from Australia's national science organization, CSIRO, employed AI algorithms to analyze over one trillion genetic data points, advancing our understanding of the human genome through the localization of specific disease-causing genes.
The human genome, like all DNA, comprises four different chemical bases: adenine, guanine, thymine, and cytosine, abbreviated as A, G, T, and C respectively. These four bases join together in various combinations that code for different genes. Around one-quarter of all human genes are activated by genetic sequences that are roughly TATAAA, with slight variations. These TATAAA derivatives make up the "TATA box," a non-coding DNA sequence that plays a role in initiating transcription of the genes that contain it. Thanks to the overwhelming number of possible base sequence combinations, however, it has remained unknown how the other approximately 75% of human genes are activated.
As reported by ScienceDaily, researchers from UCSD have managed to identify a DNA activation code that is employed as often as the TATA box, thanks to their use of artificial intelligence. The researchers refer to this DNA activation code as the "downstream core promoter region" (DPR). According to the senior author of the paper detailing the findings, UCSD Biological Sciences professor James Kadonaga, the discovery of the DPR reveals how somewhere between one-quarter and one-third of our genes are activated.
Kadonaga initially discovered a gene activation sequence corresponding to portions of the DPR when working with fruit flies in 1996. Since that time, Kadonaga and colleagues have worked to determine which DNA sequences correlate with DPR activity. The research team began by creating half a million different DNA sequences and determining which of them displayed DPR activity. Around 200,000 of these DNA sequences were then used to train an AI model to predict whether DPR activity would be seen within chunks of human DNA. The model was reportedly highly accurate: Kadonaga described its performance as "absurdly good" and its predictive power as "incredible". The process used to create the model proved so reliable that the researchers ended up creating a similar AI focused on discovering new TATA box occurrences.
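The setup described here, labeling DNA sequences for activity and training a classifier on an encoding of the bases, can be sketched as follows. The motif, sequences, and labeling rule are synthetic stand-ins, not the team's actual DPR data:

```python
# Sketch of the sequence-classification setup described above:
# one-hot encode DNA, then train any binary classifier on the labels.
# The motif and sequences here are synthetic stand-ins.

BASES = "ACGT"

def one_hot(seq):
    """Encode a DNA string as a flat 4-per-base binary vector."""
    vec = []
    for base in seq:
        vec.extend(1 if base == b else 0 for b in BASES)
    return vec

def has_motif(seq, motif="TATAAA"):
    """Toy activity label: does the sequence contain the motif?"""
    return int(motif in seq)

seqs = ["GGTATAAACC", "CCGGCCGGTT"]
X = [one_hot(s) for s in seqs]   # features for a classifier
y = [has_motif(s) for s in seqs]  # labels measured in the lab, in reality
print(len(X[0]), y)  # 40 [1, 0]
```

In the real study the labels came from experimentally measured DPR activity rather than a simple motif rule, and the model was trained on hundreds of thousands of such labeled sequences.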
In the future, artificial intelligence could be leveraged to analyze DNA sequence patterns and give researchers more insight into how gene activation happens in human cells. Kadonaga believes that, much like how AI was able to help his team of researchers identify the DPR, AI will also assist other scientists in discovering important DNA sequences and structures.
In another use of AI to explore the human genome, as Medical Xpress reports, researchers from Australia's CSIRO national science agency have used an AI platform called VariantSpark to analyze over one trillion points of genomic data. It's hoped that this AI-based research will help scientists determine the location of certain disease-related genes.
Traditional methods of analyzing genetic traits can take years to complete, but as CSIRO bioinformatics leader Dr. Denis Bauer explained, AI has the potential to dramatically accelerate this process. VariantSpark is an AI platform that can analyze traits such as susceptibility to certain diseases and determine which genes may influence them. Bauer and other researchers made use of VariantSpark to analyze a synthetic dataset of around 100,000 individuals in just 15 hours. VariantSpark analyzed over ten million variants across one trillion genomic data points, a task that would take even the fastest competitors using traditional methods thousands of years to complete.
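VariantSpark itself grows a wide random forest across millions of variant columns; the underlying idea of ranking variants by how well they separate cases from controls can be caricatured with a much simpler per-variant score (toy data, illustrative only):

```python
# Toy stand-in for variant ranking: score each variant column by how
# strongly genotype (0/1/2 alt-allele counts) differs between cases
# and controls. VariantSpark itself uses a wide random forest.

def variant_scores(genotypes, labels):
    """genotypes: rows = individuals, cols = variants; labels: 1 = case."""
    n_variants = len(genotypes[0])
    scores = []
    for j in range(n_variants):
        cases = [row[j] for row, y in zip(genotypes, labels) if y == 1]
        ctrls = [row[j] for row, y in zip(genotypes, labels) if y == 0]
        diff = abs(sum(cases) / len(cases) - sum(ctrls) / len(ctrls))
        scores.append(diff)
    return scores

genotypes = [[0, 2], [1, 2], [0, 0], [1, 0]]  # 4 people, 2 variants
labels = [1, 1, 0, 0]                          # first two are cases
print(variant_scores(genotypes, labels))  # [0.0, 2.0]
```

Here the second variant perfectly separates cases from controls and scores highest; a random forest captures the same signal while also handling interactions between variants, which is why it scales better to real trait genetics.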
As Dr. David Hansen, CEO of the CSIRO Australian e-Health Research Centre, explained via Medical Xpress:
“Despite recent technology breakthroughs with whole-genome sequencing studies, the molecular and genetic origins of complex diseases are still poorly understood which makes prediction, application of appropriate preventive measures and personalized treatment difficult.”
Bauer believes that VariantSpark can be scaled up to population-level datasets and help determine the role genes play in the development of cardiovascular and neurological diseases. Such work could lead to early intervention, personalized treatments, and generally better health outcomes.
Research Shows How AI Can Help Reduce Opioid Use After Surgery
Research coming out of the University of Pennsylvania School of Medicine last month demonstrated how artificial intelligence (AI) can be utilized to fight against opioid abuse. It focused on a chatbot which sent reminders to patients who underwent surgery to fix major bone fractures.
The research was published in the Journal of Medical Internet Research.
Christopher Anthony, MD, is the study’s lead author and the associate director of Hip Preservation at Penn Medicine. He is also an assistant professor of Orthopaedic Surgery.
“We showed that opioid medication utilization could be decreased by more than a third in an at-risk patient population by delivering psychotherapy via a chatbot,” he said. “While it must be tested with future investigations, we believe our findings are likely transferable to other patient populations.”
Opioid Use After Surgery
Opioids are an effective treatment for pain following a severe injury, such as a broken arm or leg, but prescribing the drugs in large quantities can lead to addiction and dependence for many users. This has contributed to the major opioid epidemic throughout the United States.
The team of researchers believes that a patient-centered approach using the AI chatbot can help reduce the number of opioids taken after such surgeries, making it one more tool against the epidemic.
Those researchers also included Edward Octavio Rojas, MD, who is a resident in Orthopaedic Surgery at the University of Iowa Hospitals & Clinics. The co-authors included: Valerie Keffala, PhD; Natalie Ann Glass, PhD; Benjamin J. Miller, MD; Mathew Hogue, MD; Michael Wiley, MD; Matthew Karam, MD; John Lawrence Marsh, MD, and Apurva Shah, MD.
The research involved 76 patients who visited a Level 1 Trauma Center at the University of Iowa Hospitals & Clinics. They were there to receive treatment for fractures that required surgery, and those patients were separated into two groups. Both groups received the same prescription for opioids to treat pain, but only one of the groups received daily text messages from the automated chatbot.
The group that received text messages could expect two per day for a period of two weeks following their procedure. The automated chatbot relied on artificial intelligence to send the messages, which went out the day after surgery. The text messages were constructed in a way to help patients focus on coping better with the medication.
The text messages, which were created by a pain psychologist specializing in acceptance and commitment therapy (ACT), did not directly discourage use of the medication, but they attempted to help the patients think of something other than taking a pill.
Six Core Principles
The text messages could be broken down into six "core principles": Values, Acceptance, Present Moment Awareness, Self-As-Context, Committed Action, and Defusion.
One message under the Acceptance principle was: “feelings of pain and feelings about your experience of pain are normal after surgery. Acknowledge and accept these feelings as part of the recovery process. Remember how you feel now is temporary and your healing process will continue. Call to mind pleasant feelings or thoughts you experienced today.”
The results showed that the patients who did not receive the automated messages took, on average, 41 opioid pills following the surgeries, while the group who did receive the messages averaged 26. The 37 percent difference was impressive, and those who received messages also reported less overall pain two weeks after the surgery.
The automated messages were not personalized for each individual, demonstrating that the approach can succeed even without personalization.
“A realistic goal for this type of work is to decrease opioid utilization to as few tablets as possible, with the ultimate goal to eliminate the need for opioid medication in the setting of fracture care,” Anthony said.
The study received funding by a grant from the Orthopaedic Trauma Association.
Samsung Medison & Intel Collaborate to Improve Fetal Safety
According to the World Health Organization, approximately 295,000 women died during and following pregnancy and childbirth in 2017, even as maternal mortality rates have been decreasing. While every pregnancy and birth is unique, most maternal deaths are preventable. Research from the Perinatal Institute found that tracking fetal growth is essential for good prenatal care and can help prevent stillbirths when physicians are able to recognize growth restrictions.
Samsung Medison and Intel are collaborating on new smart workflow solutions to improve obstetric measurements that contribute to maternal and fetal safety and can help save lives. Using an Intel® Core™ i3 processor, the Intel® Distribution of OpenVINO™ toolkit and OpenCV toolkit, Samsung Medison’s BiometryAssist™ automates and simplifies fetal measurements, while LaborAssist™ automatically estimates the fetal angle of progression (AoP) during labor for a complete understanding of a patient’s birthing progress, without the need for invasive digital vaginal exams.
According to Professor Jayoung Kwon, MD PhD, Division of Maternal Fetal Medicine, Department of Obstetrics and Gynecology, Yonsei University College of Medicine, Yonsei University Health System in Seoul, Korea: "Samsung Medison's BiometryAssist is a semi-automated fetal biometry measurement system that automatically locates the region of interest and places a caliper for fetal biometry, demonstrating a success rate of 97% to 99% for each parameter. Such high efficacy enables its use in current clinical practice with high precision."
"At Intel, we are focused on creating and enabling world-changing technology that enriches the lives of every person on Earth," said Claire Celeste Carnes, strategic marketing director, Health and Life Sciences, Intel. "We are working with companies like Samsung Medison to adopt the latest technologies in ways that enhance patient safety and improve clinical workflows, in this case for the important and time-sensitive care provided during pregnancy and delivery."
How It Works
BiometryAssist automates and standardizes fetal measurements in approximately 85 milliseconds with a single click, providing over 97% accuracy. This enables doctors to allocate more time to talking with their patients while also standardizing fetal measurements, which have historically proved challenging to provide with accuracy. With BiometryAssist, physicians can quickly verify consistent measurements for high volumes of patients.
“Samsung is working to improve the efficiency of new diagnostic features, as well as healthcare services, and the Intel Distribution of OpenVINO toolkit and OpenCV toolkit have been a great ally in reaching these goals,” said Won-Chul Bang, corporate vice president and head of Product Strategy, Samsung Medison.
During labor, LaborAssist helps physicians estimate fetal AoP and head direction. This enables both the physician and patient to understand the fetal descent and labor process and determine the best method for delivery. There is always risk with delivery, and slowing progress could result in issues for the baby. Obtaining a more accurate, real-time picture of labor progression can help physicians determine the best mode of delivery and potentially reduce the number of unnecessary cesarean sections.
“LaborAssist provides automatic measurement of the angle of progression as well as information pertaining to fetal head direction and estimated head station. So it is useful for explaining to the patient and her family how the labor is progressing, using ultrasound images which show the change of head station during labor. It is expected to be of great assistance in the assessment of labor progression and decision-making for delivery,” said Professor Min Jeong Oh, MD, PhD, Department of Obstetrics and Gynecology, Korea University Guro Hospital in Seoul, Korea.
BiometryAssist and LaborAssist are already in use in 80 countries, including the United States, Korea, Italy, France, Brazil and Russia. The solutions received Class II clearance from the FDA in 2020.
Intel and Samsung Medison will continue to collaborate to advance the state of the art in ultrasounds by accelerating AI and leveraging advanced technology in Samsung Medison’s next-generation ultrasound solutions, including Nerve Tracking, SW Beamforming and AI Module.