
Interviews

Johnson Thomas, MD, AIBx – Interview Series


Johnson Thomas is a physician with a special interest in artificial intelligence. In addition to practicing medicine, he also enjoys programming.

His recent research project, AIBx, uses AI to classify thyroid nodules. It is also an explainable AI, which increases physicians’ trust in its predictions. The research was published in the journal Thyroid.

You’re both a practicing physician at Mercy Clinic Endocrinology in Springfield, MO, and a machine learning specialist. How did your love of the medical world and your love of programming eventually intersect?

I was always fascinated with computers. Growing up in India, I was fortunate to go to a school with computers. We had an active computer club, and they taught us GW-BASIC. This was a residential school, and students could use the computers during their free time in the evening or during lunch breaks. So, my friend and I used to go to the lab and write code. We were mainly creating small arcade-like games.

When I graduated from the 12th grade, it was hard to decide between computer engineering and medicine. My father and grandfather were doctors. My father was excited about his work; he loved helping people and was very happy doing it. So finally, I decided to go to med school. Even there, I used to code a little bit. Over the years I finished med school and an internal medicine residency, and later completed my specialization in endocrinology. After I started working, I had a bit more time to explore coding. Around this time, AI and machine learning were becoming more popular. So, I started taking online courses and then began doing small projects using publicly available medical datasets.

 

Your most recent research project, AIBx, uses AI to classify thyroid nodules as either positive or negative for cancer. What inspired you to work on this project?

One of the areas I specialize in within endocrinology is thyroid nodules and thyroid cancers. We were doing hundreds of biopsies a year, but only a few of them were cancers. That did not seem like an efficient way to use our resources. This was back in 2015. At that time, I could only work with numerical data. So, I collected ultrasound features of thyroid nodules into an Excel sheet and used it to create a machine learning model with XGBoost. This was deployed as a website at www.TUMScore.com.
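As a rough illustration of that kind of tabular workflow, the sketch below trains a gradient-boosted classifier on invented ultrasound features. The feature names, data, and settings are assumptions for illustration only and are not taken from the TUMScore model.

```python
# Hypothetical sketch: gradient boosting on tabular ultrasound features,
# loosely mirroring the workflow described above. All data is synthetic.
import numpy as np
import pandas as pd
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "nodule_size_mm": rng.normal(15, 6, n).clip(3, 60),
    "hypoechoic": rng.integers(0, 2, n),           # 1 = hypoechoic nodule
    "microcalcifications": rng.integers(0, 2, n),
    "irregular_margins": rng.integers(0, 2, n),
    "taller_than_wide": rng.integers(0, 2, n),
})
# Synthetic label: risk rises when suspicious features co-occur.
risk = 0.05 + 0.2 * df["microcalcifications"] + 0.2 * df["irregular_margins"]
y = (rng.random(n) < risk).astype(int)

X_train, X_test, y_train, y_test = train_test_split(df, y, random_state=0)
model = XGBClassifier(n_estimators=200, max_depth=3)
model.fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```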

We presented this research in Canada during the annual American Thyroid Association meeting in 2017. But this approach was still very subjective. Just as beauty lies in the eye of the beholder, ultrasound features depend on who is reading them; there is a lot of intra- and interobserver variation. So, I started exploring options that were more objective. This led to image classification. But the problem with image classification is that it is mostly not explainable. How can a physician trust the algorithm? So, we decided to emulate a physician’s thought process.

Most physicians have an idea of how a cancerous thyroid nodule should look, and they mentally compare a new ultrasound image to that mental picture. Based on this, we decided to create an image similarity algorithm. When a physician uploads an image to AIBx, it pulls similar images from our database along with the actual diagnoses of those nodules. The operating physician can look at these images and accept or reject the output from AIBx. This process increases the physician’s trust in the algorithm.
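The published AIBx architecture is not detailed in this interview, but a minimal image-retrieval sketch along these lines, assuming a generic pretrained CNN as the embedding network, cosine similarity for the lookup, and hypothetical file names, might look like this:

```python
# Illustrative image-similarity retrieval, not the published AIBx model.
# Image paths and labels below are invented for demonstration.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

encoder = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
encoder.fc = torch.nn.Identity()      # drop the classifier; keep 512-d features
encoder.eval()

preprocess = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),  # ultrasound frames are grayscale
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

@torch.no_grad()
def embed(path: str) -> torch.Tensor:
    img = preprocess(Image.open(path)).unsqueeze(0)
    return F.normalize(encoder(img), dim=1)       # unit-length embedding

# "database" maps stored images to their known biopsy result.
database = {"nodule_001.png": "benign", "nodule_002.png": "malignant"}
db_vecs = torch.cat([embed(p) for p in database])

def most_similar(query_path: str, k: int = 5):
    sims = (embed(query_path) @ db_vecs.T).squeeze(0)   # cosine similarity
    top = sims.topk(min(k, len(database)))
    paths = list(database)
    return [(paths[int(i)], database[paths[int(i)]], float(s))
            for s, i in zip(top.values, top.indices)]
```

The physician would then review the returned images and their diagnoses, as described above, before accepting or rejecting the suggestion.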

 

How big of a dataset was originally used when you launched this project?

Thyroid ultrasound images are grayscale, and they have only a few patterns. Since we were using an image similarity model, we didn’t need a large dataset. We had 2,025 images in our database, representing most of the common thyroid cancer varieties. They came from different ultrasound machines.

 

When it comes to deep learning, big data is important. Have you seen an improvement in diagnosis rates over time as more thyroid ultrasound images are entered into the database?

Adding more data, and using different preprocessing techniques to augment the available data, has helped us improve our algorithm. Initially we used only images with a square aspect ratio, but later we added images with non-square aspect ratios, and that has improved our outcomes.

 

Are the images exclusively from your clinic, or are you having other clinics provide you with additional ultrasound images?

The present model has images from the Mercy Springfield endocrinology clinic and Mercy Hospital. Physicians from other health systems and countries have reached out to us to conduct a validation study using their data. We are very excited about this opportunity.

 

How accurate is the AI compared to a trained MD?

We compared the results of AIBx to established metrics for current classification systems. In real-world practice, there is wide variability in results. The positive predictive value (the probability that subjects with a positive test truly have the disease) can be as low as 2% using current classification systems. What this means is that if the system predicts that 100 nodules have cancer, in reality only 2 of those 100 nodules will actually be cancerous. AIBx has a positive predictive value of 65.9% and a negative predictive value of 93.2%.
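For reference, both figures come straight from the confusion matrix. The counts in the sketch below are hypothetical, chosen only so the arithmetic lands on the percentages quoted above; they are not the study’s data.

```python
# PPV/NPV from a confusion matrix (invented counts, not study data).
tp, fp = 29, 15    # predicted malignant: truly malignant vs. actually benign
tn, fn = 150, 11   # predicted benign:    truly benign   vs. actually malignant

ppv = tp / (tp + fp)   # of all positive calls, the fraction that are real cancers
npv = tn / (tn + fn)   # of all negative calls, the fraction that are truly benign
print(f"PPV = {ppv:.1%}, NPV = {npv:.1%}")   # PPV = 65.9%, NPV = 93.2%
```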

 

How many unnecessary biopsies could we potentially reduce with this type of AI?

Based on our research, using AIBx we could have avoided more than half (57.3%) of the biopsies. But this needs to be validated using images from outside our health system.

 

How can hospitals, MDs, or other interested parties assist with these projects?

We welcome collaboration from other MDs and hospital systems. They can contact us through our website, www.ThyroidBx.com, or email us at contact@ThyroidBx.com.

 

How long do you believe it will be until machine learning replaces MDs for the diagnosis of most cancers?

The media portrays AI and physicians as competing entities, but both have their strengths and weaknesses. A symbiosis of physicians and AI, enhancing our capabilities to serve our patients, would be better than the current system. Because of the complementary nature of their roles, I do not think that AI will replace MDs in the near future.

 

What has you most excited about AI when it comes to healthcare?

I hope that AI will free physicians from data entry to do what we were called to do: to listen, empathize, and cure when we can.

We are excited to give this project additional exposure. For anyone who wishes to learn more, please visit ThyroidBx.com.


Antoine Tardif is a futurist who is passionate about the future of AI and robotics. He is the CEO of BlockVentures.com and has invested in over 50 AI & blockchain projects. He is the Co-Founder of Securities.io, a news website focusing on digital securities, and is a founding partner of unite.AI. He is also a member of the Forbes Technology Council.

Healthcare

AI Used To Identify Gene Activation Sequences and Find Disease-Causing Genes


Artificial intelligence is playing a larger role in the science of genomics every day. Recently, a team of researchers from UC San Diego utilized AI to discover a DNA code that could pave the way for controlling gene activation. In addition, researchers from Australia’s national science agency, CSIRO, employed AI algorithms to analyze over one trillion genetic data points, advancing our understanding of the human genome through the localization of specific disease-causing genes.

The human genome, and all DNA, comprises four different chemical bases: adenine, guanine, thymine, and cytosine, abbreviated as A, G, T, and C respectively. These four bases are joined together in various combinations that code for different genes. Around one-quarter of all human genes are activated by genetic sequences that are roughly TATAAA, with slight variations. These TATAAA derivatives make up the “TATA box”, a non-coding DNA sequence that plays a role in initiating transcription of the genes it precedes. It’s unknown how the remaining approximately 75% of human genes are activated, however, owing to the overwhelming number of possible base-sequence combinations.

As reported by ScienceDaily, researchers from UCSD have managed to identify a DNA activation code that is employed about as often as the TATA box, thanks to their use of artificial intelligence. The researchers refer to this DNA activation code as the “downstream core promoter region” (DPR). According to the senior author of the paper detailing the findings, UCSD Biological Sciences professor James Kadonaga, the discovery of the DPR reveals how somewhere between one-quarter and one-third of our genes are activated.

Kadonaga initially discovered a gene activation sequence corresponding to portions of the DPR while working with fruit flies in 1996. Since then, Kadonaga and colleagues have been working to determine which DNA sequences are correlated with DPR activity. The research team began by creating half a million different DNA sequences and measuring which of them displayed DPR activity. Around 200,000 of these DNA sequences were then used to train an AI model that could predict whether or not DPR activity would be seen within chunks of human DNA. The model was reportedly highly accurate: Kadonaga described its performance as “absurdly good” and its predictive power as “incredible”. The process used to create the model proved so reliable that the researchers ended up creating a similar AI focused on discovering new TATA box occurrences.
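The paper’s actual model architecture is not described in this article, but the general recipe, encoding fixed-length DNA sequences numerically and training a classifier on measured activity, can be sketched as follows. The data, motif, and model choice here are all invented for illustration.

```python
# Hypothetical sketch of the general approach: one-hot encode short DNA
# sequences and train a classifier to predict promoter activity.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

BASES = "ACGT"

def one_hot(seq: str) -> np.ndarray:
    """Flatten a DNA string into a len(seq) x 4 one-hot vector."""
    out = np.zeros((len(seq), 4))
    for i, base in enumerate(seq):
        out[i, BASES.index(base)] = 1.0
    return out.ravel()

# Toy data: random 30-bp sequences, labelled "active" when they contain an
# arbitrary downstream motif -- a stand-in for measured promoter activity.
rng = np.random.default_rng(0)
seqs = ["".join(rng.choice(list(BASES), 30)) for _ in range(5000)]
labels = np.array(["TCT" in s[20:] for s in seqs], dtype=int)

X = np.stack([one_hot(s) for s in seqs])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```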

In the future, artificial intelligence could be leveraged to analyze DNA sequence patterns and give researchers more insight into how gene activation happens in human cells. Kadonaga believes that, much like how AI was able to help his team of researchers identify the DPR, AI will also assist other scientists in discovering important DNA sequences and structures.

In another use of AI to explore the human genome, as Medical Xpress reports, researchers from Australia’s CSIRO national science agency have used an AI platform called VariantSpark to analyze over one trillion points of genomic data. It’s hoped that the AI-based research will help scientists determine the location of certain disease-related genes.

Traditional methods of analyzing genetic traits can take years to complete, but as CSIRO bioinformatics leader Dr. Denis Bauer explained, AI has the potential to dramatically accelerate this process. VariantSpark is an AI platform that can analyze traits such as susceptibility to certain diseases and determine which genes may influence them. Bauer and other researchers used VariantSpark to analyze a synthetic dataset of around 100,000 individuals in just 15 hours. VariantSpark processed over ten million variants for each individual, roughly one trillion genomic data points in total, a task that would take even the fastest competitors using traditional methods thousands of years to complete.
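VariantSpark itself is a distributed tool and its API is not shown here; the scikit-learn sketch below is only a small-scale stand-in for the underlying idea of ranking variants by how strongly they predict a trait, using entirely synthetic genotypes.

```python
# Small-scale illustration of variant ranking (plain scikit-learn, NOT
# VariantSpark): fit a random forest on a genotype matrix and rank variants
# by importance to flag ones associated with a disease.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n_samples, n_variants = 2000, 500
# Genotypes coded as 0/1/2 copies of the alternate allele (synthetic data).
genotypes = rng.integers(0, 3, size=(n_samples, n_variants))

causal = 42                                   # one variant truly drives risk
risk = 0.1 + 0.25 * genotypes[:, causal]
disease = rng.random(n_samples) < risk

forest = RandomForestClassifier(n_estimators=300, random_state=0)
forest.fit(genotypes, disease)

top = np.argsort(forest.feature_importances_)[::-1][:5]
print("top-ranked variants:", top)            # the causal index should rank first
```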

As Dr. David Hansen, CEO of CSIRO’s Australian e-Health Research Centre, explained via Medical Xpress:

“Despite recent technology breakthroughs with whole-genome sequencing studies, the molecular and genetic origins of complex diseases are still poorly understood which makes prediction, application of appropriate preventive measures and personalized treatment difficult.”

Bauer believes that VariantSpark can be scaled up to population-level datasets and help determine the role genes play in the development of cardiovascular and neurological diseases. Such work could lead to early intervention, personalized treatments, and better health outcomes generally.


Healthcare

Research Shows How AI Can Help Reduce Opioid Use After Surgery


Research coming out of the University of Pennsylvania School of Medicine last month demonstrated how artificial intelligence (AI) can be utilized to fight opioid abuse. It focused on a chatbot that sent reminders to patients who underwent surgery to fix major bone fractures.

The research was published in the Journal of Medical Internet Research.

Christopher Anthony, MD, is the study’s lead author and the associate director of Hip Preservation at Penn Medicine. He is also an assistant professor of Orthopaedic Surgery. 

“We showed that opioid medication utilization could be decreased by more than a third in an at-risk patient population by delivering psychotherapy via a chatbot,” he said. “While it must be tested with future investigations, we believe our findings are likely transferable to other patient populations.”

Opioid Use After Surgery

Opioids are an effective treatment for pain following a severe injury, such as a broken arm or leg, but large prescriptions of these drugs can lead to addiction and dependence for many users, which has helped drive the opioid epidemic throughout the United States.

The team of researchers believes that a patient-centered approach using the AI chatbot can help reduce the number of opioids taken after such surgeries, making it one potential tool against the epidemic.

Those researchers also included Edward Octavio Rojas, MD, a resident in Orthopaedic Surgery at the University of Iowa Hospitals & Clinics. The co-authors included Valerie Keffala, PhD; Natalie Ann Glass, PhD; Benjamin J. Miller, MD; Mathew Hogue, MD; Michael Wiley, MD; Matthew Karam, MD; John Lawrence Marsh, MD; and Apurva Shah, MD.

The Experiment

The research involved 76 patients who visited a Level 1 Trauma Center at the University of Iowa Hospitals & Clinics. They were there to receive treatment for fractures that required surgery, and those patients were separated into two groups. Both groups received the same prescription for opioids to treat pain, but only one of the groups received daily text messages from the automated chatbot. 

The group that received text messages could expect two per day for a period of two weeks following their procedure, beginning the day after surgery. The automated chatbot relied on artificial intelligence to send the messages, which were constructed to help patients focus on coping with pain rather than relying on the medication.

The text messages, which were created by a pain psychologist specializing in acceptance and commitment therapy (ACT), did not directly discourage use of the medication, but they attempted to help the patients think of something other than taking a pill.

Six Core Principles

The text messages could be broken down into six “core principles”: Values, Acceptance, Present Moment Awareness, Self-As-Context, Committed Action, and Defusion.

One message under the Acceptance principle was: “feelings of pain and feelings about your experience of pain are normal after surgery. Acknowledge and accept these feelings as part of the recovery process. Remember how you feel now is temporary and your healing process will continue. Call to mind pleasant feelings or thoughts you experienced today.” 
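The study’s actual chatbot implementation and full message pools are not public here, but as a purely hypothetical sketch, a twice-daily schedule drawn from principle-keyed message pools could look like the following; the messages, dates, and send logic are all invented for illustration.

```python
# Hypothetical sketch of a twice-daily ACT-style message schedule, loosely
# following the protocol described above. Not the study's chatbot.
import random
from datetime import date, timedelta

MESSAGES = {
    "Values": ["Think of one activity you value and take a small step toward it today."],
    "Acceptance": ["Feelings of pain after surgery are normal. Acknowledge them as part of recovery."],
    "Present Moment Awareness": ["Pause and notice five things around you right now."],
    "Self-As-Context": ["You are more than the pain you feel in this moment."],
    "Committed Action": ["Choose one recovery task to complete before this evening."],
    "Defusion": ["Notice a thought about pain, then let it pass like a cloud."],
}

def build_schedule(surgery_date: date, days: int = 14):
    """Return (send_date, principle, text) tuples: two messages a day for two weeks."""
    schedule = []
    for day in range(1, days + 1):                      # start the day after surgery
        for _ in range(2):                              # two messages per day
            principle = random.choice(list(MESSAGES))
            schedule.append((surgery_date + timedelta(days=day),
                             principle, random.choice(MESSAGES[principle])))
    return schedule

for when, principle, text in build_schedule(date(2020, 3, 1))[:3]:
    print(when, principle, "-", text)
```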

The results showed that the patients who did not receive the automated messages took, on average, 41 opioid pills following their surgeries, while the group that did receive the messages averaged 26, a reduction of roughly 37 percent ((41 − 26) / 41). Those who received the messages also reported less overall pain two weeks after surgery.

The automated messages were not personalized for each individual, which suggests the approach can succeed without tailoring the content to every patient.

“A realistic goal for this type of work is to decrease opioid utilization to as few tablets as possible, with the ultimate goal to eliminate the need for opioid medication in the setting of fracture care,” Anthony said. 

The study was funded by a grant from the Orthopaedic Trauma Association.


Healthcare

Samsung Medison & Intel Collaborate to Improve Fetal Safety


According to the World Health Organization, approximately 295,000 women died during and following pregnancy and childbirth in 2017, even as maternal mortality rates have been decreasing. While every pregnancy and birth is unique, most maternal deaths are preventable. Research from the Perinatal Institute found that tracking fetal growth is essential for good prenatal care and can help prevent stillbirths when physicians are able to recognize growth restrictions.

Samsung Medison and Intel are collaborating on new smart workflow solutions to improve obstetric measurements that contribute to maternal and fetal safety and can help save lives. Using an Intel® Core™ i3 processor, the Intel® Distribution of OpenVINO™ toolkit, and the OpenCV library, Samsung Medison’s BiometryAssist™ automates and simplifies fetal measurements, while LaborAssist™ automatically estimates the fetal angle of progression (AoP) during labor for a complete understanding of a patient’s birthing progress, without the need for invasive digital vaginal exams.

According to Professor Jayoung Kwon, MD, PhD, Division of Maternal Fetal Medicine, Department of Obstetrics and Gynecology, Yonsei University College of Medicine, Yonsei University Health System in Seoul, Korea: “Samsung Medison’s BiometryAssist is a semi-automated fetal biometry measurement system that automatically locates the region of interest and places a caliper for fetal biometry, demonstrating a success rate of 97% to 99% for each parameter. Such high efficacy enables its use in the current clinical practice with high precision.”

“At Intel, we are focused on creating and enabling world-changing technology that enriches the lives of every person on Earth,” said Claire Celeste Carnes, strategic marketing director, Health and Life Sciences, Intel. “We are working with companies like Samsung Medison to adopt the latest technologies in ways that enhance the patient safety and improve clinical workflows, in this case for the important and time-sensitive care provided during pregnancy and delivery.”

How It Works

BiometryAssist automates and standardizes fetal measurements in approximately 85 milliseconds with a single click, providing over 97% accuracy. This lets doctors allocate more time to talking with their patients while still producing consistent measurements, which have historically been challenging to provide with accuracy. With BiometryAssist, physicians can quickly verify consistent measurements for high volumes of patients.
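Samsung Medison’s models are proprietary, so the sketch below only shows what a generic OpenVINO plus OpenCV inference step looks like on a CPU; the model file name, input size, and output shape are assumptions for illustration rather than details of BiometryAssist.

```python
# Generic OpenVINO + OpenCV inference sketch -- not Samsung Medison's code.
# "fetal_biometry.xml", the frame file, and the 256x256 input are hypothetical.
import cv2
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("fetal_biometry.xml")          # IR model (hypothetical)
compiled = core.compile_model(model, "CPU")            # e.g. an Intel Core i3
output_layer = compiled.output(0)

frame = cv2.imread("ultrasound_frame.png", cv2.IMREAD_GRAYSCALE)
blob = cv2.resize(frame, (256, 256)).astype(np.float32) / 255.0
blob = blob[np.newaxis, np.newaxis, :, :]              # NCHW batch of one

# A single forward pass on a lightweight CPU-optimized model is what keeps
# per-measurement latency in the tens of milliseconds.
prediction = compiled([blob])[output_layer]
print(prediction.shape)
```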

“Samsung is working to improve the efficiency of new diagnostic features, as well as healthcare services, and the Intel Distribution of OpenVINO toolkit and OpenCV toolkit have been a great ally in reaching these goals,” said Won-Chul Bang, corporate vice president and head of Product Strategy, Samsung Medison.

During labor, LaborAssist helps physicians estimate the fetal AoP and head direction. This enables both the physician and the patient to understand fetal descent and the labor process and to determine the best method for delivery. There is always risk during delivery, and slowing progress could result in issues for the baby. Obtaining a more accurate, real-time picture of labor progression can help physicians determine the best mode of delivery and potentially reduce the number of unnecessary cesarean sections.

“LaborAssist provides automatic measurement of the angle of progression as well as information pertaining to fetal head direction and estimated head station. So it is useful for explaining to the patient and her family how the labor is progressing, using ultrasound images which show the change of head station during labor. It is expected to be of great assistance in the assessment of labor progression and decision-making for delivery,” said Professor Min Jeong Oh, MD, PhD, Department of Obstetrics and Gynecology, Korea University Guro Hospital in Seoul, Korea.

BiometryAssist and LaborAssist are already in use in 80 countries, including the United States, Korea, Italy, France, Brazil, and Russia. The solutions received Class 2 clearance from the FDA in 2020.

What’s Next

Intel and Samsung Medison will continue to collaborate to advance the state of the art in ultrasound by accelerating AI and leveraging advanced technology in Samsung Medison’s next-generation ultrasound solutions, including Nerve Tracking, SW Beamforming, and AI Module.

 

 
