Artificial Intelligence Could Bring an End to Finger-Prick Glucose Tests

Artificial intelligence has been used to develop technology capable of detecting low glucose levels via ECG using a non-invasive sensor, identifying hypoglycaemic events from raw ECG signals. The technology was developed by researchers from the University of Warwick, including Dr. Leandro Pecchia.

Continuous glucose monitors (CGMs) are currently available for hypoglycaemia detection. They measure glucose in interstitial fluid through an invasive sensor with a small needle, which sends alarms and data to a display device. They often need to be calibrated twice a day with invasive finger-prick blood glucose tests.

Dr. Leandro Pecchia’s team at the University of Warwick published their results on January 13th in Scientific Reports, a Springer Nature journal, in a paper titled “Precision Medicine and Artificial Intelligence: A Pilot Study on Deep Learning for Hypoglycemic Events Detection based on ECG.”

The paper demonstrates that the latest developments in artificial intelligence (deep learning) can be used to detect hypoglycaemic events from raw ECG signals acquired through non-invasive wearable sensors.

Two pilot studies conducted with healthy volunteers found that the average sensitivity and specificity of hypoglycaemia detection were comparable with current CGM performance, while being entirely non-invasive.

Dr. Leandro Pecchia, of the School of Engineering at the University of Warwick, explained:

“Fingerpicks are never pleasant and in some circumstances are particularly cumbersome. Taking fingerpick during the night certainly is unpleasant, especially for patients in paediatric age.

“Our innovation consisted in using artificial intelligence for automatic detecting hypoglycaemia via few ECG beats. This is relevant because ECG can be detected in any circumstance, including sleeping.”

The researchers’ model, known as the Warwick model, highlights how the ECG changes in each subject during a hypoglycaemic event. The AI model was trained with each subject’s own data: inter-subject differences are so pronounced that training the system on cohort data would not give the same results. A more effective approach would be personalized therapy based on the new system.

The Warwick scientists’ method was likely so effective because the AI algorithms were trained on each subject’s own data.

“The performance of AI algorithms trained over cohort ECG-data would be hindered by these inter-subject differences,” says Pecchia.

“Our approach enables personalized tuning of detection algorithms and emphasize how hypoglycemic events affect ECG in individuals. Basing on this information, clinicians can adapt the therapy to each individual. Clearly more clinical research is required to confirm these results in wider populations. This is why we are looking for partners.”
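To make the personalization idea more concrete, here is a minimal sketch of per-subject training on raw ECG beats. It is not the published Warwick model; the network architecture, the `BeatClassifier` and `train_for_subject` names, and all layer sizes are illustrative assumptions. It simply shows how a small 1D convolutional network could be fitted to one subject’s own labelled beats rather than to pooled cohort data.

```python
# Hypothetical sketch only: not the published Warwick model.
# Illustrates training a small 1D CNN on a single subject's own ECG beats,
# labelled as hypoglycaemic (1) or normal (0) against reference glucose readings.
import torch
import torch.nn as nn


class BeatClassifier(nn.Module):
    def __init__(self, beat_len: int = 300):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32 * (beat_len // 4), 1))

    def forward(self, x):
        # x: (batch, 1, beat_len) raw ECG samples; output: one logit per beat
        return self.head(self.features(x))


def train_for_subject(beats, labels, epochs: int = 20):
    """Fit the classifier to one subject's beats (N, 1, beat_len) and 0/1 labels (N,)."""
    model = BeatClassifier(beats.shape[-1])
    optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        optimiser.zero_grad()
        loss = loss_fn(model(beats).squeeze(1), labels.float())
        loss.backward()
        optimiser.step()
    return model
```

In a setup like this, each volunteer gets their own set of weights, which is exactly why, as Pecchia notes above, a model trained on cohort data would behave differently.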

Right Around the Corner

Healthcare is one of the fields where artificial intelligence holds the most potential. The current applications are already extremely impressive, and they will continue to advance. This new technology could remove one of the most uncomfortable parts of daily life for people with diabetes, and it may well bring an end to the finger-prick tests they currently rely on.

Often the focus is on the major medical advancements artificial intelligence could enable, such as curing diseases and performing extremely precise surgical operations. Those advances will undoubtedly transform the medical field, and society along with it. There will likely come a time when robots perform most surgical procedures, develop pharmaceuticals and cures, and take on almost everything else imaginable; while that may not be far away, nobody knows exactly how long it will take to reach that point. However, with the type of technology developed by the researchers at the University of Warwick, along with other advances such as robotic prosthetics and artificial skin, artificial intelligence will soon change the daily lives of people living with these medical conditions. We don’t have to wait for the future to see major medical advancements; technology that will drastically change hundreds of millions of lives is right around the corner.

 

Alex McFarland is a historian and journalist covering the newest developments in artificial intelligence.

AI Could Make Up For Lack Of Radiologists In Fight Against Breast Cancer, But It Isn’t Ready Yet

Recently, a team of researchers from Imperial College London and Google Health created a computer vision model intended to diagnose breast cancer from X-rays. As CNN reports, the model was trained on X-rays from over 29,000 women, and when pitted against six radiologists it outperformed the doctors’ assessments.

Currently, the NHS uses the combined decisions of two doctors to diagnose breast cancer from X-rays; if the two disagree, a third is brought in to consult on the images. While the doctors had access to the patients’ medical records, the AI model only had the mammograms to base its decisions on. Despite this limitation, the model proved at least as good at diagnosing breast cancer as two doctors, and better than a single doctor. Compared with the doctors, the AI achieved a slight reduction in false positives, of about 1.2%. According to the research report, it also reduced false-negative rates (where a genuinely positive case of cancer is missed) by about 2.7%.
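As a side note on how figures like these are usually derived, the snippet below shows the standard definitions of false-positive and false-negative rates. The counts are made-up placeholders, not numbers from the study; it only illustrates the calculation being compared.

```python
# Illustration only: the counts below are invented placeholders,
# not results from the Imperial College London / Google Health study.
def error_rates(tp: int, fp: int, tn: int, fn: int):
    fpr = fp / (fp + tn)  # false-positive rate: healthy women flagged as having cancer
    fnr = fn / (fn + tp)  # false-negative rate: genuine cancers that are missed
    return fpr, fnr


ai_fpr, ai_fnr = error_rates(tp=180, fp=55, tn=940, fn=25)
readers_fpr, readers_fnr = error_rates(tp=175, fp=70, tn=925, fn=30)

print(f"False-positive rate reduced by {100 * (readers_fpr - ai_fpr):.1f} percentage points")
print(f"False-negative rate reduced by {100 * (readers_fnr - ai_fnr):.1f} percentage points")
```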

One of the paper’s authors, Ara Darzi, Director of the Cancer Research UK Imperial Centre, explained that the research team hadn’t expected the system to deliver such high-quality results. However, Darzi is excited by the possibility of improving productivity and accuracy in cancer screening.

Breast cancer is the second leading cause of cancer death in women, but outcomes can be dramatically improved if the disease is diagnosed early. The issue is that, according to the American Cancer Society, even large-scale screening programs currently miss about one in five cases.

For this reason, the research team is hopeful that their system can be improved upon and go on to outperform even the best clinicians. The team also stated that their algorithm could help address a shortage of radiologists. One report by the Royal College of Radiologists found that the UK will face a shortfall of nearly 2,000 radiologists by 2023 unless something is done to remedy the situation.

However, Darzi admits that at this stage the system isn’t ready to start replacing humans, even as a second reader. AI tools in the healthcare field often fail to deliver on their initial promises, owing to complex factors that can’t adequately be simulated in training. One big limitation of the study is that the images all came from a single mammography system and lacked diversity. According to QZ, the research team didn’t have access to details that could be used to ascertain the diversity of the images in the dataset, so it isn’t possible to know whether the system remains highly accurate when examining X-rays of minority patients. There are racial disparities in both the UK and the US when it comes to the diagnosis of breast cancer, with Black women in the UK being less likely to receive cancer screenings.

Google plans to spend time remedying the disparity in the data the model is trained on before making it available to healthcare partners, aiming to build a larger, more inclusive dataset. The system will also have to be tested in clinical trials before it can be used in clinical settings. The algorithms developed by the research team have the power to genuinely improve healthcare outcomes and save lives, but only if they are carefully and rigorously tested.

 

Artificial Intelligence In Healthcare Could Bring Risks Along With Opportunities

AI has enormous potential when it comes to the healthcare field, capable of improving diagnoses and finding new, more effective drugs. However, as a piece in Scientific American recently discussed, the speed with which AI is penetrating the healthcare field also opens up many new challenges and risks.

Over the course of the past five years, the US Food and Drug Administration has approved over 40 different AI products. However, as reported by Scientific American, none of the products cleared for sale in the US have had their performance evaluated in randomized controlled clinical trials. Many AI medical tools don’t even require approval by the FDA.

Eric Topol, the author of “Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again,” told Scientific American that many of the AI products which claim to be effective at tasks like diagnosing diseases have not actually been rigorously tested in such a fashion, with the first major randomized trial of an AI detection and diagnosis tool only taking place this past October. Furthermore, very few tech startups publish their research in peer-reviewed journals, where their work can be scrutinized by other scientists.

When properly tested and controlled, AI systems can be powerful tools that can help medical professionals detect otherwise unnoticed symptoms, improving health outcomes.

As an example, an AI tool for detecting diabetic eye disease was tested across hundreds of patients and appeared reliable. The company responsible worked alongside the FDA for over eight years to refine the product. The test, IDx-DR, is making its way to primary care clinics, where it could help detect early signs of diabetic retinopathy and refer patients to eye specialists if suspicious symptoms are found.

If not tested carefully, AI systems that medical professionals may use to guide their diagnosis and treatment have the potential to create harm instead of avoiding it.

The Scientific American article details one potential problem with relying on AI to diagnose ailments, pointing to the example of an AI intended to analyze chest X-rays and detect which patients might develop pneumonia. While the system proved accurate when tested at the Mount Sinai Hospital in New York, it failed when tested on images taken at other hospitals. The researchers found that the AI was distinguishing between images created by portable X-ray systems vs. those created in a radiology department. Doctors use portable chest X-ray systems on patients who are often too sick to leave their beds, and these patients are at greater risk of developing pneumonia.
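One common way to catch this kind of shortcut learning before deployment is to break the evaluation down by site rather than scoring a single pooled test set. A hedged sketch is below; `model.predict_proba` follows the scikit-learn convention, and the per-hospital data dictionaries are assumed to be prepared elsewhere.

```python
# Sketch: evaluate a trained classifier separately on each hospital's data.
# A sharp drop in AUC at an outside site suggests the model has latched onto
# site-specific cues (such as portable vs. departmental X-ray machines)
# rather than genuine signs of disease.
from sklearn.metrics import roc_auc_score


def auc_by_site(model, images_by_site, labels_by_site):
    """images_by_site / labels_by_site: dicts keyed by hospital name."""
    results = {}
    for site, X in images_by_site.items():
        scores = model.predict_proba(X)[:, 1]  # probability of the positive class
        results[site] = roc_auc_score(labels_by_site[site], scores)
    return results
```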

False alarms are also a concern. DeepMind created an AI mobile app that is capable of predicting acute kidney failure in hospitalized patients up to 48 hours in advance. However, the system reportedly also made two false alarms for every kidney failure that was successfully predicted. False positives can be harmful as they can encourage doctors to spend unnecessary time and resources ordering further tests or altering prescribed treatments.

In another incident, one AI system incorrectly concluded that patients who had pneumonia were more likely to survive if they had asthma, which could cause doctors to alter treatments for patients with asthma.

AI systems developed for one hospital often underperform when used at a different hospital. There are multiple causes for this. For one, AI systems are frequently trained on electronic health records, and many of those records are incomplete or incorrect because their primary purpose is billing rather than patient care. One investigation carried out by KHN found occasional life-threatening errors in patients’ medical records, such as medication lists containing the wrong drugs. Beyond that, diseases are often more complicated, and the healthcare system more complex, than AI engineers and scientists can anticipate.

As AI becomes ever more widespread, it will be important for AI developers to work alongside health authorities to ensure that their systems are thoroughly tested, and for regulatory bodies to set and enforce standards for the reliability of AI diagnostic tools.

Paper Examines How To Reduce Risk Of Using AI in Medicine

Artificial intelligence programs are capable of improving healthcare in a variety of ways. For instance, AI applications can use computer vision to help doctors diagnose conditions from X-rays and fMRI scans. Machine learning algorithms can also help reduce false-positive rates by extracting subtle patterns from medical data that humans might miss. However, with these possibilities come new challenges, and a new article recently published in Science examines possible risks and regulatory strategies for medical machine learning, in an effort to minimize the potential negative side effects of employing AI in a medical context.

Expanding Applications For AI In Healthcare

AI applications in the medical field are expanding rapidly. Recent developments in healthcare driven by AI include a new pharmaceutical company that aims to use AI to create new drugs, AI-driven remote health sensors, and computer vision apps that analyze CT scans and X-rays.

To be more precise, Genesis Therapeutics is a startup aiming to use AI to speed up drug discovery, hoping to create drugs that can reduce the severity of debilitating diseases. Genesis Therapeutics is just one of almost 170 different firms using AI to research new drug formulations. Meanwhile, in terms of health monitoring devices, iRhythm and French AI startup Cardiologs are using AI algorithms to analyze ECG data and monitor the health of people who have heart conditions or are at risk of complications. The software designed by the companies can detect heart murmurs, a condition caused by turbulent blood flow.

Finally, a recent study investigating how computer vision can be applied to medical images found that computer vision systems perform at least as well as, or better than, expert radiologists when examining CT scans to find small hemorrhages. The algorithms used in the study were able to render predictions after examining a CT scan for just one second, and the systems were also able to localize the hemorrhage within the brain.

So while the potential benefits of using AI in healthcare are clear, it is less clear what new challenges and risks will arise as a side effect of employing it.

Regulating An Expanding Field

As TechXplore reported, in order to assess potential drawbacks of using AI in healthcare, a group of researchers recently published a paper in Science that aims to anticipate potential problems with medical AI and explore possible solutions to them. Problems that may arise from using AI in the healthcare field include inappropriate treatment recommendations resulting in injury, privacy concerns, and algorithmic bias and inequality.

The FDA has only approved medical AI that uses “locked algorithms”, algorithms that reliably produce the same result every time they are run. However, much of AI’s potential lies in its ability to learn and respond to new types of inputs. In order to enable “adaptive algorithms” to see more use and get approval from the FDA, the authors of the paper took an in-depth look at how the risks related to updating algorithms can be mitigated.

The authors advocate that machine learning engineers and researchers focus on continuous monitoring of models over the lifetime of their deployment. Among the suggested tools for monitoring AI systems is AI itself, which could generate automated reports on how a system is behaving. It’s also possible that multiple AI devices could monitor each other.

“To manage the risks, regulators should focus particularly on continuous monitoring and risk assessment, and less on planning for future algorithm changes,” said the authors of the paper.
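As a rough illustration of what such continuous monitoring might look like in code, the sketch below tracks a rolling performance metric on live predictions and flags when it drifts below the level established at approval time. The class name, window size, and tolerance are assumptions for illustration, not anything specified by the paper or the FDA.

```python
# Hypothetical sketch of post-deployment monitoring for a medical AI model:
# keep a rolling window of (prediction score, confirmed outcome) pairs and
# alert when the rolling AUC falls clearly below the approval-time baseline.
from collections import deque

from sklearn.metrics import roc_auc_score


class DeploymentMonitor:
    def __init__(self, baseline_auc: float, window: int = 500, tolerance: float = 0.05):
        self.baseline_auc = baseline_auc
        self.tolerance = tolerance
        self.scores = deque(maxlen=window)
        self.labels = deque(maxlen=window)

    def record(self, score: float, outcome: int) -> bool:
        """Add one prediction/outcome pair; return True if an alert should fire."""
        self.scores.append(score)
        self.labels.append(outcome)
        if len(set(self.labels)) < 2:  # AUC is undefined until both classes appear
            return False
        rolling_auc = roc_auc_score(list(self.labels), list(self.scores))
        return rolling_auc < self.baseline_auc - self.tolerance
```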

The authors of the paper also recommend that regulators focus on developing new methods of identifying, monitoring, assessing, and managing risks. The paper applies many of the techniques that the FDA has used to regulate other forms of medical tech.

As the paper’s authors explained:

“Our goal is to emphasise the risks that can arise from unanticipated changes in how medical AI/ML systems react or adapt to their environments. Subtle, often unrecognised parametric updates or new types of data can cause large and costly mistakes.”
