Healthcare

Drug Developed With AI Set To Start Clinical Trials


The AI startup Exscientia has created a new drug compound that will soon begin clinical trials in Japan. This is one of just a few instances of an AI-developed drug reaching a clinical setting, potentially bringing the world closer to the widespread use of AI in drug development and deployment. The new compound was developed in association with Sumitomo Dainippon Pharma, and in contrast to traditionally developed drugs, it will enter clinical trials just under a year after the project's inception. Typical drug development takes around four and a half years to reach that stage.

Exscientia developed the drug using an AI platform that employed various algorithms to generate millions of potential molecules. The AI then filtered through the generated molecules to narrow the field down to the best candidates for synthesis and testing.
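To make that generate-and-filter idea concrete, here is a minimal Python sketch of such a pipeline. Everything in it is a hypothetical stand-in: a real system generates and mutates molecular graphs and scores them with learned models of properties like binding affinity and toxicity, whereas this toy version ranks placeholder strings by a pseudo-random score.

```python
# Toy generate-and-filter pipeline; all names and scores are illustrative
# stand-ins, not Exscientia's actual system.
import random

def generate_candidates(n):
    # Stand-in generator: real systems enumerate or mutate molecular graphs.
    return [f"molecule_{i}" for i in range(n)]

def score(molecule):
    # Stand-in scorer: real systems predict binding affinity, toxicity, etc.
    random.seed(molecule)  # deterministic pseudo-score per molecule
    return random.random()

candidates = generate_candidates(100_000)  # millions, in a real system
ranked = sorted(candidates, key=score, reverse=True)
shortlist = ranked[:350]  # keep only the most promising few hundred
print(f"Selected {len(shortlist)} of {len(candidates)} candidates for testing")
```

The point of the filtering step is that synthesis and laboratory testing are the expensive part, so shrinking the shortlist shrinks the project's cost and timeline.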

The clinical trial comes as investment in AI-driven drug development is ramping up. AI has the potential to make drug discovery quicker and cheaper, with the average drug costing about 2.6 billion dollars to develop, meaning new treatments for illnesses like heart disease and cancer could be produced more quickly. The drug to be tested is known as DSP-1181. Andrew Hopkins, a molecular biologist and the chief executive of Exscientia, explained to the Financial Times that the researchers only had to test approximately 350 compounds, about one-fifth of the number typically tested during drug development.

John Bell, the Regius Professor of Medicine at Oxford University, was not involved with the research but explained the impact of the development to the Financial Times:

“The design and development of molecules through medicinal chemistry has always been a slow and laborious process. Exscientia can do this in many fewer steps, which is really impressive, and it comes from very sound scientific principles.”

Exscientia will be working alongside other pharmaceutical corporations like Sanofi and Bayer in an attempt to find new treatments for diseases. While it has been claimed that DSP-1181 is the first AI-designed drug to be used in a clinical trial, ScienceMag reported that many other AI-assisted compounds have already reached human trials, including drugs tested as treatments for conditions like Parkinson's disease and stroke.

As impressive as the achievements of Exscientia are, there are some problems that lie on the road to AI-enhanced drug development.

While AI can assist in the discovery and development of drugs, there's no guarantee that the drugs it discovers will be of particular use. The molecules it finds could turn out to be extremely similar to ones that humans have already studied. When combined with the fact that the effective use of a drug depends on scientists understanding the nature of the illness they are trying to treat, AI drug development strategies may not transform the landscape of medicine as radically as some people hope. Another issue AI drug companies will have to deal with is the question of regulation. The FDA is still deciding how best to regulate drugs discovered by AI systems, weighing how the process differs from traditional drug research as it develops regulatory strategies.

According to Vox, FDA spokesperson Jeremy Khan explained that any drug developed with the assistance of AI should be held to the same standards as conventionally developed drugs, even if the discovery process differs. Khan explained:

“The full role of AI in drug development is still being elucidated, and stakeholders understand AI in different ways considering the spectrum of tools and techniques covered under this umbrella term. Importantly, the evidentiary standards needed to support drug approvals remain the same regardless of the technological advances involved.”


Healthcare

Team Develops Blood-Sampling Robot 


A blood-sampling robot able to perform as well as or better than humans has been developed by a team at Rutgers University. It was tested in the first human clinical trial of an automated blood-drawing and testing device.

Because the device can deliver quicker results, healthcare professionals would not have to spend as much time sampling blood, allowing them to focus more on treating patients in hospitals and other settings.

The results, published in the journal Technology, were comparable to or exceeded clinical standards. The overall success rate for the 31 participants who had their blood drawn was 87%; for the 25 participants whose veins were easy to access, the success rate was 97%.

Within the device is an ultrasound image-guided robot that draws blood from veins. One possible next step is a fully integrated device that adds a sample-handling module and a centrifuge-based blood analyzer, which could be used at bedsides and in ambulances, emergency rooms, clinics, doctors' offices, and hospitals.

Venipuncture, the process of inserting a needle into a vein to collect a blood sample or perform IV therapy, is the most common clinical procedure, performed more than 1.4 billion times per year in the United States. However, previous studies have shown that clinicians fail in 27% of patients without visible veins, 40% of patients without palpable veins, and 60% of emaciated patients.

Repeated failure to start an IV line increases the risk of phlebitis, thrombosis, and infections, and can force clinicians to target large veins or arteries, which is riskier and more costly. Because of this, venipuncture is one of the leading causes of injury to both patients and clinicians. Difficult venous access can also extend procedure time by up to an hour, requires more staff, and costs an estimated $4 billion a year in the United States.

Josh Leipheimer is a biomedical engineering doctoral student in the Yarmush lab in the Department of Biomedical Engineering at Rutgers University-New Brunswick's School of Engineering.

“A device like ours could help clinicians get blood samples quickly, safely and reliably, preventing unnecessary complications and pain in patients from multiple needle insertion attempts,” Leipheimer said. 

The team hopes the device can eventually be used in procedures such as IV catheterization, central venous access, dialysis, and the placement of arterial lines. They will now work to refine the device and improve its success rate in patients whose veins are difficult to access.

To improve its performance, data from this study will be used to enhance the artificial intelligence in the robot.

The Rutgers co-authors include Max L. Balter and Alvin I. Chen, both graduates with doctorates; Enrique J. Pantin at Rutgers Robert Wood Johnson Medical School; Professor Kristen S. Labazzo; and principal investigator Martin L. Yarmush, the Paul and Mary Monroe Endowed Chair and Distinguished Professor in the Department of Biomedical Engineering. The study also had contributions from a researcher at Icahn School of Medicine at Mount Sinai Hospital. 

The newly developed device from the Rutgers team is another example of how robotics and artificial intelligence are reshaping the healthcare industry. Devices like this one stand to assist healthcare workers and make procedures and other forms of care more successful.

 


Healthcare

How AI Is Being Used In The Fight Against The Wuhan Coronavirus


Artificial intelligence is being leveraged in the fight against the Wuhan coronavirus. Researchers are employing AI both to track the spread of the disease and to research potential treatments for the virus.

The Wuhan coronavirus emerged in China in December, and in the two months since it has spread across China and to other parts of the globe. It's still unknown just how contagious the virus is or how quickly it could spread, although there are currently more than 40,000 confirmed cases within China. To better understand how far and how fast the virus might spread, researchers are applying machine learning algorithms to data pulled from social media sites and other parts of the web.

Over the course of the past week, the rate of infection seems to have decreased somewhat, but it's unclear whether the disease is being brought under control or whether new cases are simply becoming harder to detect. While other countries around the world have seen only a few cases of the coronavirus compared to China, the world health community remains concerned about the virus's ability to spread. Researchers are trying to get ahead of that spread by using machine learning and big data collected from the internet.

As reported by Wired, an international team of researchers has extracted data from various parts of the internet, including posts from doctors and medical groups, public health channels, social media posts, and news reports, compiling a database of text that might relate to the coronavirus. The researchers then analyze the data for signs that the virus could be spreading outside of China's borders, using machine learning techniques to find patterns in the data that hint at how the virus is behaving.

The researchers sift through social media posts looking for potential symptoms of the coronavirus, centering their search on regions where doctors think cases may appear. The posts are processed with natural language processing techniques that can distinguish between a person mentioning their own symptoms and someone using symptom-related words in another context (such as discussing news about the coronavirus).
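As a rough illustration of that distinction, the sketch below trains a tiny text classifier to separate first-person symptom reports from other symptom mentions. The toy posts, labels, and model choice (TF-IDF features with logistic regression via scikit-learn) are assumptions made for illustration; the article does not describe the team's actual NLP pipeline at this level of detail.

```python
# Toy classifier: first-person symptom report vs. other symptom mention.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "I've had a fever and a dry cough for three days",        # self-report
    "my chest feels tight and I can barely breathe",          # self-report
    "officials report fever and cough as common symptoms",    # news/commentary
    "new article says the virus causes cough in most cases",  # news/commentary
]
labels = [1, 1, 0, 0]  # 1 = first-person symptom report, 0 = other mention

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

new_post = "woke up with a high fever and a bad cough today"
# With only four training posts the output is merely illustrative.
print(model.predict([new_post]))
```

A production system would train on many thousands of labeled posts and add language detection, location inference, and deduplication, but the core task is this same binary distinction.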

As Wired reported, Alessandro Vespignani, a Northeastern University professor and expert contagion analyst, argues that even with advanced machine learning techniques it's often difficult to track the spread of the virus, both because the characteristics of the virus are still somewhat unknown and because most social media posts about the outbreak currently come from media companies covering China. However, Vespignani believes that if the virus ever took hold in the US, it would become easier to monitor thanks to a larger volume of posts concerning the virus.

Despite the challenge of gaining relevant information about the potential behavior of the coronavirus, the model created by the researchers does seem reasonably effective at finding clues within a large sea of social media posts. It was able to find evidence of a viral outbreak on December 30th, although it took time to determine just how serious the situation would become. Crowdsourced information could improve the effectiveness of disease-tracking models even further, as it enables more efficient collection of relevant data about the virus. As an example, an analysis of data crowdsourced from Chinese physicians suggests that people younger than 15 years of age are more resilient to the virus.

Artificial intelligence can also be combined with data collected from mobile devices to build models that predict the direction in which a virus is spreading, as well as the rate of its spread. For instance, researchers from the University of Southampton used mobile data to determine the path the virus may have taken as it moved out of Wuhan in the days following its emergence. Other researchers analyzed data collected by Tencent, a Chinese mobile app developer, and found that the restrictions imposed by the Chinese government likely reduced the virus's spread, buying vital time to develop a plan of attack.
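A minimal way to see why mobility data matters to such models is a toy SIR (susceptible-infected-recovered) simulation in which the transmission rate is scaled by a mobility factor, shown below. This is not the Southampton or Tencent analysis; all parameter values are arbitrary illustrations.

```python
# Toy SIR model: transmission scales with a mobility factor in [0, 1].
def simulate_sir(population, beta, gamma, mobility, days):
    # beta: base transmission rate; gamma: recovery rate.
    s, i, r = population - 1.0, 1.0, 0.0
    peak = 0.0
    for _ in range(days):
        new_infections = beta * mobility * s * i / population
        recoveries = gamma * i
        s -= new_infections
        i += new_infections - recoveries
        r += recoveries
        peak = max(peak, i)
    return peak

unrestricted = simulate_sir(1e6, beta=0.4, gamma=0.1, mobility=1.0, days=200)
restricted = simulate_sir(1e6, beta=0.4, gamma=0.1, mobility=0.5, days=200)
print(f"Peak infections without travel restrictions: {unrestricted:,.0f}")
print(f"Peak infections with halved mobility:        {restricted:,.0f}")
```

Halving mobility in this toy model sharply lowers the infection peak, which is the qualitative effect the Tencent-based analysis attributed to the government restrictions.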

As Fortune reported, the startup Insilico Medicine has used artificial intelligence to identify molecules that could potentially treat the coronavirus. Insilico's AI identified thousands of possible drug molecules over the course of four days. The company says the 100 most promising candidates will be synthesized, and that all of its research on the molecular structures will be published for other researchers to build on. Medical researchers and companies are fast-tracking the development and testing of treatments, with the US-based biotech company Gilead planning to begin immediate testing of a new antiviral drug within the Wuhan region.

Insilico focused its research on an enzyme called 3C-like protease, which the coronavirus relies on to reproduce and spread. According to Insilico, it chose this particular enzyme because it is quite similar to other viral proteases whose structures have already been documented, and because ShanghaiTech University had developed a model of the 2019-nCoV 3C-like protease. In the span of four days, Insilico generated hundreds of thousands of candidate molecules and selected only the hundred or so most likely to be useful. The results of the research were recently published in the preprint repository bioRxiv and on Insilico's website.


Healthcare

AI Being Used to Analyze Retinal Images


In a newly developed approach, artificial intelligence (AI) is being used to analyze retinal images. The system could help doctors select the best treatment for patients suffering vision loss from diabetic macular edema, a diabetes complication that is a leading cause of vision loss among working-age adults.

Anti-vascular endothelial growth factor (anti-VEGF) agents are often used as a first line of defense against diabetic macular edema. The problem is that these agents do not work for everyone, so the patients who could benefit need to be identified in advance: the therapy requires multiple injections that are costly and burdensome for both patients and physicians.

The leader of the research team is Sina Farsiu from Duke University.

“We developed an algorithm that can be used to automatically analyze optical coherence tomography (OCT) images of the retina to predict whether a patient is likely to respond to anti-VEGF treatments,” she said. “This research represents a step toward precision medicine, in which such predictions help clinicians better select first-line therapies for patients based on specific disease conditions.”

The work was published in The Optical Society (OSA) journal Biomedical Optics Express, where Farsiu and her team demonstrated that the new algorithm can accurately predict whether a patient is likely to respond to anti-VEGF treatment from just one volumetric scan.

“Our approach could potentially be used in eye clinics to prevent unnecessary and costly trial-and-error treatments and thus alleviate a substantial treatment burden for patients,” Farsiu said. “The algorithm could also be adapted to predict therapy response for many other eye diseases, including neovascular age-related macular degeneration.”

The newly developed algorithm is based on a novel convolutional neural network (CNN) architecture. A CNN is a type of deep learning model that analyzes images by learning to assign importance to various features and objects within them. The researchers used the algorithm to examine images acquired with OCT, a noninvasive technology that produces high-resolution cross-sectional retinal images and is considered the standard of care for the assessment and treatment of many eye conditions.

“Unlike previously developed approaches, our algorithm requires OCT images from only a single pretreatment timepoint,” said Reza Rasti, first author of the paper and a postdoctoral scholar in Farsiu’s laboratory. “There’s no need for time-series OCT images, patient records or other metadata to predict therapy response.”

The new algorithm works by highlighting global structures in the OCT image while also enhancing local features from diseased regions, searching for CNN-encoded features that correlate with anti-VEGF response.
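For readers curious what a single-scan response predictor looks like in code, below is a minimal PyTorch sketch of a CNN that maps one OCT slice to a responder probability. The layer sizes, input resolution, and dummy input are illustrative assumptions, not the architecture described in the paper.

```python
# Minimal CNN sketch: one pretreatment OCT slice -> response probability.
import torch
import torch.nn as nn

class OCTResponseNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),  # global pooling over spatial dimensions
            nn.Flatten(),
            nn.Linear(32, 1),  # single logit: responder vs. non-responder
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = OCTResponseNet()
scan = torch.randn(1, 1, 224, 224)  # dummy grayscale OCT slice
prob = torch.sigmoid(model(scan))
print(f"Predicted probability of anti-VEGF response: {prob.item():.2f}")
```

The paper's actual network additionally combines global structures with lesion-focused local features, as described above; this sketch only shows the basic image-in, probability-out shape of the problem.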

The algorithm was tested with OCT images from 127 patients who had been treated for diabetic macular edema with three consecutive injections of anti-VEGF agents. It analyzed the OCT images taken prior to the injections, and its predictions were compared against OCT images taken after therapy, which showed the researchers whether the therapy had actually improved the condition.

The algorithm predicted which patients would respond to treatment with 87 percent accuracy, achieving an average precision and specificity of 85 percent and a sensitivity of 80 percent.
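For reference, all four of these figures come straight from the confusion matrix of predicted versus actual responders. A quick sketch, using made-up counts rather than the study's data:

```python
# Standard classification metrics from confusion-matrix counts.
# The tp/fp/tn/fn values below are invented for illustration.
def classification_metrics(tp, fp, tn, fn):
    return {
        "accuracy":    (tp + tn) / (tp + fp + tn + fn),
        "precision":   tp / (tp + fp),
        "sensitivity": tp / (tp + fn),  # true positive rate (recall)
        "specificity": tn / (tn + fp),  # true negative rate
    }

print(classification_metrics(tp=40, fp=7, tn=70, fn=10))
```

Sensitivity here is the share of true responders the model catches, while specificity is the share of non-responders it correctly rules out; both matter when the goal is to avoid unnecessary injections.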

The researchers now want to confirm these findings in a larger observational trial of patients who have not yet undergone treatment.

 
