In a recent interview with the Michigan Daily, two University of Michigan professors – Melvin McInnis and Emily Provost – explained how AI could help people suffering from bipolar disorder.
McInnis is a professor of bipolar disorder and depression who has researched the conditions for over 30 years. Meanwhile, Provost is an associate professor of computer science and electrical engineering. As reported by the Michigan Daily, the two researchers recently gave a talk called “Artificial Intelligence, Personalized Technology, and Mental Health” in Ann Arbor, Michigan.
McInnis and Provost are aiming to create an AI that can help diagnose sufferers of bipolar disorder. McInnis explained that changes in speech patterns are among the symptoms of bipolar disorder. An AI could potentially recognize subtle changes in speech patterns and facilitate the diagnosis of bipolar disorder. McInnis added that a system that can pick up on psychological markers in speech could be used to create an early warning app that alerts sufferers and their loved ones that an episode could be impending.
Relatives of those who suffer from bipolar disorder could relax and go about their day knowing that they will be notified if signs of an impending bipolar episode have been noticed by the AI. Meanwhile, the system could help bipolar sufferers gain more independence and enable them to get prompt help once notified about a possible bipolar episode.
“Your device can give an alert and say, ‘Maybe you should talk to your doctor soon’. You can share this information with your care team, with your support network, so that you can be part of a team that’s helping you stay healthy longer.”
One of the major challenges when it comes to implementing a system that relies on detecting signs of mental illness is that cultural differences around the world can impact how signs and symptoms manifest. There will be a different baseline of “normal” for different cultures. However, if given the right training data, the AI-driven diagnosis system could hopefully compensate for these differences.
The work McInnis and Provost are doing could save lives. Catching the signs of a developing mental health crisis could potentially prevent suicide attempts, as McInnis acknowledges that around 20% of the bipolar patients he works with end up committing suicide.
Other researchers are also experimenting with using AI to help improve the treatment and diagnosis of bipolar disorder. ZDNet recently reported that Dr. Amir Dezfouli, associated with the Commonwealth Scientific and Industrial Research Organisation (CSIRO), created an AI-powered game that can improve the diagnosis rate of bipolar disorder and depression. According to Dezfouli, there is currently an approximately 60% chance of misdiagnosing bipolar disorder as depression, but machine learning algorithms can improve the diagnosis rate.
Dezfouli and others designed a game that monitors a patient’s behavior with metrics known to predict bipolar disorder. While these metrics can be hard for even trained clinicians to interpret, the machine learning algorithms used to analyze the data successfully reduced the misdiagnosis rate to between 20% and 40%.
Meanwhile, SilverCloud Health and Microsoft have teamed up to provide better mental health care to people online. SilverCloud is a digital mental health platform with what is currently the biggest real-world patient user-base in the world, according to PharmaTimes. SilverCloud describes itself as an evidence-based mental health service that hopes to provide its users with mental health resources in an efficient manner, giving patients clinical services at an affordable price.
Microsoft will be collaborating with SilverCloud to use machine learning and AI algorithms to enable the delivery of personalized mental healthcare for users of SilverCloud Health’s services. The algorithms that will be used on SilverCloud’s platform could enable early interventions for those who suffer from mental health conditions.
Brain Cancer Detected By AI Analyzing Blood Test Results
Recently, researchers associated with the University of Strathclyde, Glasgow, patented a method of analyzing blood samples to detect brain cancer. The researchers at ClinSpec Diagnostics Limited combined spectroscopy and AI algorithms to detect brain cancer based on blood biopsies. As reported by Psychology Today, the research was recently published in the journal Nature Communications, and according to the research team, the work represents a significant development in the utilization of clinical spectroscopy and AI.
The research presented in the study could make catching brain cancer much easier and simpler. Frequently occurring headaches may be a symptom of brain cancer, but even though headaches are very common, brain cancer is not. Clinicians need a better method of discerning which headaches are causes for concern and which are more benign. Doctors must be able to carry out some form of triage and reduce the amount of time and resources invested in diagnosing brain cancer with costly brain imaging scans. If a simple blood test could give clinicians reliable information that could help them diagnose cases of brain cancer, lives could be saved.
It was for this reason that the ClinSpec researchers aimed to develop an algorithm that would help doctors sort through the cases of possible brain cancer patients, distinguishing them from other causes of headaches.
One of the common methods of detecting diseases like cancer is liquid biopsy, performing a biopsy on body fluids instead of tissue samples. The liquid biopsy market is growing swiftly, hitting an estimated $2.4 billion in size according to market research from BC Research LLC. Liquid biopsy proves effective at detecting signs of cancer, as it is able to detect cell-free circulating tumor DNA, or ctDNA, and circulating tumor cells, or CTCs. However, the researchers from ClinSpec utilized a different method of analysis, performing spectroscopy on blood samples to find biochemical markers indicative of cancer.
Spectroscopy is the process of using electromagnetic radiation to identify targeted chemical components. Light is split into its component electromagnetic frequencies, and these frequencies react differently with different chemicals. The ClinSpec research team used infrared light to create representations of blood samples, a technique dubbed attenuated total reflection-Fourier transform infrared (ATR-FTIR) spectroscopy. The research team stated that the technique is non-destructive and non-invasive, and that it reliably creates a biochemical profile of a sample without the need for extensive sample preparation. The representations of the blood samples could then be analyzed for aberrations and checked for possible signs of cancer.
In order to analyze the data, a support vector machine was used to create a classification model. Support vector machines are used for classification and regression analysis, and they operate by drawing decision boundaries, hyperplanes that separate a dataset into classes. The algorithm tries to maximize the margin, the distance between the dividing boundary and the nearest data points on either side; the greater that distance, the more confident the classifier is.
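The study's actual spectra and model are not reproduced here, but the general approach can be sketched with scikit-learn. This is a minimal illustration using synthetic, randomly generated "spectra" in place of real ATR-FTIR data, with an artificial marker band injected into one class:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Hypothetical stand-in data: 200 "spectra" of 500 absorbance values each.
# Class-1 samples get a small systematic shift in one band,
# mimicking a biochemical marker of disease.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 500))
y = rng.integers(0, 2, size=200)
X[y == 1, 100:120] += 0.8  # inject a subtle "marker" band

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A linear-kernel SVM finds the maximum-margin hyperplane between classes.
clf = SVC(kernel="linear")
clf.fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```

A linear kernel is a reasonable starting point for high-dimensional spectral data, where the number of features exceeds the number of samples.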
The research team stated that their method of analysis was able to effectively distinguish cancer samples from non-cancer samples, with a sensitivity of 93.2% and a specificity of 92.8%. According to MDDI Online, the researchers report that when analyzing samples from a group of 104 different patients, their AI-assisted method was able to distinguish healthy patients from patients with cancer around 86% of the time.
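Sensitivity and specificity can be read straight off a confusion matrix. As a quick illustration of what those figures mean, here are made-up counts (not the study's raw data) chosen to land near the reported rates:

```python
# Hypothetical confusion-matrix counts for a binary cancer/non-cancer test.
# Illustrative numbers only, chosen to approximate the reported rates.
true_pos, false_neg = 41, 3    # cancer samples: correctly flagged / missed
true_neg, false_pos = 64, 5    # non-cancer samples: correctly cleared / false alarms

sensitivity = true_pos / (true_pos + false_neg)   # share of cancers detected
specificity = true_neg / (true_neg + false_pos)   # share of non-cancers cleared
accuracy = (true_pos + true_neg) / (true_pos + false_neg + true_neg + false_pos)

print(f"sensitivity={sensitivity:.1%} specificity={specificity:.1%} accuracy={accuracy:.1%}")
```

High sensitivity matters most for a triage tool like this one: a missed cancer (false negative) is far costlier than a false alarm that leads to an unnecessary scan.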
The researchers explained in the study:
“This work presents a step in the translation of ATR-FTIR spectroscopy into the clinic. This step towards high-throughput analysis has implications in the field of IR spectroscopy as well as the clinical environment. Analysis of blood serum using this technique would fit ideally in the clinical pathway as a triage tool for brain cancer.”
Foodvisor App Uses Deep Learning to Monitor & Maintain Your Diet
Foodvisor, a startup that launched its AI-based app in France in 2018, is about to change the way you track and keep your diet plans. As TechCrunch explains, the Foodvisor app “helps you log everything you eat in order to lose weight, follow a diet or get healthier.” Users can also input additional data by capturing a photo of the food they are about to eat.
The app works by using deep learning “to enable image recognition to detect what you’re about to eat. In addition to identifying the type of food, the app tries to estimate the weight of each item.” Using the camera’s autofocus data, it also estimates the distance between the phone and the plate of food.
Foodvisor also allows its users to manually correct any data before the meal is logged. For many people tracking their diet, nutrition trackers turn out to be too demanding, and the idea behind Foodvisor is to make “the data entry process as seamless as possible.”
Finally, it produces a list of nutrition facts about what has just been consumed – calories, proteins, carbs, fats, fibers, and other essential information. The users can then set their own goals, log their nutritional activities and monitor their progress.
The app itself is free to use, but it also offers a premium subscription that varies between $5 and $10. These subscriptions offer more analysis and diet plans, with the main feature being “that you can chat with a registered dietitian/nutritionist directly in the app.”
So far, Foodvisor has gathered 1.8 million downloads, is available on iOS and Android in French, English, German and Spanish, and has raised $1.5 million (€1.4 million). Co-founder and CMO Aurore Tran says the company has “enriched [its] database to better target the American market.”
The trend of using AI in food apps started back in 2015, when Google began developing Im2Calories, a system that counted calories based on Instagram photos. Then, as The Daily Meal reported, “researchers from MIT’s Computer Science and Artificial Intelligence Laboratory and the Qatar Computing Research Institute created Pic2Recipe, an app that uses artificial intelligence to predict ingredients and suggests similar recipes based on looking at a picture of food.”
The same team is still trying to “improve the system to understand images of food in more detail, including identifying cooking and preparation methods. They are also interested in recommending recipes based on dietary preferences and available ingredients.”
But as AI capabilities develop, it seems that Foodvisor has taken the idea one step further.
AI Model Can Predict Clinical Application Of Medical Research
When it comes to biomedical research, hundreds of research papers are published every day. Yet it can be difficult to predict which research will make it out of the lab and lead to clinical applications. Recently, a machine learning model developed by the Office of Portfolio Analysis, or OPA, at the National Institutes of Health (NIH) was able to determine the likelihood of a piece of biomedical research being used in clinical trials or guidelines. According to the OPA, the citation of a research article in a clinical trial is an early indicator of translational progress, or the use of research findings as a potential treatment for disease.
As reported by AI Trends, the researchers at the OPA created a new metric for their machine learning model to use, dubbed Approximate Potential to Translate, or APT. According to OPA Director George Santangelo, biomedical translation can be predicted based on the reaction of the scientific community to the research papers that a project is based on. Santangelo said that there are distinct trajectories for the flow of knowledge which can predict the success or failure of a paper in influencing clinical research.
The creation of the APT metric coincides with the release of the NIH’s second version of the iCite tool. iCite is a browser-based application that provides information about journal publications based on their specific field of analysis. Moving forward, the iCite tool will return the APT values for queries.
The process of adapting laboratory research into clinical applications is a complex task that often takes years. Attempts have been made to expedite this process, but due to the many variables involved, it can be difficult to assess translational progress. As explained by Santangelo, machine learning algorithms are a powerful tool that could enable clinicians to better understand which research papers are likely to prove useful in the clinic. As the team of researchers experimented with and refined their APT metric, useful predictive patterns began to materialize.
“I think the most important one that we focus on is the diversity of interest from across the fundamental to clinical research axis. When people across that axis — from fundamental scientists often in the same field as the work that’s being published, all the way to people in the clinic — show an interest in the form of citations in those papers, then the likelihood of eventual citation by a clinical trial or guideline is quite high.”
According to Santangelo, the selected features show genuine promise in predicting the translation from research paper to a clinical method. Data on a publication collected over at least two years from the date of publication often give accurate predictions about a paper’s eventual citation in a clinical article.
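The OPA has not published APT's internals in this article, but the general idea – predicting eventual clinical citation from early citation features such as citation volume and the mix of fundamental versus clinical citers – can be sketched with a simple classifier. Everything below is illustrative: the features, the synthetic labels, and the logistic-regression choice are assumptions, not the OPA's actual method.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features per paper, measured two years after publication:
#   [total citations, fraction of citing papers that are clinical]
# Label: 1 if the paper was eventually cited by a clinical trial or guideline.
rng = np.random.default_rng(1)
n = 300
total_cites = rng.poisson(20, size=n).astype(float)
clinical_frac = rng.uniform(0, 1, size=n)
X = np.column_stack([total_cites, clinical_frac])

# Assumed ground truth: papers drawing broad, clinically-leaning interest translate.
y = (clinical_frac * 0.8 + total_cites / 100 + rng.normal(0, 0.15, n) > 0.6).astype(int)

model = LogisticRegression().fit(X, y)

# Score a hypothetical new paper: 30 citations, 70% of them from clinical researchers.
prob = model.predict_proba([[30.0, 0.7]])[0, 1]
print(f"estimated translation probability: {prob:.2f}")
```

The real APT metric would draw on far richer citation-network features from the Open Citation Collection, but the shape of the problem is the same: early citation behavior in, translation probability out.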
Santangelo explained that thanks to the new metric and machine learning algorithms, the researchers have a more complete picture of what is going on in the literature, allowing better insight into which research areas are more likely to appeal to clinical scientists.
Santangelo also explained that their algorithm’s integration into the iCite tool is intended to leverage the free, open nature of the NIH’s Open Citation Collection database.
The NIH Open Citation Collection database currently comprises over 420 million citation links and is growing. The Santangelo team’s algorithm will present APT values for these citations when iCite 2.0 launches.
Many databases are restrictive and proprietary, and according to Santangelo, these barriers inhibit collaborative research. Santangelo opines that there isn’t a compelling justification for keeping the data behind a paywall, and that because their algorithm is supposed to let others see the calculated APT values, it wouldn’t be beneficial to use proprietary data sources.