Recently, researchers associated with the University of Strathclyde, Glasgow, patented a method of analyzing blood samples to detect brain cancer. The researchers at ClinSpec Diagnostics Limited combined spectroscopy with AI algorithms to detect brain cancer from blood samples. As reported by Psychology Today, the research was recently published in the journal Nature Communications, and according to the research team, the work represents a significant development in the clinical use of spectroscopy and AI.
The research presented in the study could make detecting brain cancer much simpler. Frequent headaches can be a symptom of brain cancer, but while headaches are very common, brain cancer is not. Clinicians need a better method of discerning which headaches are cause for concern and which are benign. Doctors must be able to carry out some form of triage and reduce the time and resources spent diagnosing brain cancer with costly brain imaging scans. If a simple blood test could give clinicians reliable information to help diagnose brain cancer, lives could be saved.
It was for this reason that the ClinSpec researchers aimed to develop an algorithm that would help doctors sort through the cases of possible brain cancer patients, distinguishing them from other causes of headaches.
One of the common methods of detecting diseases like cancer is liquid biopsy, in which bodily fluids are analyzed instead of tissue samples. The liquid biopsy market is growing swiftly, hitting an estimated $2.4 billion in size according to market research from BC Research LLC. Liquid biopsy is effective at detecting signs of cancer because it can pick up cell-free circulating tumor DNA (ctDNA) and circulating tumor cells (CTCs). However, the researchers from ClinSpec used a different method of analysis, performing spectroscopy on blood samples to find biochemical markers indicative of cancer.
Spectroscopy is the process of probing a sample with electromagnetic radiation to identify its chemical components. Light is split into its component frequencies, and different chemicals absorb and reflect those frequencies differently. The ClinSpec research team used infrared light to create representations of blood samples, a technique dubbed attenuated total reflection Fourier transform infrared (ATR-FTIR) spectroscopy. The research team described the technique as non-destructive and non-invasive, reliably producing a biochemical profile of a sample without extensive sample preparation. The resulting spectral representations of the blood samples could then be analyzed for aberrations that might signal cancer.
To analyze the data, a support vector machine (SVM) was used to create a classification model. Support vector machines are used for classification and regression analysis, and they operate by drawing decision boundaries, lines or hyperplanes that separate a dataset into multiple classes. The algorithm tries to maximize the distance, or margin, between the dividing boundary and the nearest data points on either side; the larger the margin, the more confident the classifier is.
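The margin-maximizing classifier described above can be sketched in a few lines with scikit-learn. This is a generic illustration on synthetic data, not the ClinSpec team's actual spectra or pipeline; the data shapes and class shift below are invented for the example.

```python
# Minimal SVM classification sketch on synthetic "spectra"
# (invented data; not the ClinSpec study's actual samples).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Fake spectra: 200 samples x 50 wavenumber bins, where the two
# classes differ slightly in a handful of bins.
X = rng.normal(size=(200, 50))
y = np.repeat([0, 1], 100)      # 0 = non-cancer, 1 = cancer
X[y == 1, :5] += 1.5            # shift a few bins for the "cancer" class

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

# A linear kernel draws a single separating hyperplane and
# maximizes the margin around it.
clf = SVC(kernel="linear", C=1.0).fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.2f}")
```

In practice, the distance of a new sample from the decision boundary (`clf.decision_function`) gives a rough measure of how confidently it is classified.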
The research team stated that their method of analysis was able to effectively distinguish cancer samples from non-cancer samples, with a sensitivity of 93.2% and a specificity of 92.8%. According to MDDI Online, the researchers report that when analyzing samples from a group of 104 different patients, their AI-assisted method was able to distinguish healthy patients from cancer patients around 86% of the time.
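Sensitivity and specificity are simple ratios derived from a binary confusion matrix. The counts below are made up for illustration; the study reports only the final rates, not its raw confusion matrix.

```python
# Sensitivity (true positive rate) and specificity (true negative
# rate) from confusion-matrix counts. The counts are illustrative,
# not the study's actual data.
def sensitivity_specificity(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)  # fraction of cancer cases caught
    specificity = tn / (tn + fp)  # fraction of healthy cases cleared
    return sensitivity, specificity

sens, spec = sensitivity_specificity(tp=90, fn=10, tn=95, fp=5)
print(f"sensitivity={sens:.1%}, specificity={spec:.1%}")
```

For a triage tool, high sensitivity matters most: a missed cancer case (false negative) is far costlier than an unnecessary follow-up scan (false positive).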
The researchers explained in the study:
“This work presents a step in the translation of ATR-FTIR spectroscopy into the clinic. This step towards high-throughput analysis has implications in the field of IR spectroscopy as well as the clinical environment. Analysis of blood serum using this technique would fit ideally in the clinical pathway as a triage tool for brain cancer.”
London-Based Startup LabGenius Raises $10M
The London-based startup LabGenius announced that they raised over $10 million in Series A Funding. They are a drug discovery company that utilizes artificial intelligence (AI), robotic automation, and synthetic biology. Their main focus is to find novel protein therapeutics.
According to Dr James Field, CEO and Founder of LabGenius, “Protein therapeutics have an unparalleled potential to both treat disease and alleviate human suffering. By transforming how these drugs are discovered, we have a shot at improving the lives of countless people. Being able to robustly engineer novel therapeutic proteins has immense commercial and societal value. The discovery of protein therapeutics has historically been highly artisanal, relying heavily on humans for both experimental design and execution. This dependence has proved limiting because, as a species, we’re cognitively incapable of fully grasping the complexity of biological systems.”
The Series A investment round was led by Lux Capital and Obvious Ventures. Other participants included Felicis Ventures, Inovia Capital, Air Street Capital, and other existing investors. Recursion Pharmaceuticals CEO and founder Chris Gibson and Inovia Capital General Partner Patrick Pichette, formerly CFO of Google, are also investing.
According to the company, they will use the capital to “scale their team, expand the scope of its discovery platform, and initiate an internal asset development program.” One of their main goals is to evolve novel antibody fragments. These could be used to treat certain conditions that can’t rely on conventional antibody formats.
Lux Capital and Obvious Ventures
Zavain Dar, Partner at Lux Capital, and Nan Li, Managing Director at Obvious Ventures, have been involved in the life science startup environment for some time. Their investment strategy dates back nine years, including a 2013 investment in Zymergen, a California-based molecule discovery and manufacturing company. In 2016, they invested in Recursion Pharmaceuticals, which went on to raise a $156 million Series C in July.
Their strategy follows a path: starting with industrial biotech technology at Zymergen, moving to root-cause biology discovery with Recursion Pharmaceuticals, and culminating in the creation of composition-of-matter IP with LabGenius.
Dar explained his reasoning behind choosing LabGenius over other startups.
“We scoured the globe, and didn’t want to be constrained by what happened to be in our backyard,” he says. “They are leading the pack…and now with backing and pharma partnerships, should be in a good position.”
Humans No Longer Sole Agents of Innovation
When speaking to TechCrunch, Dr James Field said, “My central thesis, the thing that’s really the driving force behind the company, is the conviction that we’re entering an age in which humans will no longer be the sole agents of innovation. Instead, new knowledge, technologies and sophisticated real-world products will be invented by smart robotic platforms called empirical computation engines. An empirical computation engine is an artificial system capable of recursively and intelligently searching a solution space.”
The company has created a discovery platform called EVA, which integrates multiple new technologies from different fields, including artificial intelligence. After discovery and characterisation, LabGenius then sends its proprietary molecules to clinics.
Field explains the company’s EVA technology as a “machine learning-driven, robotic platform” that is capable of “designing, conducting and critically learning from its own experiments.”
“For decades, scientists, engineers and technologists have dreamt of building ‘robot scientists’ capable of autonomously discovering new knowledge, technologies and sophisticated real-world products,” says Field.
“For protein engineers, that dream has now entered the realm of possibility. The rapid pace of technological development across the fields of synthetic biology, robotic automation and ML has given us access to all the essential ingredients required to create a smart robotic platform capable of intelligently discovering novel therapeutic proteins.”
Foodvisor App Uses Deep Learning to Monitor & Maintain Your Diet
Foodvisor, a startup that launched its AI-based app in France in 2018, is about to change the way you track and maintain your diet plans. As TechCrunch explains, the Foodvisor app “helps you log everything you eat in order to lose weight, follow a diet or get healthier.” Users can also input additional data by capturing a photo of the food they are about to eat.
The app works by using deep learning “to enable image recognition to detect what you’re about to eat. In addition to identifying the type of food, the app tries to estimate the weight of each item.” Using the camera’s autofocus data, it also estimates the distance between the phone and the plate of food.
Foodvisor also allows users to manually correct any data before a meal is logged. For many people tracking their diets, nutrition trackers turn out to be too demanding, and the idea behind Foodvisor is to make “the data entry process as seamless as possible.”
Finally, it produces a list of nutrition facts about what has just been consumed – calories, proteins, carbs, fats, fibers, and other essential information. The users can then set their own goals, log their nutritional activities and monitor their progress.
The app itself is free to use, but it also offers premium subscriptions ranging between $5 and $10. These subscriptions offer more analysis and diet plans, with the main feature being “that you can chat with a registered dietitian/nutritionist directly in the app.”
So far, Foodvisor has gathered 1.8 million downloads and is available on iOS and Android in French, English, German and Spanish. It has raised $1.5 million (€1.4 million) to date. Co-founder and CMO Aurore Tran says the company has “enriched [its] database to better target the American market.”
The trend of using AI in food apps started back in 2015, when Google began developing Im2Calories, a system that counted calories based on Instagram photos. Then, as The Daily Meal reported, “researchers from MIT’s Computer Science and Artificial Intelligence Laboratory and the Qatar Computing Research Institute created Pic2Recipe, an app that uses artificial intelligence to predict ingredients and suggests similar recipes based on looking at a picture of food.”
The same team is still trying to “improve the system to understand images of food in more detail, including identifying cooking and preparation methods. They are also interested in recommending recipes based on dietary preferences and available ingredients.”
But as AI capabilities develop, it seems that Foodvisor has taken the idea one step further.
AI Model Can Predict Clinical Application Of Medical Research
When it comes to biomedical research, there are hundreds of research papers being published every day. Yet it can be difficult to predict which research will make it out of the lab and lead to clinical applications. Recently, a machine learning model developed by the Office of Portfolio Analysis (OPA) at the National Institutes of Health (NIH) was able to determine the likelihood of a biomedical research paper being used in clinical trials or guidelines. According to the OPA, the citation of a research article in a clinical trial is an early indicator of translational progress, the use of research findings as a potential treatment for disease.
As reported by AI Trends, the researchers at the OPA created a new metric for their machine learning model to use, dubbed Approximate Potential to Translate, or APT. According to OPA Director George Santangelo, biomedical translation can be predicted based on the reaction of the scientific community to the research papers that a project is based on. Santangelo said that there are distinct trajectories in the flow of knowledge which can predict whether a paper will succeed or fail in influencing clinical research.
The creation of the APT metric coincides with the release of the NIH’s second version of the iCite tool. iCite is a browser-based application that provides information about journal publications based on their specific field of analysis. Moving forward, the iCite tool will return the APT values for queries.
The process of adapting laboratory research into clinical applications is a complex task that often takes years. Attempts have been made to expedite this process, but due to the many variables involved, it can be difficult to assess translational progress. As explained by Santangelo, machine learning algorithms are a powerful tool that could enable clinicians to better understand which research papers are likely to prove useful in the clinic. As the team of researchers experimented with and refined their APT metric, useful predictive patterns began to materialize.
“I think the most important one that we focus on is the diversity of interest from across the fundamental to clinical research axis. When people across that axis — from fundamental scientists often in the same field as the work that’s being published, all the way to people in the clinic — show an interest in the form of citations in those papers, then the likelihood of eventual citation by a clinical trial or guideline is quite high.”
According to Santangelo, the selected features show genuine promise in predicting the translation from research paper to a clinical method. Data on a publication collected over at least two years from the date of publication often give accurate predictions about a paper’s eventual citation in a clinical article.
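The general idea, predicting eventual clinical citation from early citation features, can be sketched as a standard classification problem. The feature names, simulated data, and logistic-regression model below are all assumptions for illustration; the OPA has not published its exact APT model or feature set here.

```python
# Generic sketch of predicting whether a paper will eventually be
# cited by a clinical trial or guideline, from early citation
# features. Features and data are invented; this is NOT the OPA's
# actual APT model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 500

# Hypothetical features gathered ~2 years post-publication:
# [total citations, fraction of citing papers that are clinical,
#  diversity of citing fields (0-1)]
X = rng.random((n, 3))

# Simulated labels: clinical interest and field diversity drive
# eventual translation, mirroring the pattern Santangelo describes.
logits = 3.0 * X[:, 1] + 2.0 * X[:, 2] - 2.5
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

model = LogisticRegression().fit(X, y)

# The predicted probability plays a role analogous in spirit to an
# APT value: a score for eventual clinical citation.
apt_like = model.predict_proba(X[:5])[:, 1]
print(np.round(apt_like, 2))
```

Note how the fitted model recovers a positive weight on the clinical-citation-fraction feature, matching the intuition that interest from across the fundamental-to-clinical axis signals translational potential.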
Santangelo explained that thanks to the new metric and machine learning algorithms the researchers can have more complete knowledge of what is going on in the literature and that this allows better insight into the research areas which are more likely to appeal to clinical scientists.
Santangelo also explained that the algorithm's integration into the iCite tool is intended to leverage the free, open nature of the NIH’s Open Citation Collection database.
The NIH Open Citation Collection database currently comprises over 420 million citation links and is growing. The Santangelo team’s algorithm will present APT values for these citations when iCite 2.0 launches.
Many databases are restrictive and proprietary, and according to Santangelo, these barriers inhibit collaborative research. Santangelo argues that there isn’t a good justification for keeping the data behind a paywall, and because the algorithm is meant to let others see the calculated APT values, relying on proprietary data sources would be counterproductive.