A team of researchers at Columbia Engineering, led by Sunil Agrawal, professor of mechanical engineering and of rehabilitation and regenerative medicine, has turned a simple cane into a robotic device that offers light-touch assistance. The new device, called CANINE, can assist elderly people and those with impaired mobility. The team added electronics and computation to the classic cane. The study was published in IEEE Robotics and Automation Letters.
The team has shown how an autonomous robot can “walk” with a human and provide light-touch support. It is similar to how a person, when trying to regain their balance, touches the person next to them for support.
Agrawal, who is also a member of Columbia University’s Data Science Institute, spoke about the new technology designed to assist those with mobility problems.
“Often, elderly people benefit from light hand-holding for support,” he said. “We have developed a robotic cane attached to a mobile robot that automatically tracks a walking person and moves alongside. The subjects walk on a mat instrumented with sensors while the mat records step length and walking rhythm, essentially the space and time parameters of walking, so that we can analyze a person’s gait and the effects of light touch on it.”
The robotic cane, or CANINE, is a type of mobile assistant. It supports a person’s proprioception, the body’s awareness of its own position and movement during activities such as walking, which in turn helps the individual’s stability and balance.
Joel Stein, the Simon Baruch Professor of Physical Medicine and Rehabilitation and co-author of the study, also spoke about the new technology. Stein is chair of the Department of Rehabilitation and Regenerative Medicine at Columbia University Irving Medical Center.
“This is a novel approach to providing assistance and feedback for individuals as they navigate their environment,” Stein said. “This strategy has potential applications for a variety of conditions, especially individuals with gait disorders.”
The team tested the new CANINE device with 12 healthy young people. Each wore virtual reality glasses that created a visual environment that shook the user side-to-side and forward-backward, causing them to become unbalanced.
While visually perturbed in this way, the individuals walked 10 laps on the instrumented mat, first without the CANINE device and then with it. The researchers found that the device’s light-touch support helped the individuals narrow their strides. Narrower strides mean a smaller base of support, which resulted in smaller oscillation of the center of mass and greater stability while walking.
“The next phase in our research will be to test this device on elderly individuals and those with balance and gait deficits to study how the robotic cane can improve their gait,” said Agrawal. “In addition, we will conduct new experiments with healthy individuals, where we will perturb their head-neck motion in addition to their vision to simulate vestibular deficits in people.”
Agrawal is also the director of Robotics and Rehabilitation (ROAR) Laboratory.
Mobility impairment affects 4% of people aged 18 to 48, but it is a much bigger problem for older individuals: 35% of people between the ages of 75 and 80 suffer from mobility impairment. This causes a loss of independence as well as a lower quality of life.
As the population ages and the proportion of older people grows relative to younger ones, this problem will only increase.
“We will need other avenues of support for an aging population,” Agrawal said. “This is one technology that has the potential to fill the gap in care fairly inexpensively.”
Brain Cancer Detected By AI Analyzing Blood Test Results
Recently, researchers associated with the University of Strathclyde, Glasgow, patented a method of analyzing blood samples to detect brain cancer. The researchers at ClinSpec Diagnostics Limited combined spectroscopy and AI algorithms to detect brain cancer from blood samples. As reported by Psychology Today, the research was recently published in the journal Nature Communications, and according to the research team, the work represents a significant development in the use of clinical spectroscopy and AI.
The research presented in the study could make catching brain cancer much simpler. Frequently occurring headaches may be a symptom of brain cancer, but even though headaches are very common, brain cancer is not. Clinicians need a better way of discerning which headaches are cause for concern and which are benign. Doctors must be able to carry out some form of triage and reduce the time and resources spent diagnosing brain cancer with costly brain imaging scans. If a simple blood test could give clinicians reliable information to help them diagnose brain cancer, lives could be saved.
It was for this reason that the ClinSpec researchers aimed to develop an algorithm that would help doctors sort through the cases of possible brain cancer patients, distinguishing them from other causes of headaches.
One of the common methods of detecting diseases like cancer is the liquid biopsy, a biopsy performed on bodily fluids instead of tissue samples. The liquid biopsy market is growing swiftly, hitting an estimated $2.4 billion in size according to market research from BC Research LLC. Liquid biopsy is effective at detecting signs of cancer, as it can detect cell-free circulating tumor DNA, or ctDNA, and circulating tumor cells, or CTCs. However, the researchers from ClinSpec used a different method of analysis, performing spectroscopy on blood samples to find biochemical markers indicative of cancer.
Spectroscopy is the process of using electromagnetic radiation to identify targeted chemical components. Light is split into its component frequencies, and these frequencies interact differently with different chemicals. The ClinSpec research team used infrared light to create representations of blood samples, a technique dubbed attenuated total reflection Fourier transform infrared (ATR-FTIR) spectroscopy. The research team stated that the technique is non-destructive and non-invasive, and that it reliably creates a biochemical profile of a sample without the need for extensive sample preparation. The representations of the blood samples could then be analyzed for aberrations and checked for possible signs of cancer.
To analyze the data, a support vector machine was used to create a classification model. Support vector machines are used for classification and regression analysis; they operate by drawing decision boundaries, lines that separate a dataset into multiple classes. The algorithm tries to maximize the distance between the dividing line and the data points on either side of it, and the greater that distance, the more confident the classifier is.
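As a rough illustration of the idea only (not the study's actual pipeline, and using made-up two-feature "spectra" rather than real FTIR data), a linear SVM can be trained from scratch by sub-gradient descent on the hinge loss:

```python
def train_linear_svm(X, y, lam=0.01, epochs=500):
    """Linear SVM trained with Pegasos-style sub-gradient descent
    on the hinge loss. X: feature vectors, y: labels in {-1, +1}."""
    X = [list(x) + [1.0] for x in X]       # constant feature acts as the bias
    w = [0.0] * len(X[0])
    t = 0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            t += 1
            eta = 1.0 / (lam * t)          # decaying step size
            score = sum(wj * xj for wj, xj in zip(w, xi))
            if yi * score < 1:             # inside the margin: hinge gradient
                w = [(1 - eta * lam) * wj + eta * yi * xj
                     for wj, xj in zip(w, xi)]
            else:                          # outside: only the regularizer acts
                w = [(1 - eta * lam) * wj for wj in w]
    return w

def predict(w, x):
    x = list(x) + [1.0]
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) >= 0 else -1

# Toy two-feature "spectra": cancer-like samples (+1) vs. controls (-1).
X = [[0.9, 0.8], [0.85, 0.75], [0.8, 0.9],
     [0.2, 0.1], [0.1, 0.2], [0.15, 0.15]]
y = [1, 1, 1, -1, -1, -1]
w = train_linear_svm(X, y)
```

Here `predict` returns +1 for points resembling the cancer-like cluster and -1 otherwise; in practice one would use a mature library such as scikit-learn rather than this minimal sketch.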
The research team stated that their method of analysis was able to effectively distinguish cancer samples from non-cancer samples, with a sensitivity of 93.2% and a specificity of 92.8%. According to MDDI Online, the researchers report that when analyzing samples from a group of 104 different patients, their AI-assisted method distinguished healthy patients from cancer patients around 86% of the time.
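Those two figures follow directly from the standard confusion-matrix definitions. A minimal sketch, using hypothetical counts chosen only to illustrate the formulas (not the study's raw data):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN): share of true cancer cases flagged.
    Specificity = TN / (TN + FP): share of healthy cases correctly cleared."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical confusion-matrix counts for a 200-sample illustration:
sens, spec = sensitivity_specificity(tp=93, fn=7, tn=92, fp=8)
# sens = 0.93, spec = 0.92
```

A high-sensitivity test misses few cancers, which is what matters for a triage tool, while high specificity keeps healthy patients from being sent for unnecessary imaging.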
The researchers explained in the study:
“This work presents a step in the translation of ATR-FTIR spectroscopy into the clinic. This step towards high-throughput analysis has implications in the field of IR spectroscopy as well as the clinical environment. Analysis of blood serum using this technique would fit ideally in the clinical pathway as a triage tool for brain cancer.”
Foodvisor App Uses Deep Learning to Monitor & Maintain Your Diet
Foodvisor, a startup that launched its AI-based app in France in 2018, is about to change the way you track and keep to your diet plans. As TechCrunch explains, the Foodvisor app “helps you log everything you eat in order to lose weight, follow a diet or get healthier.” Users can also input data simply by capturing a photo of the food they are about to eat.
The app works by using deep learning “to enable image recognition to detect what you’re about to eat. In addition to identifying the type of food, the app tries to estimate the weight of each item.” Using autofocus data, it also evaluates the distance between the phone and the plate of food.
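One plausible way a size-from-distance estimate like this could work is the pinhole camera model. The sketch below is purely illustrative: every parameter (focal length, pixel pitch, the per-food grams-per-cm² prior, the function name) is an assumption for demonstration, not Foodvisor's actual method:

```python
def estimate_weight_g(pixel_area, distance_mm, focal_mm,
                      pixel_size_mm, grams_per_cm2):
    """Toy sketch: convert a food item's segmented pixel area into a weight
    estimate via the pinhole camera model plus a per-food density prior."""
    # Real-world length covered by one pixel at the plate's distance:
    mm_per_pixel = pixel_size_mm * distance_mm / focal_mm
    area_cm2 = pixel_area * (mm_per_pixel ** 2) / 100.0  # mm^2 -> cm^2
    return area_cm2 * grams_per_cm2                      # assumed g per cm^2

# e.g. a 50,000-pixel region seen from 300 mm with a 4 mm lens,
# 0.0015 mm pixels, and an assumed 1.2 g/cm^2 prior for that food:
grams = estimate_weight_g(50_000, 300, 4.0, 0.0015, grams_per_cm2=1.2)
```

The geometric part is standard; the hard problems in practice are segmenting the food region accurately and learning a realistic weight prior per food class, which is where the deep learning comes in.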
Foodvisor also allows its users to manually correct any data before a meal is logged. For many people, nutrition trackers turn out to be too demanding, and the idea behind Foodvisor is to make “the data entry process as seamless as possible.”
Finally, it produces a list of nutrition facts about what has just been consumed – calories, proteins, carbs, fats, fibers, and other essential information. The users can then set their own goals, log their nutritional activities and monitor their progress.
The app itself is free to use, but it also offers a premium subscription that varies between $5 and $10. These subscriptions offer more analysis and diet plans, the main feature being “that you can chat with a registered dietitian/nutritionist directly in the app.”
So far, Foodvisor has gathered 1.8 million downloads, is available on iOS and Android in French, English, German and Spanish, and has raised $1.5 million (€1.4 million). Co-founder and CMO Aurore Tran says the company has “enriched [its] database to better target the American market.”
The trend of using AI systems in food apps started back in 2015, when Google began developing Im2Calories, a system that counted calories based on Instagram photos. It was followed, as The Daily Meal reported, by “researchers from MIT’s Computer Science and Artificial Intelligence Laboratory and the Qatar Computing Research Institute,” who “created Pic2Recipe, an app that uses artificial intelligence to predict ingredients and suggests similar recipes based on looking at a picture of food.”
The same team is still trying to “improve the system to understand images of food in more detail, including identifying cooking and preparation methods. They are also interested in recommending recipes based on dietary preferences and available ingredients.”
But as AI capabilities develop, it seems that Foodvisor has taken the idea one step further.
AI Model Can Predict Clinical Application Of Medical Research
When it comes to biomedical research, hundreds of research papers are published every day. Yet it can be difficult to predict which research will make it out of the lab and lead to clinical applications. Recently, a machine learning model developed by the Office of Portfolio Analysis, or OPA, at the National Institutes of Health (NIH) was able to estimate the likelihood of a biomedical research paper being used in clinical trials or guidelines. According to the OPA, the citation of a research article by a clinical trial is an early indicator of translational progress, the use of research findings as a potential treatment for disease.
As reported by AI Trends, the researchers at the OPA created a new metric for their machine learning model to use, dubbed Approximate Potential to Translate, or APT. According to OPA Director George Santangelo, biomedical translation can be predicted from the scientific community’s reaction to the research papers that a project is based on. Santangelo said that there are distinct trajectories in the flow of knowledge which can predict how likely a paper is to influence clinical research.
The creation of the APT metric coincides with the release of the NIH’s second version of the iCite tool. iCite is a browser-based application that provides information about journal publications based on their specific field of analysis. Moving forward, the iCite tool will return the APT values for queries.
The process of adapting laboratory research into clinical applications is a complex task that often takes years. Attempts have been made to expedite this process, but due to the many variables involved, it can be difficult to assess translational progress. As explained by Santangelo, machine learning algorithms are a powerful tool that could enable clinicians to better understand which research papers are likely to prove useful in the clinic. As the team of researchers experimented with and refined their APT metric, useful predictive patterns began to materialize. Santangelo said:
“I think the most important one that we focus on is the diversity of interest from across the fundamental to clinical research axis. When people across that axis — from fundamental scientists often in the same field as the work that’s being published, all the way to people in the clinic — show an interest in the form of citations in those papers, then the likelihood of eventual citation by a clinical trial or guideline is quite high.”
According to Santangelo, the selected features show genuine promise in predicting the translation from research paper to clinical application. Data on a publication collected over at least two years from its publication date often give accurate predictions about the paper’s eventual citation in a clinical article.
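To make the "diversity of interest" idea concrete, here is a toy sketch, not the OPA's actual model: citation diversity measured as entropy over the citing articles' positions on the fundamental-to-clinical axis, fed through an illustrative logistic score (the weight and bias are made up, not fitted values):

```python
import math

def citation_diversity(citing_categories):
    """Shannon entropy over citing-article categories -- higher when
    interest spans the fundamental-to-clinical axis."""
    total = len(citing_categories)
    probs = [citing_categories.count(c) / total for c in set(citing_categories)]
    return -sum(p * math.log2(p) for p in probs)

def translation_score(diversity, weight=1.5, bias=-1.0):
    """Toy logistic model mapping diversity to a translation likelihood."""
    return 1.0 / (1.0 + math.exp(-(weight * diversity + bias)))

# A paper cited only by fundamental work vs. one cited across the axis:
narrow = citation_diversity(["fundamental"] * 10)
broad = citation_diversity(["fundamental"] * 4 +
                           ["translational"] * 3 + ["clinical"] * 3)
```

Under this toy model, the paper with citations spread across the axis gets both a higher diversity value and a higher translation score, mirroring the pattern Santangelo describes.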
Santangelo explained that thanks to the new metric and machine learning algorithms the researchers can have more complete knowledge of what is going on in the literature and that this allows better insight into the research areas which are more likely to appeal to clinical scientists.
Santangelo also explained that their algorithm’s integration into the iCite tool is intended to leverage the free, open nature of the NIH’s Open Citation Collection database.
The NIH Open Citation Collection database currently comprises over 420 million citation links and is growing. The Santangelo team’s algorithm will present the APT values for these citations when iCite 2.0 launches.
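For readers who want to consume such values programmatically, a publication record carrying an APT field might be parsed as below. The response shape and field names here are assumptions for illustration, so check the iCite API documentation before relying on them:

```python
import json

# A hypothetical iCite-style JSON response for one publication;
# the real field names and structure may differ.
sample = json.loads("""
{"data": [
  {"pmid": 12345678, "year": 2016, "citation_count": 42, "apt": 0.75}
]}
""")

def apt_values(response):
    """Map each publication's PMID to its APT value."""
    return {rec["pmid"]: rec["apt"] for rec in response["data"]}
```

Keeping the extraction in a small helper like this makes it easy to adapt if the field names in the live service turn out to differ.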
Many databases are restrictive and proprietary, and according to Santangelo, these barriers inhibit collaborative research. Santangelo argues that there is no good justification for keeping the data behind a paywall, and that because their algorithm is meant to let others see the calculated APT values, it would not be beneficial to build on proprietary data sources.