
Robotic Cane Helps Individuals with Impaired Mobility

A team of researchers at Columbia Engineering, led by Sunil Agrawal, professor of mechanical engineering and of rehabilitation and regenerative medicine, has turned a simple cane into a robotic device offering light-touch assistance. The new device, called CANINE, can be used to assist elderly people and those with impaired mobility. The researchers added electronics and computation technology to the classic cane, and the study has been published in IEEE Robotics and Automation Letters.

The team has shown how an autonomous robot can “walk” alongside a human and provide light-touch support, much as a person trying to regain their balance might lightly touch the arm of someone next to them.

Agrawal, who is also a member of Columbia University’s Data Science Institute, spoke about the new technology developed to assist those with mobility problems.

“Often, elderly people benefit from light hand-holding for support,” he said. “We have developed a robotic cane attached to a mobile robot that automatically tracks a walking person and moves alongside. The subjects walk on a mat instrumented with sensors while the mat records step length and walking rhythm, essentially the space and time parameters of walking, so that we can analyze a person’s gait and the effects of light touch on it.” 
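To make the quote concrete, here is a minimal sketch of the two “space and time parameters” Agrawal describes: step length and walking rhythm (cadence), computed from the heel-strike positions and times an instrumented mat records. The sample data below is invented purely for illustration.

```python
# Sketch: spatiotemporal gait parameters from instrumented-mat heel strikes.
# The heel-strike data here is made up; a real mat would supply these arrays.
import numpy as np

heel_strike_times = np.array([0.0, 0.55, 1.12, 1.66, 2.21])  # seconds
heel_strike_pos = np.array([0.00, 0.62, 1.25, 1.86, 2.49])   # meters along mat

step_lengths = np.diff(heel_strike_pos)   # distance covered by each step (m)
step_times = np.diff(heel_strike_times)   # duration of each step (s)
cadence = 60.0 / step_times.mean()        # steps per minute

print(f"mean step length: {step_lengths.mean():.2f} m, "
      f"cadence: {cadence:.0f} steps/min")
```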

The robotic cane, or CANINE, is a type of mobile assistant. It supports a person’s proprioception, the body’s awareness of its own position and movement during activities such as walking, which helps the individual’s stability and balance.

Joel Stein, the Simon Baruch Professor of Physical Medicine and Rehabilitation and co-author of the study, also spoke about the new technology. Stein is chair of the Department of Rehabilitation and Regenerative Medicine at Columbia University Irving Medical Center.

“This is a novel approach to providing assistance and feedback for individuals as they navigate their environment,” Stein said. “This strategy has potential applications for a variety of conditions, especially individuals with gait disorders.” 

The team tested the CANINE device with 12 healthy young people. The participants wore virtual reality glasses that created a visual environment which shook side-to-side and forward-backward, causing them to become unbalanced.

While their vision was being perturbed, the individuals walked 10 laps on the instrumented mat, first without the CANINE device and then with it. The researchers found that the device’s light-touch support led the individuals to narrow their strides. The narrower strides represent a decrease in the base of support and a smaller oscillation of the center of mass, indicating an increase in stability while walking.

“The next phase in our research will be to test this device on elderly individuals and those with balance and gait deficits to study how the robotic cane can improve their gait,” said Agrawal. “In addition, we will conduct new experiments with healthy individuals, where we will perturb their head-neck motion in addition to their vision to simulate vestibular deficits in people.” 

Agrawal is also director of the Robotics and Rehabilitation (ROAR) Laboratory.

Mobility impairment affects 4% of people aged 18 to 48, but it is a much bigger problem for older individuals: 35% of people between the ages of 75 and 80 suffer from mobility impairment, which leads to a loss of independence and a lower quality of life.

As the population continues to age and the proportion of older people grows, this problem will only increase.

“We will need other avenues of support for an aging population,” Agrawal said. “This is one technology that has the potential to fill the gap in care fairly inexpensively.”

 


Team Develops Blood-Sampling Robot 

A team at Rutgers University has developed a blood-sampling robot able to perform as well as or better than humans. It was tested in the first human clinical trial of an automated blood-drawing and testing device.

Because the device can deliver results more quickly, healthcare professionals would spend less time sampling blood, allowing them to focus more on treating patients in hospitals and other settings.

The results, published in the journal Technology, were comparable to or exceeded clinical standards. The overall success rate across the 31 participants who had their blood drawn was 87%, and for the 25 participants with easily accessible veins it was 97%.

Within the device is an ultrasound image-guided robot that draws blood from veins. One possible development is a fully integrated device that includes a module to handle samples and a centrifuge-based blood analyzer, which could be used in ambulances, emergency rooms, clinics, doctors’ offices, hospitals, and at the bedside.
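As a rough illustration of one step such an image-guided system must perform, the sketch below locates a vein, which appears as a dark, roughly circular region in an ultrasound frame. Real systems like the Rutgers device rely on learned segmentation models; this simplified version merely thresholds a synthetic frame and takes the centroid of the dark pixels, and every value in it is invented.

```python
# Sketch: naive vein localization in a (synthetic) ultrasound frame.
# Veins are hypoechoic, i.e. they show up darker than surrounding tissue.
import numpy as np

def find_vein_center(frame, dark_threshold=0.25):
    """Return the (row, col) centroid of pixels darker than the threshold."""
    mask = frame < dark_threshold
    if not mask.any():
        return None  # no candidate vein found in this frame
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()

# Synthetic 64x64 "ultrasound" frame: bright tissue (0.8) with a dark
# circular vein of radius 5 pixels centered at row 40, column 22.
frame = np.full((64, 64), 0.8)
yy, xx = np.ogrid[:64, :64]
frame[(yy - 40) ** 2 + (xx - 22) ** 2 < 25] = 0.1

print(find_vein_center(frame))  # approximately (40.0, 22.0)
```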

Venipuncture, which involves inserting a needle into a vein to draw a blood sample or perform IV therapy, is the most common clinical procedure, with more than 1.4 billion performed annually in the United States. However, previous studies have shown that clinicians fail in 27% of patients without visible veins, 40% of patients without palpable veins, and 60% of emaciated patients.

Repeated failures to start an IV line increase the risk of phlebitis, thrombosis, and infections, and can force clinicians to target large veins or arteries, which is riskier and more costly. Because of this, venipuncture is one of the leading causes of injury to patients and clinicians. Difficult vein access can also increase procedure time by up to an hour, requires more staff, and is estimated to cost more than $4 billion a year in the United States.

Josh Leipheimer is a biomedical engineering doctoral student in the Yarmush lab at the Rutgers University-New Brunswick School of Engineering.

“A device like ours could help clinicians get blood samples quickly, safely and reliably, preventing unnecessary complications and pain in patients from multiple needle insertion attempts,” Leipheimer said. 

The team hopes the device can eventually be used in procedures such as IV catheterization, central venous access, dialysis, and placing arterial lines. They will now work to refine it and increase success rates in patients whose veins are difficult to access.

To improve its performance, data from this study will be used to enhance the artificial intelligence in the robot.

The Rutgers co-authors include Max L. Balter and Alvin I. Chen, both of whom graduated with doctorates; Enrique J. Pantin of Rutgers Robert Wood Johnson Medical School; Professor Kristen S. Labazzo; and principal investigator Martin L. Yarmush, the Paul and Mary Monroe Endowed Chair and Distinguished Professor in the Department of Biomedical Engineering. The study also had contributions from a researcher at the Icahn School of Medicine at Mount Sinai.

The device developed by the team at Rutgers is another example of how robotics and artificial intelligence are transforming the healthcare industry. Devices like these will greatly assist those working within healthcare, and they will make procedures and other forms of care more successful.

 


How AI Is Being Used In The Fight Against The Wuhan Coronavirus


Artificial intelligence is being leveraged in the fight against the Wuhan Coronavirus: researchers are employing it to track the spread of the disease and to research potential treatments for the virus.

The Wuhan Coronavirus emerged in China in December, and in the two months since then it has spread across China and to other parts of the globe. It’s still unknown just how contagious the virus is or how quickly it could spread, although there are currently more than 40,000 confirmed cases within China. To better understand how and how fast the virus might spread, researchers are employing machine learning algorithms on data pulled from social media sites and other parts of the web.

Over the course of the past week, the rate of infection seems to have decreased somewhat, but it’s unclear whether the disease is coming under control or new cases are becoming harder to detect. While other countries around the world have seen only a few cases of coronavirus compared to China, the world health community remains concerned about the virus’s ability to spread. Researchers are trying to get ahead of it by using machine learning and big data collected from the internet.

As reported by Wired, an international team of researchers has extracted data from various parts of the internet, including posts from doctors and medical groups, public health channels, social media posts, and news reports, compiling a database of text that might relate to the coronavirus. The researchers then analyze the data for signs that the virus could be spreading outside of China’s borders, using machine learning techniques to find relevant patterns that could hint at how the virus is behaving.

The researchers sift through social media posts looking for potential symptoms of coronavirus, centering their search on regions where doctors think cases may manifest. The posts are processed using natural language processing techniques that can distinguish between posts where a person mentions their own symptoms and posts that merely use symptom-related words in another context (such as discussing news about the coronavirus).
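As a rough sketch of the kind of classifier described, the example below trains a simple model to separate first-person symptom reports from other mentions. The tiny training set and its labels are invented for illustration; the actual system would rely on a far larger annotated corpus and more sophisticated models.

```python
# Sketch: separating first-person symptom reports from other symptom mentions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Label 1 = author reports their own symptoms; label 0 = other mention.
posts = [
    "I've had a fever and a dry cough for three days now",
    "my chest feels tight and I can't stop coughing",
    "woke up with chills and a sore throat, staying home today",
    "officials report new coronavirus cases with fever and cough",
    "article says common symptoms include fever, cough, fatigue",
    "the news keeps talking about cough and fever symptoms",
]
labels = [1, 1, 1, 0, 0, 0]

# TF-IDF features over word unigrams/bigrams feed a simple linear classifier.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), lowercase=True),
    LogisticRegression(),
)
model.fit(posts, labels)

# Score an unseen post: a first-person report should get a higher probability.
print(model.predict_proba(["I have a bad cough and a fever tonight"])[0][1])
```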

Alessandro Vespignani, a Northeastern University professor and expert contagion analyst, told Wired that even with advanced machine learning techniques it is often difficult to track the spread of the virus, because the characteristics of the virus are still somewhat unknown and most social media posts are from media companies reporting on the outbreak in China. However, Vespignani believes that if the virus ever did take hold in the US, it would become easier to monitor thanks to more posts concerning it.

Despite the challenge of gleaning relevant information about the coronavirus’s behavior, the researchers’ model does seem reasonably effective at finding clues within a large sea of social media posts: it found evidence of a viral outbreak on December 30th, although it took time to determine just how serious the situation would become. Crowdsourced information could improve the effectiveness of disease-tracking models even further, as it enables more efficient collection of relevant data regarding the virus. As an example, an analysis of data crowdsourced by Chinese physicians suggests that people younger than 15 years of age are more resilient to the virus.

Artificial intelligence can also be combined with data collected from mobile devices to build models that can potentially predict both the direction in which a virus is spreading and the rate of its spread. For instance, researchers from the University of Southampton used mobile data to determine the path the virus may have taken as it moved out of Wuhan in the days following its emergence. Other researchers analyzed data collected by Tencent, a Chinese mobile app developer, and found that the restrictions imposed by the Chinese government potentially reduced the virus’s spread, buying vital time to develop a plan of attack.
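For a sense of what such spread models estimate, the sketch below integrates the classic SIR compartmental model, a standard starting point for projecting an outbreak’s growth rate. The parameter values are placeholders chosen for illustration, not estimates for this coronavirus.

```python
# Sketch: the SIR epidemic model, integrated with simple Euler steps.
import numpy as np

def simulate_sir(beta, gamma, s0, i0, days, dt=0.1):
    """Integrate dS/dt = -beta*S*I, dI/dt = beta*S*I - gamma*I, where S and I
    are fractions of the population; returns the infected fraction per day."""
    s, i = s0, i0
    infected = []
    steps_per_day = round(1 / dt)
    for _ in range(days):
        for _ in range(steps_per_day):
            new_infections = beta * s * i * dt
            recoveries = gamma * i * dt
            s -= new_infections
            i += new_infections - recoveries
        infected.append(i)
    return np.array(infected)

# Assumed parameters: contact rate beta = 0.4/day, recovery rate gamma =
# 0.1/day (so R0 = beta/gamma = 4), starting from 1 case per 100,000 people.
curve = simulate_sir(beta=0.4, gamma=0.1, s0=0.99999, i0=0.00001, days=120)
print(f"Peak infected fraction: {curve.max():.3f} on day {curve.argmax()}")
```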

As Fortune reported, the startup Insilico Medicine has made use of artificial intelligence to identify molecules that could potentially treat the coronavirus. Insilico’s AI identified thousands of possible drug molecules over the course of four days. Insilico explained that the 100 most promising candidates will be synthesized and all of their research on molecular structures will be published for other researchers to take advantage of. Medical researchers and companies are fast-tracking the development and testing of treatments, with the US-based biotech company Gilead planning to start the immediate testing of a new antiviral drug within the Wuhan region.

Insilico focused its research on an enzyme called the 3C-like protease, which the coronavirus relies on to reproduce and spread. According to Insilico, it chose this specific enzyme because it is quite similar to other viral proteases whose structures have already been documented, and because ShanghaiTech University had developed a model of the 2019-nCoV 3C-like protease. In the span of four days, Insilico generated hundreds of thousands of candidate molecules and selected only the hundred or so most likely to be useful. The results of the research were recently published in the repository bioRxiv and on Insilico’s website.


AI Being Used to Analyze Retinal Images


In a newly developed approach, artificial intelligence (AI) is being used to analyze retinal images. The system could help doctors select the best treatment for patients suffering from diabetic macular edema, a diabetes complication that is a leading cause of vision loss among working-age adults.

One of the first lines of defense against diabetic macular edema is anti-vascular endothelial growth factor (anti-VEGF) therapy. The problem is that anti-VEGF agents do not work for everyone, and the therapy requires multiple injections that are costly and burdensome for both patients and physicians, so those who could benefit need to be identified up front.

The leader of the research team is Sina Farsiu from Duke University.

“We developed an algorithm that can be used to automatically analyze optical coherence tomography (OCT) images of the retina to predict whether a patient is likely to respond to anti-VEGF treatments,” she said. “This research represents a step toward precision medicine, in which such predictions help clinicians better select first-line therapies for patients based on specific disease conditions.”

The work was published in The Optical Society (OSA) journal Biomedical Optics Express. In the paper, Farsiu and her team demonstrated how the new algorithm can accurately predict whether a patient is likely to respond to anti-VEGF therapy from just one volumetric scan.

“Our approach could potentially be used in eye clinics to prevent unnecessary and costly trial-and-error treatments and thus alleviate a substantial treatment burden for patients,” Farsiu said. “The algorithm could also be adapted to predict therapy response for many other eye diseases, including neovascular age-related macular degeneration.”

The newly developed algorithm is based on a novel convolutional neural network (CNN) architecture. A CNN is a type of neural network that learns which features and regions of an image matter most for the task at hand. The researchers used the algorithm to examine images acquired with optical coherence tomography (OCT), a noninvasive technology that produces high-resolution cross-sectional retinal images and is considered the standard of care for the assessment and treatment of many eye conditions.

“Unlike previously developed approaches, our algorithm requires OCT images from only a single pretreatment timepoint,” said Reza Rasti, first author of the paper and a postdoctoral scholar in Farsiu’s laboratory. “There’s no need for time-series OCT images, patient records or other metadata to predict therapy response.”

The new algorithm highlights global structures in the OCT scan while also enhancing local features from diseased regions, searching for CNN-encoded features that correlate with anti-VEGF response.
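The sketch below shows, in broad strokes, what a CNN mapping a single pretreatment OCT scan to a response probability looks like. The layer sizes and the 224x224 grayscale input are assumptions made for illustration; the paper’s actual architecture, with its global and local feature pathways, is more elaborate.

```python
# Sketch: a small CNN that outputs a probability of anti-VEGF response
# from one OCT B-scan. Architecture details here are illustrative only.
import torch
import torch.nn as nn

class OCTResponseNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Stacked conv blocks extract progressively more abstract features.
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global pooling summarizes the whole scan
        )
        self.classifier = nn.Linear(64, 1)  # single logit: responder vs. not

    def forward(self, x):
        h = self.features(x).flatten(1)
        return torch.sigmoid(self.classifier(h))  # probability of response

# One fake 224x224 grayscale B-scan, batch size 1.
scan = torch.randn(1, 1, 224, 224)
print(OCTResponseNet()(scan))  # an arbitrary value from untrained weights
```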

The algorithm was tested with OCT images from 127 patients who had been treated for diabetic macular edema with three consecutive injections of anti-VEGF agents. It analyzed the OCT images taken prior to the injections, and its predictions were compared against OCT images taken after therapy, which told the researchers whether or not the treatment actually improved the condition.

The algorithm was found to have an 87 percent accuracy rate for predicting those who would respond to treatment. It had an average precision and specificity of 85 percent and a sensitivity of 80 percent.
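These figures follow the standard definitions for binary classifiers; the sketch below computes them from a confusion matrix. The counts used are placeholders, not the study’s data.

```python
# Sketch: standard classification metrics from confusion-matrix counts.
def classification_metrics(tp, fp, tn, fn):
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),  # all correct calls
        "precision": tp / (tp + fp),     # predicted responders that responded
        "sensitivity": tp / (tp + fn),   # true responders correctly caught
        "specificity": tn / (tn + fp),   # non-responders correctly ruled out
    }

# Placeholder counts for illustration only.
print(classification_metrics(tp=40, fp=7, tn=70, fn=10))
```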

The researchers now want to confirm the findings and undertake a larger observational trial of patients who have yet to go through treatment.

 
