Researchers at North Carolina State University have developed new software to improve robotic prosthetics and exoskeletons. The software can be integrated with existing hardware to enable safer, more natural walking on different terrains.
The paper is titled “Environmental Context Prediction for Lower Limb Prostheses With Uncertainty Quantification.” It was published in IEEE Transactions on Automation Science and Engineering.
Adapting to Different Terrains
Edgar Lobaton is a co-author of the paper. He is an associate professor of electrical and computer engineering at the university.
“Lower-limb robotic prosthetics need to execute different behaviors based on the terrain users are walking on,” says Lobaton. “The framework we’ve created allows the AI in robotic prostheses to predict the type of terrain users will be stepping on, quantify the uncertainties associated with that prediction, and then incorporate that uncertainty into its decision-making.”
The researchers focused on six terrains, each requiring different adjustments in a robotic prosthetic's behavior: tile, concrete, brick, grass, "upstairs," and "downstairs."
Boxuan Zhong is the lead author of the paper and a Ph.D. graduate from NC State.
“If the degree of uncertainty is too high, the AI isn’t forced to make a questionable decision — it could instead notify the user that it doesn’t have enough confidence in its prediction to act, or it could default to a ‘safe’ mode,” says Zhong.
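The uncertainty-gated behavior Zhong describes can be sketched as a simple decision rule. In this illustrative sketch, uncertainty is measured as the entropy of the model's class probabilities and compared against a fixed threshold; both the entropy metric and the threshold value are assumptions for illustration, not details from the paper.

```python
import math

# The six terrains studied in the paper
TERRAINS = ["tile", "concrete", "brick", "grass", "upstairs", "downstairs"]

def entropy(probs):
    """Shannon entropy (in nats) of a discrete probability distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def decide(probs, max_entropy_frac=0.5):
    """Return a terrain label, or "safe_mode" when uncertainty is too high.

    probs: class probabilities over TERRAINS (hypothetical model output).
    max_entropy_frac: fraction of the maximum possible entropy tolerated
                      (an illustrative threshold, not from the paper).
    """
    threshold = max_entropy_frac * math.log(len(TERRAINS))
    if entropy(probs) > threshold:
        # Defer: notify the user or fall back to a default "safe" behavior
        return "safe_mode"
    return TERRAINS[max(range(len(probs)), key=probs.__getitem__)]

# Confident prediction: commit to a terrain
print(decide([0.9, 0.02, 0.02, 0.02, 0.02, 0.02]))  # → tile
# Near-uniform probabilities: too uncertain, defer
print(decide([1 / 6] * 6))  # → safe_mode
```

The key design point is that the controller never acts on a low-confidence prediction: deferring to a safe mode is itself a valid output of the decision rule.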
Incorporation of Hardware and Software Elements
The new framework incorporates both hardware and software elements, and it can be used with any lower-limb robotic exoskeleton or robotic prosthetic device.
One new element of the framework is its use of cameras as additional hardware. In the study, participants wore cameras on eyeglasses, and cameras were also mounted on the lower-limb prosthesis itself. The researchers then evaluated how well the AI could use computer vision data from the two camera types, first separately and then together.
Helen Huang is a co-author of the paper. She is the Jackson Family Distinguished Professor of Biomedical Engineering in the Joint Department of Biomedical Engineering at NC State and the University of North Carolina at Chapel Hill.
“Incorporating computer vision into control software for wearable robotics is an exciting new area of research,” says Huang. “We found that using both cameras worked well, but required a great deal of computing power and may be cost prohibitive. However, we also found that using only the camera mounted on the lower limb worked pretty well — particularly for near-term predictions, such as what the terrain would be like for the next step or two.”
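One simple way to picture the two-camera setup Huang describes is late fusion: each camera's model produces class probabilities, and the system averages them. This is only an illustrative fusion rule under assumed inputs; the paper's actual method may differ, and the `weight` parameter is hypothetical.

```python
def fuse(p_eyewear, p_lowerlimb, weight=0.5):
    """Late-fuse class probabilities from two cameras by weighted averaging.

    p_eyewear, p_lowerlimb: per-terrain probabilities from each camera's
    model (hypothetical outputs). weight: contribution of the eyewear
    camera (an illustrative choice, not from the paper).
    """
    fused = [weight * a + (1 - weight) * b
             for a, b in zip(p_eyewear, p_lowerlimb)]
    total = sum(fused)
    return [p / total for p in fused]  # renormalize to a distribution

# Two-class toy example: eyewear and lower-limb cameras disagree mildly
print(fuse([0.6, 0.4], [0.2, 0.8]))  # → [0.4, 0.6]
```

Setting `weight=0.0` corresponds to the cheaper lower-limb-only configuration the researchers found worked well for near-term predictions, avoiding the computing cost of processing both streams.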
According to Lobaton, the work is applicable to any type of deep-learning system.
“We came up with a better way to teach deep-learning systems how to evaluate and quantify uncertainty in a way that allows the system to incorporate uncertainty into its decision making,” Lobaton says. “This is certainly relevant for robotic prosthetics, but our work here could be applied to any type of deep-learning system.”
Training the AI System
To train the AI system, the researchers placed cameras on able-bodied participants, who moved through a variety of indoor and outdoor environments. They then had an individual with lower-limb amputation navigate the same environments while wearing the cameras.
“We found that the model can be appropriately transferred so the system can operate with subjects from different populations,” Lobaton says. “That means that the AI worked well even though it was trained by one group of people and used by somebody different.”
The next step is to test the framework in a robotic device.
“We are excited to incorporate the framework into the control system for working robotic prosthetics — that’s the next step,” Huang says.
The team will also work on making the system more efficient by reducing the amount of visual data input and data processing it requires.