Soft-bodied robots are an important tool in the wider field of robotics because they can complete tasks that traditional rigid-bodied robots cannot: they interact with humans more safely, and they can do things like fit into tight spaces.
One of the major challenges with soft robots is that they must know where all of their body parts are to complete programmed tasks, and this is difficult because soft robots can deform in an almost infinite number of ways.
Now, researchers at MIT have developed a new deep learning algorithm that helps engineers design soft robots in a way that enables them to collect more data on their surroundings. The algorithm works by suggesting an optimized placement of sensors within the robot’s body, which enables the robot to complete assigned tasks while interacting with its environment.
Alexander Amini is co-lead author of the research along with Andrew Spielberg, both PhD students in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). The research was published in IEEE Robotics and Automation Letters, with co-authors including Lillian Chin, also a PhD student, and Wojciech Matusik and Daniela Rus, both professors at MIT.
“The system not only learns a given task, but also how to best design the robot to solve that task,” Amini says. “Sensor placement is a very difficult problem to solve. So, having the solution is extremely exciting.”
Rigid vs. Soft Robots
One of the biggest advantages of rigid robots, perhaps counterintuitively, is their limited range of motion: a finite number of joints and limbs keeps the calculations behind mapping and motion-planning algorithms manageable. Flexible soft robots offer no such shortcut.
“The main problem with soft robots is that they are infinitely dimensional,” Spielberg says. “Any point on a soft-bodied robot can, in theory, deform in any way possible.”
In the past, researchers have used an external camera to chart the robot’s position, which is then fed back into the robot’s control program. The new team looked for a way to create a soft robot untethered from external aid.
“You can’t put an infinite number of sensors on the robot itself,” Spielberg continues. “So, the question is: How many sensors do you have, and where do you put those sensors in order to get the most bang for your buck?”
The researchers developed a novel neural network architecture that can optimize sensor placements and learn to efficiently complete tasks. They first split the robot’s body into different regions called “particles.”
The neural network uses each particle’s rate of strain as an input, and through trial and error it learns the most efficient sequence of movements for a given task. The network also keeps track of which particles are used more than others, so that its inputs can be adjusted accordingly.
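The idea of learning which particles matter can be sketched in miniature. The snippet below is a toy illustration, not the authors' architecture: it treats each "particle" as a single input feature (its strain rate), learns per-particle weights with an L1 penalty so rarely useful particles shrink toward zero, and reads off the surviving weight magnitudes as importance scores. All names, sizes, and constants here are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

n_particles = 20  # body regions of the simulated robot (assumed number)

# Toy ground truth: only particles 3 and 11 actually matter for the task.
true_w = np.zeros(n_particles)
true_w[3], true_w[11] = 1.5, -2.0

# Simulated per-particle strain-rate inputs and resulting task signal.
X = rng.normal(size=(200, n_particles))
y = X @ true_w

# Learnable per-particle weights, trained by gradient descent with an L1
# penalty that drives unimportant particles toward zero (sparsification).
w = np.zeros(n_particles)
lr, l1 = 0.05, 0.01
for _ in range(500):
    grad = X.T @ (X @ w - y) / len(X) + l1 * np.sign(w)
    w -= lr * grad

# The particles with the largest learned weights are the "important" ones.
importance = np.abs(w)
top2 = sorted(np.argsort(importance)[-2:].tolist())
print(top2)  # → [3, 11]
```

In this toy setting, the learned importance concentrates exactly on the particles that drive the task signal, which is the intuition behind letting the network adjust which inputs it keeps.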
Outperforming Humans in Sensor Placement
The network suggests sensor placements by selecting the particles it identifies as most important. When the algorithm was tested against a series of predictions from human experts, it outperformed them at locating the most effective positions for the sensors.
“Our model vastly outperformed humans for each task, even though I looked at some of the robot bodies and felt very confident on where the sensors should go,” says Amini. “It turns out there are a lot more subtleties in this problem than we initially expected.”
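Once importance scores exist, turning them into a sensor layout reduces to picking the top-scoring regions under a fixed sensor budget. The helper below is a hypothetical sketch of that final step; the function name and example scores are assumptions, not the authors' code.

```python
import numpy as np

def place_sensors(importance, budget):
    """Return the indices of the `budget` body regions with the highest
    learned importance scores (hypothetical helper for illustration)."""
    idx = np.argsort(importance)[::-1][:budget]
    return sorted(idx.tolist())

# Example: importance scores for 8 body regions, budget of 3 sensors.
scores = np.array([0.1, 0.9, 0.05, 0.4, 0.7, 0.02, 0.3, 0.6])
print(place_sensors(scores, 3))  # → [1, 4, 7]
```

The hard part the research addresses is producing good importance scores in the first place; given those, the placement itself is a simple top-k selection like this.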
According to Spielberg, the new development could help automate the robot design process and help come up with new algorithms to control robot movements.
“…we also need to think about how we’re going to sensorize these robots, and how that will interplay with other components of that system,” he says. “That’s something where you need a very robust, well-optimized sense of touch. So, there’s potential for immediate impact.”
“Automating the design of sensorized soft robots is an important step toward rapidly creating intelligent tools that help people with physical tasks,” says Rus. “The sensors are an important aspect of the process, as they enable the soft robot to ‘see’ and understand the world and its relationship with the world.”