A team of researchers at the Massachusetts Institute of Technology is “trying to give robots superhuman perception,” according to MIT Associate Professor Fadel Adib. As robots advance in areas like artificial vision, touch, and smell, they are getting closer to human-like perception.
A new robot developed by the researchers, called RF-Grasp, relies on radio waves, which can pass through walls and sense occluded objects. By combining this RF sensing with traditional computer vision, the robot can locate and grasp items that would normally be out of its sight.
The research is set to be presented at the IEEE International Conference on Robotics and Automation in May. The lead author of the paper is Tara Boroushaki, research assistant in the Signal Kinetics Group at the MIT Media Lab. Co-authors of the paper include Adib, director of the Signal Kinetics Group; Alberto Rodriguez, Associate Professor in the Department of Mechanical Engineering; Junshan Leng, research engineer at Harvard University; and Ian Clester, PhD student at Georgia Tech.
Warehouses and E-commerce
One potential use case for this new technology is e-commerce, where it could make warehouse fulfillment more efficient; it could also locate tools in a toolkit. With the dramatic rise in e-commerce, the work is becoming increasingly intense for the human workers who still complete most of it, a problem compounded by sometimes dangerous working conditions.
“Perception and picking are two roadblocks in the industry today,” Rodriguez says.
Robots rely on optical vision, which can’t perceive items that are hidden since visible light waves don’t pass through walls. However, that is not the case for radio waves.
Radio frequency (RF) identification has been used for tracking, and RF identification systems consist of a reader and a tag. The tag is a tiny computer chip that is attached for tracking purposes, while the reader emits an RF signal that gets modulated by the tag and reflected to the reader.
This reflected signal provides crucial information about the tagged item, such as its location and identity. RF identification is widely used in retail supply chains; Japan, for instance, plans to eventually use RF tracking for all retail purchases.
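The reader-tag exchange described above can be sketched in a toy simulation. This is purely illustrative, not the researchers' implementation: real RFID systems follow protocols such as EPC Gen2 and infer location from properties of the reflected signal (e.g., phase and strength), while here each tag simply returns its ID and an assumed position.

```python
# Toy sketch of an RFID read cycle. The Tag class, field names, and
# positions are hypothetical, chosen only to illustrate the idea that
# a reader's query is modulated and reflected back by each tag.
from dataclasses import dataclass

@dataclass
class Tag:
    tag_id: str
    position: tuple  # (x, y, z) in meters, hypothetical

def read_tags(tags):
    """The reader emits an RF query; each tag backscatters it,
    encoding its ID. Location is inferred from the reflection."""
    responses = []
    for tag in tags:
        responses.append({"id": tag.tag_id, "position": tag.position})
    return responses

inventory = [Tag("keys-01", (1.2, 0.4, 0.0)), Tag("box-07", (3.0, 1.1, 0.5))]
for response in read_tags(inventory):
    print(response["id"], response["position"])
```

Note that, unlike optical sensing, this read cycle works even when a wall or packaging sits between reader and tag, which is the property RF-Grasp exploits.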
“RF is such a different sensing modality than vision,” says Rodriguez. “It would be a mistake not to explore what RF can do.”
The newly developed RF-Grasp uses a camera and an RF reader to locate and grab tagged objects, even when they are fully blocked from the camera’s view. A robotic arm is attached to a grasping hand, with the camera mounted on the wrist. The RF reader is independent of the robot and relays tracking data to the robot’s control algorithm.
Because the robot must continuously integrate RF tracking data with visual data of its surroundings, its decision-making process becomes far more complicated.
“The robot has to decide, at each point in time, which of these streams is more important to think about,” says Boroushaki. “It’s not just eye-hand coordination, it’s RF-eye-hand coordination. So, the problem gets very complicated.”
“It starts by using RF to focus the attention of vision,” says Adib. “Then you use vision to navigate fine maneuvers.”
Through this process, RF-Grasp zeroes in on the target object; once the object is within the camera’s view, vision, which offers much finer detail than RF, takes over to guide the grasp.
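The RF-then-vision handoff that Adib describes can be sketched as a simple sensor-selection rule. This is a hypothetical illustration, not the paper's algorithm: the threshold, function names, and coordinates are all assumptions, meant only to show how coarse RF guidance could hand off to fine-grained vision once the gripper gets close.

```python
# Hypothetical sketch of RF-eye-hand coordination: RF gives a coarse
# target location even through occlusions; once the wrist camera can
# see the object up close, vision drives the fine maneuvers.
import math

HANDOFF_DISTANCE = 0.3  # meters; assumed switching threshold

def choose_target(gripper_pos, rf_estimate, vision_estimate):
    """Pick which sensing stream drives the next motion step."""
    close_enough = math.dist(gripper_pos, rf_estimate) < HANDOFF_DISTANCE
    if vision_estimate is not None and close_enough:
        return vision_estimate, "vision"  # fine detail near the object
    return rf_estimate, "rf"              # coarse guidance from afar

# Far away and occluded: rely on RF.
target, mode = choose_target((0.0, 0.0, 1.0), (2.0, 1.0, 0.0), None)
print(mode)  # rf
# Close, object now visible: switch to vision.
target, mode = choose_target((2.0, 1.0, 0.1), (2.0, 1.0, 0.0),
                             (2.02, 1.01, 0.0))
print(mode)  # vision
```

A real controller would fuse the two streams probabilistically rather than switching hard at a threshold, but the sketch captures the "RF focuses attention, vision navigates fine maneuvers" division of labor.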
In a series of tests, RF-Grasp successfully pinpointed and grabbed its target object with about half as much total movement as a comparable camera-only system. It was also able to ‘declutter’ the environment, removing packing materials and other obstacles to reach the target, a capability purely optical systems lack.
“It has this guidance that other systems simply don’t have,” Rodriguez says.
RF-Grasp could eventually play a big role in e-commerce warehouses, doing things like instantly verifying an item’s identity.
“RF has the potential to improve some of those limitations in industry, especially in perception and localization,” Rodriguez continues.
As for home applications, Adib says: “Or you could imagine the robot finding lost items. It’s like a super-Roomba that goes and retrieves my keys, wherever the heck I put them.”