A new robotic system called FuseBot, developed at MIT, combines visual information and radio-frequency (RF) signals to find hidden items buried under a pile of objects. To retrieve a lost item, a robot must reason about the pile and the objects in it.
The researchers previously demonstrated a robotic arm that combines visual information and RF signals to find hidden objects tagged with RFID tags, which reflect signals sent by an antenna. The new system can efficiently retrieve any buried object even if the target item is not tagged; it requires only that some items in the pile carry RFID tags.
Algorithms in FuseBot
The algorithms that make up FuseBot reason about the probable location and orientation of objects under the pile, then determine the most efficient way to remove obstructing objects and extract the target item. In testing, FuseBot found hidden items more efficiently than a state-of-the-art robotic system, completing retrievals in half the time.
The new system could be applied in areas like an e-commerce warehouse.
The research involved senior author Fadel Adib, associate professor in the Department of Electrical Engineering and Computer Science and director of the Signal Kinetics group in the Media Lab.
“What this paper shows, for the first time, is that the mere presence of an RFID-tagged item in the environment makes it much easier for you to achieve other tasks in a more efficient manner. We were able to do this because we added multimodal reasoning to the system — FuseBot can reason about both vision and RF to understand a pile of items,” says Adib.
Adib was joined by research assistants Tara Boroushaki, who is lead author; Laura Dodds; and Nazish Naeem.
FuseBot consists of a robotic arm with an attached video camera and RF antenna, used to retrieve an untagged target item from a mixed pile. The system scans the pile with the camera to create a 3D model of the environment while simultaneously sending signals from its antenna to locate RFID tags.
The radio waves can pass through most solid surfaces, enabling the robot to “see” into the pile. Since the tagged items are, by definition, not the target, FuseBot knows the target item cannot occupy the same spot as any detected tag.
The algorithms then fuse this information to update the 3D model of the environment and highlight potential locations of the target item, whose size and shape the robot already knows. The system reasons about the objects in the pile and the RFID tags to decide which item to move, looking for the path with the fewest moves.
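A minimal sketch of what one such fusion step could look like. This is an illustrative assumption, not the paper's actual implementation: the voxel grid, clearance radius, and function name are all hypothetical. The idea is simply that voxels occupied by a detected (non-target) tagged item can be ruled out of the target's location distribution:

```python
import numpy as np

def fuse_rf_into_prior(prior, tag_positions, voxel_size=0.01, clearance=0.03):
    """Rule out target locations around each detected RFID tag.

    prior: 3D numpy array of per-voxel probabilities that the untagged
           target occupies that voxel (from the camera's 3D model).
    tag_positions: tag locations in the grid's metric frame, in meters.
    Because every detected tag belongs to a non-target item, the target
    cannot occupy the space a tagged item fills.
    """
    grid = prior.copy()
    # Metric coordinates of every voxel center, shape (*grid.shape, 3).
    idx = np.indices(grid.shape)
    centers = np.stack([idx[0], idx[1], idx[2]], axis=-1) * voxel_size
    for tag in tag_positions:
        dist = np.linalg.norm(centers - np.asarray(tag), axis=-1)
        grid[dist < clearance] = 0.0   # tagged item occupies this space
    total = grid.sum()
    # Renormalize so the grid remains a probability distribution.
    return grid / total if total > 0 else grid
```

In practice the real system combines the RF measurements with the camera's geometric model in a far richer way, but the net effect is the same: tag detections shrink the set of plausible target locations.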
To overcome the challenge of not knowing how objects are oriented under the pile, FuseBot uses probabilistic reasoning. Each time it removes an item, it also reasons about which item would be best to remove next.
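One way to picture this step is a toy 2-D sketch (again an assumption, not the paper's method): since the target's orientation under the pile is unknown, a location can be scored by how many candidate orientations of the target's footprint would fit there in space not ruled out by tags:

```python
import numpy as np

def orientation_marginal(free_mask, footprints):
    """Score each grid cell by how many candidate orientations of the
    target's footprint fit entirely within free space anchored there.

    free_mask: 2-D boolean array; True where the target could be.
    footprints: list of 2-D boolean arrays, one per candidate orientation.
    Returns an array of fit counts, a crude stand-in for a probability
    marginalized over the unknown orientation.
    """
    h, w = free_mask.shape
    score = np.zeros((h, w))
    for fp in footprints:
        fh, fw = fp.shape
        for i in range(h - fh + 1):
            for j in range(w - fw + 1):
                region = free_mask[i:i + fh, j:j + fw]
                if region[fp].all():   # whole footprint lies in free space
                    score[i, j] += 1
    return score
```

A cell where both a horizontal and a vertical placement of the target fit scores higher than one where only a single orientation fits, which is the essence of reasoning over unknown orientations.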
“If I give a human a pile of items to search, they will most likely remove the biggest item first to see what is underneath it. What the robot does is similar, but it also incorporates RFID information to make a more informed decision. It asks, ‘How much more will it understand about this pile if it removes this item from the surface?’” Boroushaki says.
The robot scans the pile after removing an object and uses new data to optimize the strategy.
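Putting the pieces together, the outer extract-and-rescan loop might look like the toy sketch below. Everything here is a hypothetical stand-in: the pile is a simple top-down list, and `expected_info_gain` just prefers larger items, echoing the intuition in Boroushaki's quote, whereas the real planner scores removals against the fused RF-visual model:

```python
def expected_info_gain(belief, item):
    """Toy stand-in: bigger items reveal more of the pile when removed."""
    return item["size"]

def retrieve_target(pile, target, max_steps=20):
    """Greedy extract-and-rescan loop over a toy pile.

    pile: list of dicts like {"name": str, "size": float}, ordered
    top-down; an item is reachable once everything above it is gone.
    Returns (item, moves) on success, or (None, moves) on failure.
    """
    belief = {}                      # placeholder for the fused belief state
    moves = 0
    for _ in range(max_steps):
        if not pile:
            return None, moves
        if pile[0]["name"] == target:          # target exposed: grab it
            return pile.pop(0), moves
        # "Rescan" the pile: only the topmost items are currently graspable.
        surface = pile[:2]
        # Remove the surface item with the highest expected reveal.
        item = max(surface, key=lambda it: expected_info_gain(belief, it))
        pile.remove(item)
        moves += 1
    return None, moves
```

For example, searching a three-item pile for the keys at the bottom removes the box, then the cup, before grasping the keys, i.e. two moves rather than blind digging.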
Outperforming Other Systems
By using RF signals and reasoning, FuseBot was able to outperform a state-of-the-art system that only uses vision. It extracted the target item with a 95 percent success rate, compared to 84 percent for the other system. It also did this with 40 percent fewer moves and could locate and retrieve items more than twice as fast.
“We see a big improvement in the success rate by incorporating this RF information. It was also exciting to see that we were able to match the performance of our previous system, and exceed it in scenarios where the target item didn’t have an RFID tag,” Dodds says.
The software responsible for carrying out the complex reasoning can be implemented on any computer, meaning FuseBot could be used for a wide range of settings. The team will now look to incorporate more complex models into the system, as well as explore different manipulations.