The field of robotics is progressing rapidly, and it will not be long before the technology makes it into many aspects of our lives, including the kitchen. However, there is one specific hurdle that roboticists must overcome for these types of applications: robots have an extremely tough time picking up transparent and reflective objects, such as a measuring cup or a shiny knife. This is starting to change, though, as roboticists at Carnegie Mellon University (CMU) have developed a new technique to overcome the issue.
The team reported success in teaching robots to pick up those objects with a new technique that requires no complex sensors, exhaustive training, or human guidance. Instead, it relies on a standard color camera.
The research will be presented at the International Conference on Robotics and Automation (ICRA), held virtually this summer.
Depth Cameras vs Color Cameras
David Held is an assistant professor in CMU's Robotics Institute. According to Held, depth cameras, which determine an object's shape by shining infrared light on it, work well for identifying opaque objects. That is not the case for clear or reflective objects: the infrared light passes straight through transparent surfaces and scatters off reflective ones, so the depth camera cannot calculate an accurate shape. As a result, transparent and reflective objects come out looking flat or riddled with holes.
The benefit of a color camera is that it can see transparent and reflective objects, not just opaque ones. Taking advantage of this, the scientists at CMU created a color camera system that is capable of identifying shapes based on color.
Even though a standard camera cannot measure shape directly the way a depth camera can, the researchers trained the new system to imitate the depth system, allowing it to implicitly infer shape and grasp objects. To achieve this, they paired depth-camera images of opaque objects with color images of the same objects.
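The pairing idea described above can be illustrated with a toy supervised-learning sketch. The actual CMU system uses a deep network; here a tiny linear regressor stands in for it, and all data and the color-to-depth relationship are simulated purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated paired training data: RGB pixel values (N x 3) of *opaque*
# objects, and the depth the depth camera measured at the same pixels (N,).
rgb = rng.random((500, 3))
true_w = np.array([0.5, -0.2, 0.8])  # hypothetical color-to-depth relation
depth = rgb @ true_w + 0.01 * rng.standard_normal(500)

# Fit the color-only model to imitate the depth sensor (least squares
# stands in for gradient-descent training of a network).
w, *_ = np.linalg.lstsq(rgb, depth, rcond=None)

# At test time, the color model infers depth for pixels where a real
# depth camera would fail, e.g. on transparent or reflective surfaces.
pred = rgb @ w
mse = float(np.mean((pred - depth) ** 2))
print(round(mse, 4))
```

The key point the sketch captures is that the supervision signal comes for free from the depth camera on opaque objects, so no human labeling or trial-and-error grasping is needed.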
Grasping Transparent and Shiny Objects
Once trained, the system was tested on transparent and shiny objects. Using the color images along with whatever valid readings the depth camera could still provide, the robot grasped these difficult objects with a high rate of success.
Held did say that while the system doesn’t always work perfectly, it is better than any of the other systems currently available.
“We do sometimes miss,” Held said. “But for the most part it did a pretty good job, much better than any previous system for grasping transparent or reflective objects.”
According to Thomas Weng, a Ph.D. student in robotics, the system is still better at picking up opaque objects than transparent or reflective ones, but it is far more effective than depth-camera-only systems. The imitation-based training also proved remarkably efficient, leaving the color system on par with the depth-camera system at picking up opaque objects.
“Our system not only can pick up individual transparent and reflective objects, but it can also grasp such objects in cluttered piles,” Weng said.
This is not the first time roboticists have attempted to overcome this challenge. Previous approaches trained systems entirely on repeated grasp attempts, which could number up to 800,000, or relied on human labeling of training data, which is both expensive and time-consuming.
The roboticists at CMU relied on a commercial RGB-D camera that captures both color images (RGB) and depth images (D).