A New Dawn in Robotics: Touch-Based Object Rotation

In a groundbreaking development, a team of engineers at the University of California San Diego (UCSD) has designed a robotic hand that can rotate objects using touch alone, without the need for visual input. This innovative approach was inspired by the effortless way humans handle objects without necessarily needing to see them.

A Touch-Sensitive Approach to Object Manipulation

The team equipped a four-fingered robotic hand with 16 touch sensors spread across its palm and fingers. Each sensor, costing around $12, performs a simple function: it detects whether an object is touching it or not. This approach is unique as it relies on numerous low-cost, low-resolution touch sensors that use simple binary signals—touch or no touch—to perform robotic in-hand rotation.
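
To make this concrete, here is a minimal sketch of how such a binary contact layer might be read in software, assuming a simple thresholding scheme. The sensor count matches the article, but the threshold value and the read_raw_sensors helper are hypothetical stand-ins for whatever driver the real hand uses.

```python
import numpy as np

NUM_SENSORS = 16          # touch sensors across the palm and fingers
CONTACT_THRESHOLD = 0.5   # hypothetical cutoff on the raw reading

def read_raw_sensors() -> np.ndarray:
    """Hypothetical driver call: one raw reading per sensor.

    On real hardware this would poll the sensor electronics; here we
    return random values so the sketch is runnable.
    """
    return np.random.rand(NUM_SENSORS)

def binary_contacts() -> np.ndarray:
    """Collapse raw readings into the touch / no-touch signals the
    article describes: a vector of 16 zeros and ones."""
    return (read_raw_sensors() > CONTACT_THRESHOLD).astype(np.float32)

print(binary_contacts())  # e.g. [0. 1. 0. ... 1. 0.]
```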

In contrast, other methods depend on a few high-cost, high-resolution touch sensors affixed to a small area of the robotic hand, primarily at the fingertips. Xiaolong Wang, a professor of electrical and computer engineering at UC San Diego who led the study, explained that these approaches have several limitations. Because the sensors cover so little of the hand, the chance that they come into contact with the object is small, which limits the system's sensing ability. High-resolution touch sensors that provide information about texture are also extremely difficult to simulate and prohibitively expensive, making them hard to use in real-world experiments.

The research is detailed in the paper "Rotating without Seeing: Towards In-hand Dexterity through Touch."

The Power of Binary Signals

“We show that we don’t need details about an object’s texture to do this task. We just need simple binary signals of whether the sensors have touched the object or not, and these are much easier to simulate and transfer to the real world,” said Wang.

The team trained their system using simulations of a virtual robotic hand rotating a diverse set of objects, including ones with irregular shapes. At each time step of the rotation, the system assesses which sensors on the hand the object is touching, along with the current positions of the hand's joints and the hand's previous actions. Using this information, it tells the robotic hand where each joint needs to move at the next time step.
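
As a rough illustration of this control loop, the sketch below wires those three inputs (binary contacts, joint positions, previous action) into a small neural-network policy that outputs the next joint targets. The network shape, the 16-joint assumption, and all names here are illustrative; the team's actual architecture and its reinforcement-learning training in simulation are more involved.

```python
import torch
import torch.nn as nn

NUM_SENSORS = 16   # binary touch signals, per the article
NUM_JOINTS = 16    # assumed joint count for a four-fingered hand

class TouchPolicy(nn.Module):
    """Minimal policy sketch: maps (contacts, joint positions,
    previous action) to the next joint-position targets."""

    def __init__(self):
        super().__init__()
        obs_dim = NUM_SENSORS + NUM_JOINTS + NUM_JOINTS
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 128),
            nn.ReLU(),
            nn.Linear(128, NUM_JOINTS),  # next joint targets
        )

    def forward(self, contacts, joint_pos, prev_action):
        # Concatenate the three observation streams into one vector.
        obs = torch.cat([contacts, joint_pos, prev_action], dim=-1)
        return self.net(obs)

# One step of the rotation loop, with placeholder inputs:
policy = TouchPolicy()
contacts = torch.randint(0, 2, (1, NUM_SENSORS)).float()  # touch / no touch
joint_pos = torch.zeros(1, NUM_JOINTS)                    # current joint angles
prev_action = torch.zeros(1, NUM_JOINTS)                  # last commanded targets
next_targets = policy(contacts, joint_pos, prev_action)
print(next_targets.shape)  # torch.Size([1, 16])
```

In practice such a policy would be trained entirely in simulation, which is exactly why the binary contact signals matter: a yes/no touch flag is far easier to simulate faithfully than a high-resolution texture reading, easing the transfer to real hardware.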

The Future of Robotic Manipulation

The researchers then tested their system on a real robotic hand with objects it had not encountered during training. The robotic hand rotated a variety of objects without stalling or losing its hold, including a tomato, a pepper, a can of peanut butter, and a toy rubber duck, which was the most challenging object due to its shape. Objects with more complex shapes took longer to rotate. The robotic hand could also rotate objects around different axes.

The team is now working on extending their approach to more complex manipulation tasks. They are currently developing techniques to enable robotic hands to catch, throw, and juggle, for example. “In-hand manipulation is a very common skill that we humans have, but it is very complex for robots to master,” said Wang. “If we can give robots this skill, that will open the door to the kinds of tasks they can perform.”

This development marks a significant step forward in the field of robotics, potentially paving the way for robots that can manipulate objects in the dark or in visually challenging environments.

Alex McFarland is an AI journalist and writer exploring the latest developments in artificial intelligence. He has collaborated with numerous AI startups and publications worldwide.