
AI System Can Recognize Hand Gestures Accurately

Credit: Nanyang Technological University (NTU)

A new artificial intelligence (AI) system capable of recognizing hand gestures has been developed by scientists from Nanyang Technological University, Singapore (NTU Singapore). The technology works by combining skin-like electronics with computer vision.

AI systems that recognize human hand gestures have been in development for about a decade, and the technology is currently used in surgical robots, health monitoring equipment, and gaming systems.

The initial AI gesture recognition systems were visual-only; to improve them, researchers integrated inputs from wearable sensors, an approach called ‘data fusion’. The wearable sensors recreate the skin’s sensing ability, known as the ‘somatosensory’ sense.

Precise gesture recognition remains difficult to achieve, however, because of the low quality of data coming from wearable sensors. The sensors tend to be bulky and make poor contact with the user, and visual inputs degrade when objects are blocked from view or lighting is poor.

Further challenges come from integrating the visual and sensory data, since the mismatched datasets must be processed separately and merged only at the end. This process is inefficient and leads to slower response times.
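For contrast, a conventional ‘late fusion’ pipeline can be sketched in a few lines of Python; the function and network names here are hypothetical, but the structure shows why the approach is slow: each stream is interpreted fully on its own, and nothing is shared until the end.

```python
import torch

def late_fusion(image, strain, visual_net, sensor_net):
    """Conventional late data fusion (hypothetical sketch): each stream
    is processed end to end in isolation, so no information is shared
    until both interpretations are complete. The duplicated work, and
    the wait for the slower stream, are what make this approach slow."""
    visual_scores = visual_net(image)   # full visual interpretation
    sensor_scores = sensor_net(strain)  # full somatosensory interpretation
    return (visual_scores + sensor_scores) / 2  # merged only at the end
```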

The NTU team developed several ways to overcome these challenges, including a ‘bioinspired’ data fusion system built on skin-like stretchable strain sensors made from single-walled carbon nanotubes. The team also used AI techniques that resemble the way skin sensations and vision are processed together in the brain.

To develop the AI system, the team combined three neural network approaches in a single system: a convolutional neural network, a sparse neural network, and a multilayer neural network.

By combining the three, the team developed a system capable of recognizing human gestures more accurately than existing methods.
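To make this concrete, here is a minimal PyTorch sketch of how such a three-network pipeline might be wired together. The module roles (a CNN handling camera frames, a small multilayer network handling the strain-sensor signals, and a sparsity-masked fusion stage combining the two streams early), the layer sizes, and the top-k sparsity trick are all illustrative assumptions, not the authors’ published implementation.

```python
import torch
import torch.nn as nn

class BioInspiredFusionNet(nn.Module):
    """Illustrative three-network fusion sketch: a CNN for early visual
    processing, a multilayer network for the strain-sensor signals, and
    a sparsity-masked fusion stage that merges the streams early.
    All sizes and mechanisms are assumptions, not the published model."""

    def __init__(self, n_sensors=5, n_gestures=10):
        super().__init__()
        # Convolutional network: early visual processing of camera frames
        self.visual = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),  # -> 32 * 4 * 4 = 512
        )
        # Multilayer network: early processing of the stretchable
        # strain-sensor (somatosensory) signals
        self.somatosensory = nn.Sequential(
            nn.Linear(n_sensors, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
        )
        # Fusion stage: merges both feature streams; a top-k mask in
        # forward() keeps its activations sparse (a crude stand-in for
        # a true sparse neural network)
        self.fuse = nn.Linear(512 + 64, 128)
        self.classify = nn.Linear(128, n_gestures)

    def forward(self, image, strain):
        v = self.visual(image)          # visual features
        s = self.somatosensory(strain)  # somatosensory features
        h = torch.relu(self.fuse(torch.cat([v, s], dim=-1)))
        # Keep only the 16 strongest fused activations per sample
        topk = torch.topk(h, 16, dim=-1).indices
        h = h * torch.zeros_like(h).scatter_(-1, topk, 1.0)
        return self.classify(h)

# Example: one 64x64 RGB frame plus one 5-channel strain reading
model = BioInspiredFusionNet()
logits = model(torch.randn(1, 3, 64, 64), torch.randn(1, 5))
print(logits.shape)  # torch.Size([1, 10])
```

The design point the sketch tries to capture is that the two feature streams are concatenated before any gesture-level interpretation happens, rather than after, which is what distinguishes this architecture from the late-fusion pipeline above.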

Professor Chen Xiaodong of the School of Materials Science and Engineering at NTU is the lead author of the study.

“Our data fusion architecture has its own unique bio-inspired features which include a human-made system resembling the somatosensory-visual fusion hierarchy in the brain. We believe such features make our architecture unique to existing approaches.”

Chen is also Director of the Innovative Centre for Flexible Devices (iFLEX) at NTU. 

“Compared to rigid wearable sensors that do not form an intimate enough contact with the user for accurate data collection, our innovation uses stretchable strain sensors that comfortably attach onto the human skin. This allows for high-quality signal acquisition, which is vital to high-precision recognition tasks,” said Chen.

The findings from the team, made up of scientists from NTU Singapore and the University of Technology Sydney (UTS), were published in June in the scientific journal Nature Electronics.

Testing the System

The team tested the bio-inspired AI system using a robot controlled through hand gestures and guided through a maze. The AI hand gesture recognition system guided the robot through the maze with zero errors, whereas a comparable visual-based recognition system made six errors in the same maze.

When tested under poor conditions, such as noise and bad lighting, the AI system still maintained a high recognition accuracy of 96.7%.

Dr Wang Ming, from the School of Materials Science and Engineering at NTU Singapore, was the first author of the study.

“The secret behind the high accuracy in our architecture lies in the fact that the visual and somatosensory information can interact and complement each other at an early stage before carrying out complex interpretation,” Wang said. “As a result, the system can rationally collect coherent information with less redundant data and less perceptual ambiguity, resulting in better accuracy.”

Providing an independent view, Professor Markus Antonietti, Director of the Max Planck Institute of Colloids and Interfaces in Germany, said: “The findings from this paper bring us another step forward to a smarter and more machine-supported world. Much like the invention of the smartphone, which has revolutionised society, this work gives us hope that we could one day physically control all of our surrounding world with great reliability and precision through a gesture.”

“There are simply endless applications for such technology in the marketplace to support this future. For example, from remote robot control over smart workplaces to exoskeletons for the elderly.”

The research team will now work on a VR and AR system based on the bio-inspired AI system.


Alex McFarland is a tech writer who covers the latest developments in artificial intelligence. He has worked with AI startups and publications across the globe.