

AI System Can Recognize Hand Gestures Accurately


Credit: Nanyang Technological University (NTU)

A new artificial intelligence (AI) system capable of recognizing hand gestures has been developed by scientists from Nanyang Technological University, Singapore (NTU Singapore). The technology works by combining skin-like electronics with computer vision.

AI systems for recognizing human hand gestures have been under development for about 10 years, and the technology is currently used in surgical robots, health monitoring equipment and gaming systems.

The earliest AI gesture recognition systems were visual-only. To improve them, researchers integrated inputs from wearable sensors, an approach known as ‘data fusion’. The wearable sensors recreate the skin’s sensing ability, known as ‘somatosensory’ sensing.

Precise gesture recognition remains difficult to achieve because of the low quality of data coming from wearable sensors, which tend to be bulky and make poor contact with the user, and because visual data degrades when objects are blocked from view or lighting is poor.

Further challenges come from integrating the visual and sensory data, since the mismatched datasets must be processed separately and merged only at the end. That process is inefficient and leads to slower response times.

The NTU team came up with a few ways to overcome these challenges, including the creation of a ‘bioinspired’ data fusion system that relies on skin-like stretchable strain sensors made from single-walled carbon nanotubes. The team also relied on AI as a way to represent how skin senses and vision are processed together in the brain.

The AI system was developed by combining three neural network approaches into one system: a convolutional neural network, a sparse neural network, and a multilayer neural network.

By combining these three, the team was able to develop a system that recognizes human gestures more accurately than other methods.
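To make the architecture concrete, here is a minimal sketch in Python (PyTorch) of an early-fusion model along these lines: a convolutional branch for camera frames, a sparsely connected branch for strain-sensor readings, and a multilayer head that merges the two feature streams before classification. The layer sizes, sparsity mask and input shapes are illustrative assumptions, not the NTU team's published model.

import torch
import torch.nn as nn
import torch.nn.functional as F

class GestureFusionSketch(nn.Module):
    """Illustrative early-fusion model: a convolutional branch for camera
    frames, a sparsified linear branch standing in for the somatosensory
    (strain-sensor) path, and a multilayer fusion head. All sizes are
    arbitrary placeholders, not the published architecture."""

    def __init__(self, n_sensors=10, n_gestures=10, sparsity=0.8):
        super().__init__()
        # CNN branch for visual input
        self.visual = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Strain-sensor branch: a linear layer with most weights masked
        # to zero, mimicking a sparse neural network
        self.somato = nn.Linear(n_sensors, 32)
        self.register_buffer(
            "somato_mask", (torch.rand_like(self.somato.weight) > sparsity).float()
        )
        # Multilayer fusion head over the merged feature streams
        self.fusion = nn.Sequential(
            nn.Linear(32 + 32, 64), nn.ReLU(),
            nn.Linear(64, n_gestures),
        )

    def forward(self, image, strain):
        v = self.visual(image)
        s = F.relu(F.linear(strain, self.somato.weight * self.somato_mask,
                            self.somato.bias))
        return self.fusion(torch.cat([v, s], dim=1))  # fuse early, then classify

model = GestureFusionSketch()
logits = model(torch.randn(1, 3, 64, 64), torch.randn(1, 10))  # dummy forward pass

An early-fusion design of this kind lets the visual and strain-sensor features interact before classification, which is the property the researchers credit for the system's accuracy later in the article.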

Professor Chen Xiaodong, from the School of Materials Science and Engineering at NTU, is the lead author of the study.

“Our data fusion architecture has its own unique bio-inspired features which include a human-made system resembling the somatosensory-visual fusion hierarchy in the brain. We believe such features make our architecture unique to existing approaches.”

Chen is also Director of the Innovative Centre for Flexible Devices (iFLEX) at NTU. 

“Compared to rigid wearable sensors that do not form an intimate enough contact with the user for accurate data collection, our innovation uses stretchable strain sensors that comfortably attach onto the human skin. This allows for high-quality signal acquisition, which is vital to high-precision recognition tasks,” said Chen.

The findings from the team, made up of scientists from NTU Singapore and the University of Technology Sydney (UTS), were published in June in the scientific journal Nature Electronics.

Testing the System

The team tested the bio-inspired AI system by using hand gestures to control a robot as it was guided through a maze. The AI hand gesture recognition system guided the robot through the maze with no errors; by comparison, a visual-only recognition system made six errors in the same maze.

When tested under poor conditions, such as noise and bad lighting, the AI system still maintained high accuracy, with a recognition rate of over 96.7%.

Dr Wang Ming from the School of Materials Science & Engineering at NTU Singapore was first author of the study. 

“The secret behind the high accuracy in our architecture lies in the fact that the visual and somatosensory information can interact and complement each other at an early stage before carrying out complex interpretation,” Wang said. “As a result, the system can rationally collect coherent information with less redundant data and less perceptual ambiguity, resulting in better accuracy.”

According to an independent view from Professor Markus Antonietti, Director of Max Planck Institute of Colloids and Interfaces in Germany, “The findings from this paper bring us another step forward to a smarter and more machine-supported world. Much like the invention of the smartphone which has revolutionised society, this work gives us hope that we could one day physically control all of our surrounding world with great reliability and precision through a gesture.”

“There are simply endless applications for such technology in the marketplace to support this future. For example, from a remote robot control over smart workplaces to exoskeletons for the elderly.”

The research team will now work on a virtual reality (VR) and augmented reality (AR) system based on the bio-inspired AI system.

 



Researchers Develop New Theory on Animal Sensing Which Could be Used in Robotics


All animals, from insects to humans, rely on their senses as some of their most important tools for survival. Sensory organs such as eyes, ears and noses are used to search for food or to detect threats. However, how animals position and orient their sense organs is not intuitive, and current theories are unable to predict those positions and orientations.

That is now changing with new developments coming out of Northwestern University. A team of researchers has come up with a new theory that is in fact able to predict the movement of an animal’s sensory organs, specifically when that animal is searching for something important like food. 

The research was published Sept. 22 in the journal eLife.

Energy-Constrained Proportional Betting

The newly developed theory, termed energy-constrained proportional betting, was applied to four different species of animals, and it involved three different senses, including vision and smell. The team demonstrated how the theory could predict the observed sensing behavior of each animal.

This new theory could have implications within the field of robotics, possibly improving robot performance when it comes to collecting information. It could also make a difference in the development of autonomous vehicles, specifically improving their response to uncertainty. 

Malcolm A. MacIver led the research. He is a professor of biomedical and mechanical engineering in Northwestern’s McCormick School of Engineering and a professor of neurobiology in the Weinberg College of Arts and Sciences.

“Animals make their living through movement,” MacIver said. “To find food and mates and to identify threats, they need to move. Our theory provides insight into how animals gamble on how much energy to expend to get the useful information they need.”

The new theory sheds light on the different motions of sensory organs, and the resulting algorithm generated simulated sensory organ movements that agreed with the real-life movements observed in fish, mammals and insects.

Chen Chen, a Ph.D. student in MacIver’s lab, is the first author of the study, and Todd D. Murphey, professor of mechanical engineering at McCormick, is a co-author.

Gambling Energy

Movement costs a lot of energy for animals, and they spend that energy while gambling that the locations they are moving to will be informative. The amount of food-derived energy that they are willing to spend is proportional to the expected value of those locations, according to the researchers.
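As a rough illustration of this proportional-betting idea, the toy sketch below (in Python) picks the next sensing location with probability proportional to how informative it is believed to be, discounted by the energy cost of travelling there. The one-dimensional world, the distance-based cost and the scoring rule are simplifying assumptions for illustration, not the authors' actual algorithm.

import numpy as np

rng = np.random.default_rng(0)

def next_sensing_location(belief, current, energy_weight=0.5):
    """Toy proportional-betting step: the chance of sampling a location
    grows with how likely the target is believed to be there, discounted
    by the energy cost of travelling to it. Illustrative only."""
    locations = np.arange(belief.size)
    travel_cost = np.abs(locations - current)            # crude energy proxy
    score = belief * np.exp(-energy_weight * travel_cost)
    return rng.choice(locations, p=score / score.sum())

# Example: a belief over 20 locations, peaked around index 12,
# with the animal currently at location 5
belief = np.exp(-0.5 * ((np.arange(20) - 12) / 2.0) ** 2)
belief /= belief.sum()
print(next_sensing_location(belief, current=5))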

“While most theories predict how an animal will behave when it largely already knows where something is, ours is a prediction for when the animal knows very little — a situation in life and critical to survival,” Murphey says.

The research focused on the gymnotid electric fish from South America, with experiments performed in MacIver’s lab. It was not all new data, however, as the team also drew on previously published datasets on the blind eastern American mole, the American cockroach and the hummingbird hawkmoth.

The three senses studied were electrosense (the electric fish), vision (the moth) and smell (the mole and the roach).

The newly developed theory describes how animals conserve energy and time while moving around to gather information, while still collecting enough information to guide tracking and the other exploratory behaviors common among animals.

“When you look at a cat’s ears, you’ll often see them swiveling to sample different locations of space,” MacIver said. “This is an example of how animals are constantly positioning their sensory organs to help them absorb information from the environment. It turns out there is a lot going on below the surface in the movement of sense organs like ears and eyes and noses.”

 



Human Brain’s Light Processing Ability Could Lead to Better Robotic Sensing


The human brain often serves as inspiration for artificial intelligence (AI), and that is the case once again: a team of Army researchers has improved robotic sensing by studying how the human brain processes bright and contrasting light. The development could help improve collaboration between autonomous agents and humans.

According to the researchers, machine sensing needs to be effective across changing environments, and improvements there lead to advances in autonomy.

The research was published in the Journal of Vision.

100,000-to-1 Display Capability

Andre Harrison is a researcher at the U.S. Army Combat Capabilities Development Command’s Army Research Laboratory. 

“When we develop machine learning algorithms, real-world images are usually compressed to a narrower range, as a cellphone camera does, in a process called tone mapping,” Harrison said. “This can contribute to the brittleness of machine vision algorithms because they are based on artificial images that don’t quite match the patterns we see in the real world.” 

The team of researchers developed a system with a 100,000-to-1 display capability, which enabled them to gain insight into the brain’s computing process in the real world. According to Harrison, this allowed the team to build biological resilience into sensors.

Current vision algorithms still have a long way to go. Because they are based on human and animal studies conducted with computer monitors, they assume a limited luminance range of around 100-to-1. That ratio is far below real-world conditions, where luminance can vary by as much as 100,000-to-1, a range termed high dynamic range, or HDR.
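To make the dynamic-range gap concrete, the short Python sketch below squeezes a synthetic 100,000-to-1 luminance span into 8-bit display values using a simple logarithmic curve. It is a generic illustration of tone mapping, not the specific processing used in the Army study.

import numpy as np

def log_tone_map(luminance):
    """Compress an HDR luminance array into 8-bit display values with a
    simple logarithmic curve (one of many tone-mapping operators)."""
    log_l = np.log10(luminance)
    normalized = (log_l - log_l.min()) / (log_l.max() - log_l.min())
    return (normalized * 255).astype(np.uint8)

# Synthetic scene spanning a 100,000-to-1 luminance range
hdr = np.logspace(0, 5, num=8)   # 1 up to 100,000 (arbitrary units)
print(log_tone_map(hdr))         # the same scene squeezed into 0-255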

Dr. Chou Po Hung is an Army researcher. 

“Changes and significant variations in light can challenge Army systems — drones flying under a forest canopy could be confused by reflectance changes when wind blows through the leaves, or autonomous vehicles driving on rough terrain might not recognize potholes or other obstacles because the lighting conditions are slightly different from those on which their vision algorithms were trained,” Hung said.

The Human Brain’s Compressing Capability

The human brain automatically compresses the 100,000-to-1 input into a narrower range, and this is what allows humans to interpret shape. The team of researchers set out to understand this process by studying early visual processing under HDR, focusing on simple features such as HDR luminance.

“The brain has more than 30 visual areas, and we still have only a rudimentary understanding of how these areas process the eye’s image into an understanding of 3D shape,” Hung continued. “Our results with HDR luminance studies, based on human behavior and scalp recordings, show just how little we truly know about how to bridge the gap from laboratory to real-world environments. But, these findings break us out of that box, showing that our previous assumptions from standard computer monitors have limited ability to generalize to the real world, and they reveal principles that can guide our modeling toward the correct mechanisms.” 

By discovering how light and contrast edges interact in the brain’s visual representation, researchers can build algorithms that reconstruct the 3D world more effectively under real-world luminance. Estimating 3D shape from 2D information always involves ambiguities, but this discovery allows them to be corrected.

“Through millions of years of evolution, our brains have evolved effective shortcuts for reconstructing 3D from 2D information,” Hung said. “It’s a decades-old problem that continues to challenge machine vision scientists, even with the recent advances in AI.”

The team’s discovery is also important for the development of AI-enabled devices such as radar and remote speech understanding systems, which utilize wide dynamic range sensing.

“The issue of dynamic range is not just a sensing problem,” Hung said. “It may also be a more general problem in brain computation because individual neurons have tens of thousands of inputs. How do you build algorithms and architectures that can listen to the right inputs across different contexts? We hope that, by working on this problem at a sensory level, we can confirm that we are on the right track, so that we can have the right tools when we build more complex AIs.”



Researchers Develop First Microscopic Robots Capable of “Walking”


Image: Cornell University

In a breakthrough for the field of robotics, researchers have created the first microscopic robots that incorporate semiconductor components, allowing them to be controlled, and made to “walk”, with standard electronic signals.

The microscopic robots are the size of a paramecium, and they will act as the foundation for further projects. Some of those could include complex versions with silicon-based intelligence, the mass production of such robots, and versions capable of moving through human tissue and blood.

The work was a collaboration led by Cornell University, which included Itai Cohen, professor of physics. Other members of the team included Paul McEuen, the John A. Newman Professor of Physical Science, as well as Marc Miskin, assistant professor at the University of Pennsylvania.

Their work was published Aug. 26 in Nature, titled “Electronically Integrated, Mass-Manufactured, Microscopic Robots.”

Previous Nanoscale Projects

The newly developed microscopic robots were built upon previous work done by Cohen and McEuen. Some of their previous nanoscale projects involved microscopic sensors and graphene-based origami machines. 

The new microscopic robots are approximately 5 microns thick, 40 microns wide, and anywhere from 40 to 70 microns long. One micron is one-millionth of a meter.

Each robot has a simple circuit that is made from silicon photovoltaics and four electrochemical actuators. The silicon photovoltaics act as the torso and brain, while the electrochemical actuators act as the legs.

Controlling the Microscopic Robots

To control the robots, the researchers flash laser pulses at the different photovoltaics, each of which drives a separate set of legs. The robots walk when the laser is toggled back and forth between the front and back photovoltaics.
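The toggling scheme can be pictured with a toy control loop like the one below, written in Python. The helper functions aim_laser_at and pulse_laser are hypothetical placeholders standing in for the real laser-steering hardware; the sketch only illustrates alternating pulses between the front and back photovoltaic sets.

import time

FRONT_PV = "front photovoltaic"
BACK_PV = "back photovoltaic"

def aim_laser_at(target):
    # Placeholder: steer the laser to one photovoltaic patch
    print(f"aiming at {target}")

def pulse_laser(duration_s):
    # Placeholder: fire a laser pulse of the given duration
    print(f"pulsing for {duration_s:.2f} s")
    time.sleep(duration_s)

def walk(steps=4, pulse_s=0.05):
    """Alternate pulses between the front and back photovoltaics so the
    corresponding leg sets actuate in turn, producing a walking gait."""
    for step in range(steps):
        target = FRONT_PV if step % 2 == 0 else BACK_PV
        aim_laser_at(target)
        pulse_laser(pulse_s)

walk()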

The robots operate at a low voltage of 200 millivolts and run on just 10 nanowatts of power. They are robust for such small objects, and because they are built with standard lithographic processes, they can be fabricated in parallel: a single four-inch silicon wafer can hold around 1 million bots.

The team is now looking at how to make the robots more powerful through electronics and onboard computation. 

It is possible that future versions of the microrobots could act in swarms and complete tasks such as restructuring materials or suturing blood vessels, or could be sent into the human brain.

“Controlling a tiny robot is maybe as close as you can come to shrinking yourself down. I think machines like these are going to take us into all kinds of amazing worlds that are too small to see,” said Miskin.

“This research breakthrough provides exciting scientific opportunity for investigating new questions relevant to the physics of active matter and may ultimately lead to futuristic robotic materials,” said Sam Stanton. 

Stanton is program manager for the Army Research Office, which supported the microscopic robot research. 

A video of Itai Cohen explaining the technology can be found here.
