Advancements in AR and VR with New Wrist Camera

Augmented reality (AR) and virtual reality (VR) have advanced one step further with the creation of a new wrist-worn device for 3D hand pose estimation. The device was created by researchers at Tokyo Institute of Technology (Tokyo Tech), along with teams from Carnegie Mellon University, the University of St Andrews and the University of New South Wales. 

The core of the new system is a camera that captures images of the back of the hand. These images are processed by a neural network called DorsalNet, which recognizes dynamic gestures and estimates hand pose from them.
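The article does not describe DorsalNet's actual architecture, but the general idea of regressing 3D hand-joint positions from back-of-hand images can be sketched roughly as follows. The layer sizes, joint count, and input resolution are illustrative assumptions, not the authors' design.

```python
# Minimal sketch only: the real DorsalNet architecture is not described in this
# article, so every dimension below is an assumption for illustration.
import torch
import torch.nn as nn

NUM_JOINTS = 21  # assumed: a common hand-skeleton convention

class DorsalPoseRegressor(nn.Module):
    """Regresses 3D hand-joint coordinates from an image of the back of the hand."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, NUM_JOINTS * 3)  # (x, y, z) per joint

    def forward(self, image):
        x = self.features(image).flatten(1)
        return self.head(x).view(-1, NUM_JOINTS, 3)

# Example: one 128x128 RGB frame from the wrist camera -> 21 estimated 3D joints
frame = torch.randn(1, 3, 128, 128)
joints = DorsalPoseRegressor()(frame)
print(joints.shape)  # torch.Size([1, 21, 3])
```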

The use of AR and VR devices is increasing, especially in industries such as health, sports, and entertainment. This new development could help move the industry away from the bulkier methods it currently relies on, such as large gloves that restrict natural movement.

3D Hand Pose Recognition System

The research team was led by Hideki Koike at Tokyo Tech. 

According to the researchers, “This work is the first vision-based real-time 3D hand pose estimator using visual features from the dorsal hand region. The system consists of a camera supported by a neural network named DorsalNet which can accurately estimate 3D hand poses by detecting changes in the back of the hand.” 

The camera-equipped device is worn on the wrist and acts as a 3D hand pose recognition system. Its most important feature, for a device that could resemble a smartwatch, is that it can capture hand motions even when both the environment and the device itself are moving.

Accuracy and Preliminary Tests

The research demonstrated that the newly developed system performs better than previous attempts. Specifically, it is on average 20% more accurate in recognizing dynamic gestures, and it identifies 11 different grasp types with 75% accuracy.

The preliminary tests showed that the system could be used for smart-device control. These applications include things like changing the time on a smartwatch simply by changing finger angle. The researchers also demonstrated how it could act as a virtual mouse or keyboard, allowing actions such as wrist rotation to control a pointer.
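As a rough illustration of how wrist rotation might drive a pointer, the snippet below maps estimated yaw and pitch angles to cursor deltas. The gain value and angle convention are assumptions for the sketch; the article does not specify how the researchers implemented this mapping.

```python
# Illustrative sketch only: mapping estimated wrist rotation to pointer motion.
# The gain and sign conventions below are assumptions, not the authors' code.

def pointer_delta(yaw_deg: float, pitch_deg: float, gain: float = 4.0) -> tuple[float, float]:
    """Convert wrist yaw/pitch angles (degrees) into screen-pixel deltas."""
    dx = gain * yaw_deg     # rotate left/right -> horizontal motion
    dy = -gain * pitch_deg  # tilt up/down -> vertical motion (screen y grows downward)
    return dx, dy

# Example: a small wrist rotation nudges the cursor right and up
print(pointer_delta(yaw_deg=2.5, pitch_deg=1.0))  # (10.0, -4.0)
```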

According to the researchers, the system will need further improvements before it can be used in the real world. For example, they will need a more advanced camera with a higher frame rate to capture quicker wrist movements, and the system will need to handle varying lighting conditions.

The research will be presented at the 33rd ACM Symposium on User Interface Software and Technology (UIST), which is set to be held virtually on October 20-23, 2020. 

Alex McFarland is an AI journalist and writer exploring the latest developments in artificial intelligence. He has collaborated with numerous AI startups and publications worldwide.