
Simulated Eye Movement Helps Train Metaverse

Computer engineers at Duke University have developed virtual eyes that can simulate how humans view the world. The virtual eyes are so accurate that they can be used to train virtual reality and augmented reality programs. They will prove incredibly beneficial to developers looking to create applications in the metaverse.

The results are set to be presented on May 4-6 at the International Conference on Information Processing in Sensor Networks (IPSN).

The new virtual eyes are called EyeSyn. 

Training Algorithms to Work Like Eyes

Maria Gorlatova is the Nortel Networks Assistant Professor of Electrical and Computer Engineering at Duke. 

“If you're interested in detecting whether a person is reading a comic book or advanced literature by looking at their eyes alone, you can do that,” Gorlatova said. 

“But training that kind of algorithm requires data from hundreds of people wearing headsets for hours at a time,” Gorlatova continued. “We wanted to develop software that not only reduces the privacy concerns that come with gathering this sort of data, but also allows smaller companies who don't have those levels of resources to get into the metaverse game.”

Human eye movements can reveal many things, such as whether we’re bored or excited, where our concentration is focused, or whether we’re expert at a given task.

“Where you're prioritizing your vision says a lot about you as a person, too,” Gorlatova said. “It can inadvertently reveal sexual and racial biases, interests that we don't want others to know about, and information that we may not even know about ourselves.”

Eye movement data is extremely useful for companies building platforms and software in the metaverse. It can enable developers to tailor content to a user’s engagement or to reduce rendering resolution in the user’s peripheral vision, which saves computational power.
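As a rough illustration of that second idea (foveated rendering, not part of EyeSyn itself), the sketch below shows how a renderer might scale detail with distance from the tracked gaze point. The function name and thresholds are invented for the example.

```python
import numpy as np

def foveated_resolution_scale(pixel_xy, gaze_xy, fovea_radius=200.0, min_scale=0.25):
    """Return a render-resolution scale factor: full detail near the gaze
    point, progressively coarser toward the periphery.
    (Hypothetical helper; fovea_radius and min_scale are illustrative.)"""
    distance = np.linalg.norm(np.asarray(pixel_xy, dtype=float) - np.asarray(gaze_xy, dtype=float))
    if distance <= fovea_radius:
        return 1.0  # foveal region: render at full resolution
    # Fall off with eccentricity, never dropping below min_scale.
    return max(min_scale, fovea_radius / distance)

# Example: the user is looking near the centre of a 1920x1080 frame.
gaze = (960, 540)
print(foveated_resolution_scale((980, 550), gaze))    # ~1.0, inside the fovea
print(foveated_resolution_scale((1900, 1000), gaze))  # much coarser, periphery
```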

The team of computer scientists, which included former postdoctoral associate Guohao Lan and current PhD student Tim Scargill, set out to develop virtual eyes that mimic how an average human responds to a variety of stimuli. To do this, they looked to the cognitive science literature exploring how humans see the world and process visual information.

Lan is now an assistant professor at the Delft University of Technology in the Netherlands. 

“If you give EyeSyn a lot of different inputs and run it enough times, you'll create a data set of synthetic eye movements that is large enough to train a (machine learning) classifier for a new program,” Gorlatova said.
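To make that idea concrete, here is a minimal sketch of the workflow Gorlatova describes: generate synthetic gaze traces for different activities, then train a standard classifier on them. The simulate_gaze and features helpers are hypothetical stand-ins, not EyeSyn’s actual model, and the two activities are chosen only for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def simulate_gaze(activity, n_samples=300):
    """Toy stand-in for an EyeSyn-style simulator: 'reading' yields small,
    regular left-to-right saccades; 'video' yields larger, noisier jumps."""
    if activity == "reading":
        dx = rng.normal(2.0, 0.5, n_samples)   # small horizontal steps
        dy = rng.normal(0.0, 0.2, n_samples)
    else:  # "video"
        dx = rng.normal(0.0, 5.0, n_samples)   # large, erratic movements
        dy = rng.normal(0.0, 4.0, n_samples)
    return np.column_stack([dx, dy])

def features(gaze):
    """Summarise a gaze trace with a few simple statistics."""
    amplitudes = np.linalg.norm(gaze, axis=1)
    return [amplitudes.mean(), amplitudes.std(), gaze[:, 0].mean(), gaze[:, 1].std()]

# Build a purely synthetic training set, one row per simulated viewing session.
X = [features(simulate_gaze(a)) for a in ["reading", "video"] for _ in range(200)]
y = ["reading"] * 200 + ["video"] * 200

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("accuracy on held-out synthetic data:", accuracy_score(y_test, clf.predict(X_test)))
```

A classifier trained this way could later be fine-tuned on a small amount of real, locally stored data, which is the privacy benefit Gorlatova points to later in the article.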

Testing the System

The researchers tested the accuracy of the synthetic eyes against publicly available data. The eyes were first tasked with analyzing videos of Dr. Anthony Fauci addressing the media during press conferences, and the team compared the resulting gaze data to the recorded eye movements of actual viewers. They also compared a virtual dataset of the synthetic eyes looking at art to datasets collected from people browsing a virtual art museum. The results demonstrated that EyeSyn can closely match the distinct patterns of actual gaze signals and simulate the different ways people’s eyes react.
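One simple way to quantify how closely a synthetic gaze trace matches real viewers (not necessarily the metric used in the paper) is to bin both sets of gaze points into heatmaps and correlate them, as in the sketch below. The helper names and the placeholder data are invented for illustration.

```python
import numpy as np

def gaze_heatmap(points, bins=32, extent=(0, 1920, 0, 1080)):
    """Bin gaze points into a normalised 2-D fixation heatmap."""
    hist, _, _ = np.histogram2d(
        points[:, 0], points[:, 1],
        bins=bins, range=[extent[:2], extent[2:]],
    )
    return hist / hist.sum()

def gaze_similarity(real_points, synthetic_points):
    """Pearson correlation between the two heatmaps: values near 1.0 mean the
    synthetic gaze lands in the same screen regions as the real viewers'."""
    a = gaze_heatmap(real_points).ravel()
    b = gaze_heatmap(synthetic_points).ravel()
    return np.corrcoef(a, b)[0, 1]

# Placeholder data standing in for recorded and simulated gaze points.
rng = np.random.default_rng(1)
real = rng.normal([960, 540], [150, 100], size=(500, 2))       # clustered on a speaker's face
synthetic = rng.normal([950, 550], [160, 110], size=(500, 2))  # similar simulated pattern
print(round(gaze_similarity(real, synthetic), 3))
```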

Gorlatova says that these results suggest that the virtual eyes are good enough for companies to use as a baseline to train new metaverse platforms and software. 

“The synthetic data alone isn't perfect, but it's a good starting point,” Gorlatova said. “Smaller companies can use it rather than spending the time and money of trying to build their own real-world datasets (with human subjects). And because the personalization of the algorithms can be done on local systems, people don't have to worry about their private eye movement data becoming part of a large database.”

Alex McFarland is an AI journalist and writer exploring the latest developments in artificial intelligence. He has collaborated with numerous AI startups and publications worldwide.