AI Researchers Create 3D Video Game Face Models From User Photos

A team of researchers at NetEase, a Chinese gaming company, has created a system that automatically extracts faces from photos and generates in-game models from the image data. The results of the paper, entitled Face-to-Parameter Translation for Game Character Auto-Creation, were summarized by Synced on Medium.

More and more game developers are turning to AI to automate time-consuming tasks. For instance, AI algorithms have been used to help render the movements of characters and objects. Another recent use of AI in game development is the creation of more powerful character customization tools.

Character customization is a much-beloved feature of role-playing video games, allowing players of the game to customize their player avatars in a multitude of different ways. Many players choose to make their avatars look like themselves, which becomes more achievable as the sophistication of character customization systems increases. However, as these character creation tools become more sophisticated, they also become much more complex. Creating a character that bears a resemblance to oneself can take hours of adjusting sliders and altering cryptic parameters. The NetEase research team aims to change all that by creating a system that analyzes a photo of the player and generates a model of the player’s face on the in-game character.

The automatic character creation tool consists of two parts: an imitation learning system and a parameter translation system. The parameter translation system extracts features from the input image and converts them into parameters the learning system can use. The imitation learning model then uses these parameters to iteratively generate and refine the representation of the input face.

The imitation learning system has an architecture that simulates the way the game engine creates character models in a consistent style. The imitation model is designed to capture the ground truth of the face, taking into account complex variables like beards, lipstick, eyebrows, and hairstyle. The facial parameters are then updated through gradient descent: the difference between the input features and the generated model is repeatedly measured, and the parameters are adjusted until the in-game model aligns with the input.
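
The paper's exact procedure isn't reproduced here, but a minimal sketch of this kind of parameter-fitting loop, assuming a differentiable imitator network and a pretrained facial feature extractor (both hypothetical stand-ins for the paper's components), might look like this:

```python
import torch

def fit_face_parameters(imitator, feature_extractor, input_photo,
                        num_params=100, steps=500, lr=0.01):
    """Adjust facial parameters until the imitated face matches the features
    extracted from the input photo. `imitator` and `feature_extractor` are
    hypothetical pretrained torch modules; num_params is a placeholder."""
    # Only the parameter vector ("the sliders") is optimized; both networks stay frozen.
    params = torch.full((1, num_params), 0.5, requires_grad=True)
    optimizer = torch.optim.Adam([params], lr=lr)

    target_features = feature_extractor(input_photo).detach()

    for _ in range(steps):
        optimizer.zero_grad()
        rendered_face = imitator(params)                 # game-style face image
        loss = torch.nn.functional.l1_loss(
            feature_extractor(rendered_face), target_features)
        loss.backward()                                  # gradients w.r.t. the parameters
        optimizer.step()
        params.data.clamp_(0.0, 1.0)                     # keep sliders in a valid range

    return params.detach()
```

Only the parameter vector is optimized while both networks stay frozen, which mirrors the idea of searching for slider settings rather than retraining the model.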

After the imitation network has been trained, the parameter translation system compares the imitation network's outputs against the input image features, searching the feature space for the optimal facial parameters.

The biggest challenge was ensuring that the 3D character models could preserve the detail and appearance of photos of real humans. This is a cross-domain problem: generated 3D character images and 2D photos of real people must be compared, and the core features of both must match.

The researchers solved this problem with two different techniques. The first technique was to split up their model training into two different learning tasks: a facial content task and a discriminative task. The general shape and structure of a person’s face are discerned by minimizing the difference/loss between two global appearance values, while discriminative/fine details are filled in by minimizing the loss between things like shadows in a small region. The two different learning tasks are merged together to achieve a complete representation.
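
A hedged sketch of how two such objectives might be merged into a single training loss, with `global_appearance` and `local_detail` standing in as hypothetical feature extractors, could look like this:

```python
import torch.nn.functional as F

def combined_loss(generated, target, global_appearance, local_detail,
                  alpha=1.0, beta=1.0):
    """Merge a facial-content loss (overall shape and structure) with a
    discriminative loss (fine local details such as shading in small regions).
    Both feature extractors are hypothetical stand-ins."""
    # Facial content task: match the global appearance of the two faces.
    content = F.l1_loss(global_appearance(generated), global_appearance(target))
    # Discriminative task: match fine-grained details in local regions.
    detail = F.l1_loss(local_detail(generated), local_detail(target))
    # The two learning tasks are merged into a single objective.
    return alpha * content + beta * detail
```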

The second technique used to generate 3D models was a 3D face construction system that uses a simulated skeletal structure, taking bone shape into account. This allowed the researchers to create much more sophisticated and accurate 3D images in comparison to other 3D modeling systems that rely on grids or face meshes.
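
The paper's bone parameterization isn't detailed here, but the general idea behind bone-driven deformation can be illustrated with standard linear blend skinning; every name and shape in this sketch is a generic assumption rather than NetEase's implementation:

```python
import numpy as np

def skin_vertices(vertices, bone_rotations, bone_translations, weights):
    """Deform face-mesh vertices with a set of bones (linear blend skinning).

    vertices:          (V, 3) rest-pose vertex positions
    bone_rotations:    (B, 3, 3) rotation matrix per bone
    bone_translations: (B, 3) translation per bone
    weights:           (V, B) influence of each bone on each vertex
    """
    # Transform every vertex by every bone: result has shape (B, V, 3).
    per_bone = (np.einsum('bij,vj->bvi', bone_rotations, vertices)
                + bone_translations[:, None, :])
    # Blend the per-bone results with the skinning weights: shape (V, 3).
    return np.einsum('vb,bvi->vi', weights, per_bone)
```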

The creation of a system that can create realistic 3D models based on 2D images is impressive enough in its own right, but the automatic generation system doesn’t just work on 2D photos. The system can also take sketches and caricatures of faces and render them as 3D models with impressive accuracy.  The research team suspects that the system is able to generate accurate models based on 2D characters because the system analyzes facial semantics instead of interpreting raw pixel values.

While the automatic character generator can be used to create characters based on photos, the researchers say that users should also be able to use it as a supplementary technique and further edit the generated character according to their preferences.

AI Being Used To Personalize Job Training and Education

The job landscape will likely be dramatically transformed by AI in the coming years; while some jobs will fall by the wayside, others will be created. It isn't yet clear how automation will impact the economy, or whether more jobs will be created than displaced, but it is obvious that those who work in the positions created by AI will need training to be effective at them.

Displaced workers are going to need training to work in the new AI-related fields, but how can these workers be trained quickly enough to remain competitive in the workplace? The answer could be more AI, used to help personalize education and training.

Bryan Talebi is the founder and CEO of the startup Ahura AI, which aims to use AI to make online education programs more efficient, targeting them at the specific individuals using them. Talebi explained to SingularityHub that Ahura is in the process of creating a product that will take biometric data from people taking online education programs and use this data to adapt the course material to the individual’s needs.

While there are security and privacy concerns associated with recording and analyzing an individual's behavioral data, the trade-off would be that, in theory, people would acquire valuable skills much more quickly. By delivering personalized material and instruction, the system can account for each learner's individual needs. Talebi explained that Ahura AI's prototype personalized education system is already showing impressive results: according to him, it helps people learn three to five times faster than current education models allow.

The AI-enhanced learning system developed by Ahura works through cameras and microphones. Most modern mobile devices, tablets, and laptops already have these, so there is little additional investment required of users of the platform. The camera tracks the user's facial movements, capturing things like eye movements, fidgeting, and micro-expressions, while the microphone tracks voice sentiment, analyzing the learner's word usage and tone. The idea is that these metrics can be used to detect when a learner is getting bored, disinterested, or frustrated, and to adjust the content to keep the learner engaged.

Talebi explained that Ahura uses the collected information to determine the optimal way to deliver the material to each student. Some people learn most easily through video, others through text, and still others through hands-on experience. Ahura's primary goal is to shift the format of the content in real time, delivering whatever holds the learner's attention and thereby improving retention.
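
Ahura's internals are not public, so the following is only a simplified sketch of the general idea; the signal names, weights, thresholds, and content formats are all hypothetical:

```python
from dataclasses import dataclass

FORMATS = ["video", "text", "interactive"]    # hypothetical content formats

@dataclass
class EngagementSignals:
    gaze_on_screen: float     # fraction of recent frames with eyes on the content
    fidget_score: float       # 0 (still) to 1 (restless), from pose tracking
    voice_frustration: float  # 0 to 1, from tone and word-usage analysis

def estimate_engagement(s: EngagementSignals) -> float:
    """Collapse camera and microphone signals into one engagement score.
    The weights are purely illustrative, not Ahura's actual model."""
    score = (0.2 + 0.6 * s.gaze_on_screen
             - 0.25 * s.fidget_score
             - 0.15 * s.voice_frustration)
    return max(0.0, min(1.0, score))

def choose_format(current: str, engagement: float) -> str:
    """Switch the delivery format when the learner appears to disengage."""
    if engagement >= 0.5:
        return current                                # still engaged, keep going
    # Rotate to the next format to try to recapture attention.
    return FORMATS[(FORMATS.index(current) + 1) % len(FORMATS)]
```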

Because Ahura can interpret user facial expressions and body language, it can predict when a user is getting bored and about to switch away to social media. According to Talebi, Ahura can predict, ten seconds in advance and with about 60% confidence, when someone is about to switch over to Instagram or Facebook. Talebi acknowledges there is still a lot of work to be done, as Ahura aims to push that figure up to 95%; however, he believes the system's performance already shows promise.

Talebi also acknowledges a desire to utilize the same algorithms and design principles used by Twitter, Facebook, and other social media platforms, which may concern some people, as these platforms are designed to be addictive. While creating a more compelling education platform is arguably a nobler goal, there is also the risk that the platform itself could become addictive. Moreover, there is a concern about the potential to misuse such sensitive information in general. Talebi said that Ahura is sensitive to these concerns and finds it incredibly important that the data it collects is never misused, noting that some investors immediately began inquiring about the marketing potential of the platform.

“It’s important that we don’t use this technology in those ways. We’re aware that things can go sideways, so we’re hoping to put up guardrails to ensure our system is helping and not harming society,” Talebi said.

Talebi explained that the company wants to create an ethics board that can review how the data the company collects is used. Talebi said the board should be diverse in thought, gender, and background, and that it should "have teeth" to help ensure that the software is designed ethically.

Ahura is currently developing its alpha prototypes, and the company hopes that during beta testing the platform will be available to over 200,000 users in a large-scale trial against a control group. The company also hopes to expand the kinds of biometric data the system uses, planning to log things like sleep patterns, heart rate, facial flushing, and pupil dilation.

Scientists Use Modified Facial Recognition Techniques To Discover Dark Matter

If there is one application of artificial intelligence the general public is familiar with, it is facial recognition. Whether it is unlocking a mobile phone or the algorithms Facebook uses to find eyes and other parts of a face in images, facial recognition has become standard.

But now scientists dealing with complex questions like the composition of the universe are starting to use a modified version of 'standard' facial recognition in an attempt to discover how much dark matter there is in the universe and where it might be located.

As Digital Trends and Futurity note in their reports on the subject, “physicists believe that understanding this mysterious substance is necessary to explain fundamental questions about the underlying structure of the universe.”

Researchers in Alexandre Refregier's group at the Institute of Particle Physics and Astrophysics at ETH Zurich, Switzerland, have started to use the deep neural network methods that lie behind facial recognition to develop new, specialized tools for probing what is still a secret of the universe to us.

As Janis Fluri, one of the researchers working on the project, told Digital Trends, "The algorithm we [use] is very close to what is commonly used in facial recognition," adding that "the beauty of A.I. is that it can learn from basically any data. In facial recognition, it learns to recognize eyes, mouths, and noses, while we are looking for structures that give us hints about dark matter. This pattern recognition is essentially the core of the algorithm. Ultimately, we only adapted it to infer the underlying cosmological parameters."

As is explained, the scientists hypothesize that dark matter accounts for around 27% of the universe, outweighing visible matter by a ratio of approximately six to one. The theory also goes that dark matter gives the galaxies "the extra mass they require to not tear themselves apart like a suicidal paper bag. It is what drives normal matter in the form of dust and gas to collect and assemble into stars and galaxies."

What the researchers are looking for are areas around clusters of galaxies that appear warped. By reverse-engineering this distortion, "they can then isolate where they believe the densest concentrations of matter, both visible and invisible, can be found."

Fluri and Tomasz Kacprzak, another researcher in the group, explained that they trained their neural network by feeding it computer-generated data that simulates the universe. Repeated analysis of these dark matter maps made it possible to extract 'cosmological parameters' from real images of the sky.
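
The ETH Zurich model isn't described in code here, but the general recipe (train a small convolutional network to regress cosmological parameters from simulated dark matter maps treated as single-channel images) can be sketched roughly as follows; the architecture and parameter count are assumptions:

```python
import torch
import torch.nn as nn

class CosmoNet(nn.Module):
    """Small CNN mapping a simulated dark matter map to cosmological
    parameters (e.g. matter density and clustering amplitude). The
    architecture is illustrative, not the ETH Zurich group's exact model."""
    def __init__(self, num_params=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, num_params)

    def forward(self, maps):                  # maps: (batch, 1, height, width)
        return self.head(self.features(maps).flatten(1))

# Training follows the usual pattern: simulated maps in, known parameters out.
model = CosmoNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
# for maps, true_params in simulated_loader:   # hypothetical DataLoader of simulations
#     optimizer.zero_grad()
#     loss_fn(model(maps), true_params).backward()
#     optimizer.step()
```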

Compared with standard methods based on human-designed statistical analysis, their results showed a 30% improvement. As Fluri explained, "the A.I. algorithm needs a lot of data to learn in the training phase. It is very important that this training data, in our case simulations, are as accurate as possible. Otherwise, it will learn features that are not present in real data."

After training the network, they fed it actual dark matter maps obtained from the KiDS-450 dataset, made using the VLT Survey Telescope (VST) in Chile. This dataset covers a total area of some 2,200 times the size of the full moon and contains records of around 15 million galaxies.

As Futurity explains, by repeatedly analyzing the dark matter maps, the neural network taught itself to look for the right kind of features in them and to extract more and more of the desired information. "In the Facebook analogy, it got better at distinguishing random oval shapes from eyes or mouths."

 

Artificial Intelligence Recognizes Primate Faces in the Wild

Scientists at the University of Oxford have created new artificial intelligence software that can recognize and track the faces of individual chimpanzees living in the wild. The new software will help researchers and scientists reduce the time and resources needed to analyze video footage of wild chimpanzees. It could also have a major impact on the field of AI and wildlife conservation, an area that doesn't receive as much attention. The research was published in Science Advances.

Dan Schofield, researcher and DPhil student at Oxford University’s Primate Models Lab, School of Anthropology, spoke about the newly developed technology. 

“For species like chimpanzees, which have complex social lives and live for many years, getting snapshots of their behaviour from short-term field research can only tell us so much,” he said. “By harnessing the power of machine learning to unlock large video archives, it makes it feasible to measure behaviour over the long term, for example observing how the social interactions of a group change over several generations.”

The researchers developed the new artificial intelligence by training a computer model on over 10 million images from Kyoto University's Primate Research Institute (PRI), which maintains a collection of videos of wild chimpanzees in Guinea, West Africa. No other software has been able to do what this one can: continuously track and recognize individuals across many different poses. It remains highly accurate even in difficult conditions like low lighting, poor image quality, and motion blur.
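
The Oxford pipeline itself isn't reproduced here, but the general shape of such a system (detect faces frame by frame, link the detections into tracks, then classify each track) can be sketched as follows, with `detect_faces`, `assign_to_tracks`, and `identity_model` as hypothetical stand-ins for trained components:

```python
def recognize_individuals(video_frames, detect_faces, assign_to_tracks,
                          identity_model):
    """Sketch of a detect-track-recognize pipeline for wild-primate footage.
    All three callables are hypothetical stand-ins for trained components."""
    tracks = {}                                   # track_id -> list of face crops
    for frame in video_frames:
        detections = detect_faces(frame)          # face crops from this frame
        for track_id, crop in assign_to_tracks(detections, tracks):
            tracks.setdefault(track_id, []).append(crop)

    identities = {}
    for track_id, crops in tracks.items():
        # Pool predictions over the whole track so one blurry frame can't flip the ID.
        identities[track_id] = identity_model.predict_identity(crops)
    return identities
```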

Arsha Nagrani is the co-author of the study and a DPhil student at the Department of Engineering Science, University of Oxford. 

“Access to this large video archive has allowed us to use cutting edge deep neural networks to train models at a scale that was previously not possible,” says Nagrani. “Additionally, our method differs from previous primate face recognition software in that it can be applied to raw video footage with limited manual intervention or pre-processing, saving hours of time and resources.”

While the new software is currently being used with chimpanzees, it could benefit many other areas. It would be extremely useful for monitoring species for conservation, and it could be applied to species other than chimpanzees. This new technology will help lead to artificial intelligence being used to solve problems in the wild.

“All our software is available open-source for the research community,” says Nagrani. “We hope that this will help researchers across other parts of the world apply the same cutting-edge techniques to their unique animal data sets. As a computer vision researcher, it is extremely satisfying to see these methods applied to solve real, challenging biodiversity problems.”

“With an increasing biodiversity crisis and many of the world’s ecosystems under threat, the ability to closely monitor different species and populations using automated systems will be crucial for conservation efforts, as well as animal behaviour research,” Schofield says. “Interdisciplinary collaborations like this have huge potential to make an impact, by finding novel solutions for old problems, and asking biological questions which were previously not feasible on a large scale.”

This new technology and software is important for a variety of reasons. Not only will it play a role in some of society's most pressing problems, like conservation and environmental protection, but it can also change the way we think about artificial intelligence. As of right now, almost all of the discussion surrounding AI is focused on human applications. There are constant developments in the medical field, AI-human interfaces, consumer technology, war, and much more, but the areas of wildlife protection and animal behavior studies have not received the same amount of attention. These are areas that AI could benefit greatly, and these new developments could help direct some of that attention there.

 
