AI Researchers Create 3D Video Game Face Models From User Photos

A team of researchers at NetEase, a Chinese gaming company, has created a system that can automatically extract faces from photos and generate in-game models from the image data. The paper, entitled Face-to-Parameter Translation for Game Character Auto-Creation, was summarized by Synced on Medium.

Game developers are increasingly using AI to automate time-consuming tasks. For instance, AI algorithms have been used to help render the movements of characters and objects. Another recent application is the creation of more powerful character customization tools.

Character customization is a much-beloved feature of role-playing video games, allowing players to customize their avatars in a multitude of ways. Many players choose to make their avatars look like themselves, which becomes more achievable as character customization systems grow more sophisticated. That sophistication, however, comes at the cost of complexity: creating a character that bears a resemblance to oneself can take hours of adjusting sliders and altering cryptic parameters. The NetEase research team aims to change all that with a system that analyzes a photo of the player and generates a matching face on the in-game character.

The automatic character creation tool consists of two halves: an imitation learning system and a parameter translation system. The parameter translation system extracts features from the input image and converts them into parameters, which the imitation learning model then uses to iteratively generate and refine a representation of the input face.
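To make the division of labor concrete, here is a minimal PyTorch sketch of the two halves. The class names, layer sizes, and the parameter count of 200 are illustrative assumptions for this article, not values from the paper.

```python
import torch
import torch.nn as nn

class FeatureExtractor(nn.Module):
    # Stand-in for the translation side: maps a face image to a feature vector.
    def __init__(self, feat_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )

    def forward(self, img):
        return self.net(img)

class Imitator(nn.Module):
    # Stand-in for the imitation side: maps a vector of in-game face
    # parameters to a rendered face image, mimicking the engine's renderer.
    def __init__(self, n_params=200, img_size=64):
        super().__init__()
        self.img_size = img_size
        self.net = nn.Sequential(
            nn.Linear(n_params, 512), nn.ReLU(),
            nn.Linear(512, 3 * img_size * img_size), nn.Sigmoid(),
        )

    def forward(self, params):
        return self.net(params).view(-1, 3, self.img_size, self.img_size)
```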

The imitation learning system has an architecture that simulates the way the game engine creates character models in a consistent style. The imitation model is designed to capture the ground truth of the face, taking into account complex variables like beards, lipstick, eyebrows, and hairstyle. The face parameters are updated through gradient descent: the difference between the input features and the generated model is repeatedly measured, and the parameters are adjusted until the in-game model aligns with the input features.
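A hedged sketch of how an imitation network of this kind could be trained, continuing the code above: random parameter vectors are rendered by the engine, and the network learns via gradient descent to reproduce the engine's output. The engine_render callable is a placeholder for the real renderer, not an actual API.

```python
import torch

def train_imitator(imitator, engine_render, n_steps=1000, n_params=200):
    opt = torch.optim.Adam(imitator.parameters(), lr=1e-4)
    for _ in range(n_steps):
        params = torch.rand(16, n_params)   # random batch of face parameters
        target = engine_render(params)      # ground-truth render from the engine
        loss = (imitator(params) - target).abs().mean()  # pixel-wise L1 loss
        opt.zero_grad()
        loss.backward()
        opt.step()
```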

After the imitation network has been trained, the parameter translation system checks the imitation network’s outputs against the input image features, searching the feature space for the facial parameters that best match the input.
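Because the trained imitator is differentiable, unlike the game engine itself, this search can also be run as gradient descent: freeze the imitator and optimize the parameter vector until the render’s features match the photo’s. A sketch under the same assumptions as above; the initialization, learning rate, and clamping are guesses made for illustration.

```python
import torch

def fit_parameters(photo, imitator, extractor, n_params=200, n_steps=300):
    # Freeze the trained imitator; only the parameter vector is optimized.
    imitator.eval()
    for p in imitator.parameters():
        p.requires_grad_(False)
    params = torch.full((1, n_params), 0.5, requires_grad=True)  # neutral start
    opt = torch.optim.Adam([params], lr=0.01)
    target_feat = extractor(photo).detach()    # features of the user's photo
    for _ in range(n_steps):
        feat = extractor(imitator(params))     # features of the current render
        loss = (feat - target_feat).pow(2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            params.clamp_(0.0, 1.0)            # keep sliders in a valid range
    return params.detach()
```

The resulting vector can then be handed to the real engine, which reproduces the face with its own renderer.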

The biggest challenge was ensuring that the 3D character models preserved the detail and appearance of the source photos. This is a cross-domain problem: 3D-generated images and 2D photos of real people must be compared in a way that keeps the core features of both aligned.

The researchers solved this problem with two techniques. The first was to split model training into two learning tasks: a facial content task and a discriminative task. The general shape and structure of a person’s face are learned by minimizing the loss between global appearance features, while fine, discriminative details are filled in by minimizing the loss on local features, such as shadows within a small region. The two learning tasks are then merged to produce a complete representation.
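As a rough illustration of that split, the sketch below combines a global feature-matching term with a local pixel term. The equal default weighting and the fixed center crop are assumptions made for brevity, and both images are assumed to be aligned faces at the same resolution.

```python
import torch

def combined_loss(pred_img, target_img, extractor, w_local=1.0):
    # Global "content" term: match whole-face feature embeddings.
    content = (extractor(pred_img) - extractor(target_img)).pow(2).mean()
    # Local "discriminative" term: match raw pixels in a small region,
    # standing in for fine details such as shadows.
    c = pred_img.shape[-1] // 2
    local = (pred_img[..., c - 8:c + 8, c - 8:c + 8]
             - target_img[..., c - 8:c + 8, c - 8:c + 8]).abs().mean()
    return content + w_local * local
```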

The second technique was a 3D face construction system built on a simulated skeletal structure, taking bone shape into account. This allowed the researchers to create far more sophisticated and accurate 3D models than systems that rely on grids or fixed face meshes.
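The article does not detail NetEase’s rig, but skeleton-based deformation generally works via linear blend skinning: each vertex follows a weighted blend of its bones’ transforms, so adjusting one bone parameter smoothly reshapes the surrounding region. A toy NumPy version, with made-up data shapes rather than the paper’s actual rig:

```python
import numpy as np

def skin_vertices(rest_vertices, bone_transforms, weights):
    """rest_vertices: (V, 3); bone_transforms: (B, 4, 4); weights: (V, B)."""
    # Homogeneous coordinates so 4x4 transforms can translate as well as rotate.
    homo = np.concatenate([rest_vertices, np.ones((len(rest_vertices), 1))], axis=1)
    # Apply every bone's transform to every vertex: (V, B, 4).
    per_bone = np.einsum('bij,vj->vbi', bone_transforms, homo)
    # Blend per-bone results by each vertex's skinning weights: (V, 4).
    blended = np.einsum('vb,vbi->vi', weights, per_bone)
    return blended[:, :3]
```

Because a single bone parameter moves every vertex weighted to it, edits such as widening the jaw stay smooth and consistent, which is harder to guarantee when offsetting mesh vertices directly.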

The creation of a system that can produce realistic 3D models from 2D images is impressive in its own right, but the automatic generation system doesn’t just work on photos. It can also take sketches and caricatures of faces and render them as 3D models with impressive accuracy. The research team suspects the system handles such stylized inputs because it analyzes facial semantics instead of interpreting raw pixel values.

While the automatic character generator can create characters directly from photos, the researchers note that it can also serve as a supplementary tool, giving users a starting point they can further edit according to their preferences.