A team of researchers at NetEase, a Chinese gaming company, has created a system that can automatically extract faces from photos and generate in-game models from the image data. The results of the paper, entitled Face-to-Parameter Translation for Game Character Auto-Creation, were summarized by Synced on Medium.
More and more game developers are choosing to make use of AI to automate time-consuming tasks. For instance, game developers have been using AI algorithms to help render the movements of characters and objects. Another recent use of AI by game developers is creating more powerful character customization tools.
Character customization is a much-beloved feature of role-playing video games, allowing players of the game to customize their player avatars in a multitude of different ways. Many players choose to make their avatars look like themselves, which becomes more achievable as the sophistication of character customization systems increases. However, as these character creation tools become more sophisticated, they also become much more complex. Creating a character that bears a resemblance to oneself can take hours of adjusting sliders and altering cryptic parameters. The NetEase research team aims to change all that by creating a system that analyzes a photo of the player and generates a model of the player’s face on the in-game character.
The automatic character creation tool comprises two halves: an imitation learning system and a parameter translation system. The parameter translation system extracts features from the input image and creates parameters for the learning system to use. These parameters are then used by the imitation learning model to iteratively generate and improve on the representation of the input face.
The imitation learning system has an architecture that simulates the way the game engine creates character models in a consistent style. The imitation model is designed to extract the ground truth of the face, taking into account complex variables like beards, lipstick, eyebrows, and hairstyle. The face parameters are updated through gradient descent: the difference between the input features and the generated model is continually measured, and the parameters are tweaked until the in-game model aligns with the input features.
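The update loop described above can be sketched with a toy stand-in for the imitation network. Everything below — the linear model, the dimensions, and the learning rate — is an illustrative assumption, not NetEase’s actual setup:

```python
import numpy as np

# Minimal sketch (not the paper's code): a linear stand-in for the
# imitation network maps facial parameters p to rendered features G @ p.
# We recover p by gradient descent on the squared feature difference.
rng = np.random.default_rng(0)
G = rng.normal(size=(64, 16))      # stand-in imitator: features = G @ p
p_true = rng.normal(size=16)       # "ground truth" face parameters
target = G @ p_true                # features extracted from the photo

p = np.zeros(16)                   # start from a neutral face
lr = 0.01
for _ in range(2000):
    residual = G @ p - target      # difference to the input features
    grad = G.T @ residual          # gradient of 0.5 * ||G p - target||^2
    p -= lr * grad                 # tweak parameters toward the input
```

In the real system the "renderer" is a trained neural imitator rather than a matrix, but the principle is the same: because the imitator is differentiable, the face parameters themselves can be optimized by backpropagation.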
After the imitation network has been trained, the parameter translation system checks the imitation network’s outputs against the input image features, deciding on a feature space that allows the computation of optimal facial parameters.
The biggest challenge was ensuring that the 3D character models could preserve detail and appearances based on photos of humans. This is a cross-domain problem, where 3D generated images and 2D images of real people must be compared and the core features of both must be the same.
The researchers solved this problem with two different techniques. The first technique was to split up their model training into two different learning tasks: a facial content task and a discriminative task. The general shape and structure of a person’s face are discerned by minimizing the difference/loss between two global appearance values, while discriminative/fine details are filled in by minimizing the loss between things like shadows in a small region. The two different learning tasks are merged together to achieve a complete representation.
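A rough sketch of how such a split objective might look — the loss functions, patch size, and weights here are illustrative assumptions, not the paper’s exact formulation:

```python
import numpy as np

# Illustrative two-part objective: a global "facial content" term over the
# whole image plus a "discriminative" term computed only on small local
# patches (e.g. shadowed regions), merged into one training loss.
def content_loss(generated, reference):
    # Global appearance difference over the full face.
    return np.mean((generated - reference) ** 2)

def discriminative_loss(generated, reference, patches):
    # Fine-detail difference restricted to small 8x8 regions of interest.
    losses = [np.mean((generated[r:r+8, c:c+8] - reference[r:r+8, c:c+8]) ** 2)
              for r, c in patches]
    return np.mean(losses)

def total_loss(generated, reference, patches, alpha=1.0, beta=1.0):
    # The two learning tasks are merged into a single objective.
    return (alpha * content_loss(generated, reference)
            + beta * discriminative_loss(generated, reference, patches))
```

Weighting the two terms (here `alpha` and `beta`, assumed names) lets training balance overall face shape against small local details.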
The second technique used to generate 3D models was a 3D face construction system that uses a simulated skeletal structure, taking bone shape into account. This allowed the researchers to create much more sophisticated and accurate 3D images in comparison to other 3D modeling systems that rely on grids or face meshes.
The creation of a system that can create realistic 3D models based on 2D images is impressive enough in its own right, but the automatic generation system doesn’t just work on 2D photos. The system can also take sketches and caricatures of faces and render them as 3D models with impressive accuracy. The research team suspects that the system is able to generate accurate models based on 2D characters because the system analyzes facial semantics instead of interpreting raw pixel values.
While the automatic character generator can be used to create characters based on photos, the researchers say that users should also be able to use it as a supplementary technique and further edit the generated character according to their preferences.
AI Now Institute Warns About Misuse Of Emotion Detection Software And Other Ethical Issues
The AI Now Institute has released a report that urges lawmakers and other regulatory bodies to set hard limits on the use of emotion-detecting technology, banning it in cases where it may be used to make important decisions like employee hiring or student acceptance. In addition, the report contained a number of other suggestions regarding a range of topics in the AI field.
The AI Now Institute is a research institute based at NYU whose mission is to study AI’s impact on society. AI Now releases a yearly report demonstrating its findings regarding the state of AI research and the ethical implications of how AI is currently being used. As the BBC reported, this year’s report addressed topics like algorithmic discrimination, lack of diversity in AI research, and labor issues.
Affect recognition, the technical term for emotion-detection algorithms, is a rapidly growing area of AI research. Those who employ the technology to make decisions often claim that the systems can draw reliable information about people’s emotional states by analyzing microexpressions, along with other cues like tone of voice and body language. The AI Now institute notes that the technology is being employed across a wide range of applications, like determining who to hire, setting insurance prices, and monitoring if students are paying attention in class.
Prof. Kate Crawford, co-founder of AI Now, explained that it’s often believed that human emotions can be accurately predicted with relatively simple models. Crawford said that some firms are basing the development of their software on the work of Paul Ekman, a psychologist who hypothesized that there are only six basic types of emotions that register on the face. However, Crawford notes that since Ekman’s theory was introduced, studies have found that there is far greater variability in facial expressions and that expressions can change across situations and cultures very easily.
“At the same time as these technologies are being rolled out, large numbers of studies are showing that there is… no substantial evidence that people have this consistent relationship between the emotion that you are feeling and the way that your face looks,” said Crawford to the BBC.
For this reason, the AI Now Institute argues that much of affect recognition is based on unreliable theories and questionable science. Hence, it argues, emotion detection systems shouldn’t be deployed until more research has been done, and “governments should specifically prohibit the use of affect recognition in high-stakes decision-making processes”. AI Now argued that we should especially stop using the technology in “sensitive social and political contexts”, contexts that include employment, education, and policing.
At least one AI-development firm specializing in affect recognition, Emteq, agreed that there should be regulation that prevents misuse of the tech. The founder of Emteq, Charles Nduka, explained to the BBC that while AI systems can accurately recognize different facial expressions, there is not a simple map from expression to emotion. Nduka did express worry about regulation being taken too far and stifling research, noting that if “things are going to be banned, it’s very important that people don’t throw out the baby with the bathwater”.
As NextWeb reports, AI Now also recommended a number of other policies and norms that should guide the AI industry moving forward.
AI Now highlighted the need for the AI industry to make workplaces more diverse and stated that workers should be guaranteed a right to voice their concerns about invasive and exploitative AI. Tech workers should also have the right to know whether their efforts are being used to construct harmful or unethical applications.
AI Now also suggested that lawmakers take steps to require informed consent for the use of any data derived from health-related AI. Beyond this, it was advised that data privacy be taken more seriously and that the states should work to design privacy laws for biometric data covering both private and public entities.
Finally, the institute advised that the AI industry begin thinking and acting more globally, trying to address the larger political, societal, and ecological consequences of AI. It recommended a substantial effort to account for AI’s impact in terms of geographical displacement and climate, and that governments make the climate impact of the AI industry publicly available.
Former Intelligence Professionals Use AI To Uncover Human Trafficking
Business-oriented publication Fast Company reports on recent AI developments designed to uncover human trafficking by analyzing online sex ads.
Kara Smith, a senior targeting analyst with DeliverFund, a group of former CIA, NSA, special forces, and law enforcement officers who collaborate with law enforcement to bust sex trafficking operations in the U.S., gave the publication an example of an ad she and her research colleagues analyzed. In the ad, Molly, a ‘new bunny’ in Atlanta, supposedly “loves her job selling sex, domination, and striptease shows to men.”
In their analysis, Smith and her colleagues found clues that Molly is performing all these acts against her will. “For instance, she’s depicted in degrading positions, like hunched over on a bed with her rear end facing the camera.”
Smith adds other examples, like “bruises and bite marks are other telltale signs for some victims. So are tattoos that brand the women as the property of traffickers—crowns are popular images, as pimps often refer to themselves as “kings.” Photos with money being flashed around are other hallmarks of pimp showmanship.”
Until recently, researchers like Smith had to spot markers like these manually. Then, approximately a year ago, DeliverFund received an offer from a computer vision startup called XIX to automate the process with the use of AI.
As explained, “the company’s software scrapes images from sites used by sex traffickers and labels objects in images so experts like Smith can quickly search for and review suspect ads. Each sex ad contains an average of three photos, and XIX can scrape and analyze about 4,000 ads per minute, which is about the rate that new ones are posted online.”
DeliverFund got off to a relatively slow start: in its first three years of operation, when it had only three operatives, it was able to uncover four pimps. But after staffing up and starting its cooperation with XIX, in just the first nine months of 2019, “DeliverFund contributed to the arrests of 25 traffickers and 64 purchasers of underage sex. Over 50 victims were rescued in the process.” Among its accomplishments, it also provided assistance in the takedown of Backpage.com, “which had become the top place to advertise sex for hire—both by willing sex workers and by pimps trafficking victims.”
It is also noted that “XIX’s tool helps DeliverFund identify not only the victims of trafficking but also the traffickers. The online ads often feature personally identifiable information about the pimps themselves.”
The report explains that “XIX’s computer vision is a key tool in a digital workflow that DeliverFund uses to research abuse cases and compile what it calls intelligence reports.” Based on these reports, DeliverFund has provided intel to 63 different agencies across the U.S., but it also has a relationship with the attorney general’s offices of Montana, New Mexico, and Texas.
The organization also provides “free training to law officers on how to recognize and research abuse cases and use its digital tools. Participating agencies can research cases on their own and collaborate with other agencies, using a DeliverFund system called PATH (Platform for the Analysis and Targeting of Human Traffickers).”
According to the Human Trafficking Institute, about half of trafficking victims worldwide are minors, and Smith adds that “the overwhelming majority of sex trafficking victims are U.S. citizens.”
New AI Facial Recognition Technology Goes One Step Further
Facial recognition (FR) appears to be one of the areas where the use of artificial intelligence has advanced the furthest so far. As ZDNet notes, companies like Microsoft have already developed FR technology that can recognize facial expressions with the use of emotion tools. But the limiting factor so far has been that these tools were limited to eight so-called core states – anger, contempt, fear, disgust, happiness, sadness, surprise, or neutral.
Enter Japanese tech developer Fujitsu, with AI-based technology that takes facial recognition one step further in tracking expressed emotions.
The existing FR technology is based, as ZDNet explains, on “identifying various action units (AUs) – that is, certain facial muscle movements we make and which can be linked to specific emotions.” In a given example, “if both the AU ‘cheek raiser’ and the AU ‘lip corner puller’ are identified together, the AI can conclude that the person it is analyzing is happy.”
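The AU-combination logic in that example can be illustrated with a toy lookup; the AU names follow the article, but the mapping itself is a simplified assumption, not any vendor’s real model:

```python
# Toy illustration of AU-based emotion inference: certain combinations of
# detected facial action units are mapped to an emotion label.
AU_TO_EMOTION = {
    frozenset({"cheek raiser", "lip corner puller"}): "happiness",
    frozenset({"brow lowerer", "lip tightener"}): "anger",  # illustrative
}

def infer_emotion(detected_aus):
    """Return an emotion if a known AU combination is present, else 'neutral'."""
    detected = set(detected_aus)
    for combo, emotion in AU_TO_EMOTION.items():
        if combo <= detected:          # all AUs in the combo were detected
            return emotion
    return "neutral"

print(infer_emotion(["cheek raiser", "lip corner puller"]))  # happiness
```

Real systems score each AU with a classifier rather than a binary flag, which is exactly why, as the article goes on to explain, every AU needs its own large training dataset.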
As a Fujitsu spokesperson explained, “the issue with the current technology is that the AI needs to be trained on huge datasets for each AU. It needs to know how to recognize an AU from all possible angles and positions. But we don’t have enough images for that – so usually, it is not that accurate.”
Given the large amount of data needed to train AI to be effective at detecting emotions, it is very hard for currently available FR systems to recognize what the examined person is really feeling. And if the person is not sitting in front of the camera and looking straight into it, the task becomes even harder. Many experts have confirmed these problems in recent research.
Fujitsu claims it has found a solution that increases the quality of facial recognition results in detecting emotions. Instead of using a large number of images to train the AI, its newly created tool has the task to “extract more data out of one picture.” The company calls this a ‘normalization process’, which involves converting pictures “taken from a particular angle into images that resemble a frontal shot.”
As the spokesperson explained, “With the same limited dataset, we can better detect more AUs, even in pictures taken from an oblique angle, and with more AUs, we can identify complex emotions, which are more subtle than the core expressions currently analyzed.”
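Fujitsu has not published its method, but the general idea behind normalizing an oblique view toward a frontal one can be sketched as a least-squares landmark alignment. Everything below — the landmark template, the affine model, and the simulated head rotation — is an illustrative assumption:

```python
import numpy as np

# Sketch of "normalization": estimate a transform that maps landmarks
# detected in an oblique-angle photo onto a canonical frontal template,
# so downstream AU detectors see a frontal-like face.
def fit_affine(src, dst):
    # Solve dst ~= src @ A.T + t for a 2x2 matrix A and translation t.
    X = np.hstack([src, np.ones((len(src), 1))])   # rows [x, y, 1]
    params, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return params[:2].T, params[2]                 # A, t

frontal = np.array([[0., 0.], [2., 0.], [1., 1.5], [1., 2.5]])  # template
theta = 0.3                                        # simulated head rotation
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
oblique = frontal @ R.T + np.array([5., 3.])       # landmarks in the photo

A, t = fit_affine(oblique, frontal)
normalized = oblique @ A.T + t                     # back to frontal layout
```

A production system would warp the whole image (a perspective transform guided by a 3D face model) rather than just the landmark points, but the fitting step follows the same pattern.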
The company claims that now it can “detect emotional changes as elaborate as nervous laughter, with a detection accuracy rate of 81%, a number which was determined through ‘standard evaluation methods’.” In comparison, according to independent research, Microsoft tools have an accuracy rate of 60%, and also had problems with detecting emotions when it was working with pictures taken from more oblique angles.
As for potential applications, Fujitsu mentions that its new tools could, among other things, be used for road safety “by detecting even small changes in drivers’ concentration.”