Facial recognition is one of the areas where the use of artificial intelligence has advanced the furthest so far. As ZDNet notes, companies like Microsoft have already developed facial recognition (FR) technology that can recognize facial expressions with the use of emotion tools. But the limiting factor so far has been that these tools were limited to eight so-called core states: anger, contempt, fear, disgust, happiness, sadness, surprise, and neutral.
Now steps in Japanese tech developer Fujitsu, with AI-based technology that takes facial recognition one step further in tracking expressed emotions.
The existing FR technology is based, as ZDNet explains, on “identifying various action units (AUs) – that is, certain facial muscle movements we make and which can be linked to specific emotions.” In the given example, “if both the AU ‘cheek raiser’ and the AU ‘lip corner puller’ are identified together, the AI can conclude that the person it is analyzing is happy.”
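The AU-based inference ZDNet describes can be pictured as a simple rule lookup. The sketch below is illustrative only: the AU codes follow the published Facial Action Coding System (FACS), and the rule sets are textbook pairings rather than any vendor's actual model.

```python
# Minimal sketch of rule-based AU-to-emotion mapping (illustrative only).
# AU codes follow the Facial Action Coding System (FACS):
# AU6 = cheek raiser, AU12 = lip corner puller, AU4 = brow lowerer, etc.

EMOTION_RULES = {
    frozenset({"AU6", "AU12"}): "happiness",          # cheek raiser + lip corner puller
    frozenset({"AU1", "AU4", "AU15"}): "sadness",
    frozenset({"AU4", "AU5", "AU7", "AU23"}): "anger",
    frozenset({"AU1", "AU2", "AU5", "AU26"}): "surprise",
}

def infer_emotion(detected_aus):
    """Return the first emotion whose rule set is fully contained
    in the detected action units, else 'neutral'."""
    detected = set(detected_aus)
    for rule, emotion in EMOTION_RULES.items():
        if rule <= detected:
            return emotion
    return "neutral"

print(infer_emotion(["AU6", "AU12"]))  # happiness
print(infer_emotion(["AU9"]))          # neutral
```

Real systems learn these mappings statistically rather than from fixed rules, which is exactly why they need the large per-AU training sets the article goes on to discuss.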
As a Fujitsu spokesperson explained, “the issue with the current technology is that the AI needs to be trained on huge datasets for each AU. It needs to know how to recognize an AU from all possible angles and positions. But we don’t have enough images for that – so usually, it is not that accurate.”
Because of the large amount of data needed to train AI to detect emotions effectively, it is very hard for currently available FR systems to recognize what the examined person is actually feeling. And if the person is not sitting in front of the camera and looking straight into it, the task becomes even harder. Recent research has confirmed these problems.
Fujitsu claims it has found a way to improve the quality of facial recognition results in detecting emotions. Instead of using a large number of images to train the AI, its newly created tool is designed to “extract more data out of one picture.” The company calls this a ‘normalization process’, which involves converting pictures “taken from a particular angle into images that resemble a frontal shot.”
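Fujitsu has not published the details of its normalization algorithm, but the general idea can be sketched as follows: detect facial landmarks in the oblique shot, then estimate a transform that maps them onto a canonical frontal layout. The landmark coordinates below are hypothetical, and a real system would warp the whole image, not just the points.

```python
import numpy as np

# Illustrative sketch only: Fujitsu's normalization method is not public.
# Canonical frontal positions (hypothetical coordinates) for three
# landmarks: left eye, right eye, nose tip.
FRONTAL = np.array([[30.0, 40.0], [70.0, 40.0], [50.0, 60.0]])

def estimate_affine(src, dst):
    """Least-squares affine transform mapping src points to dst points."""
    n = src.shape[0]
    A = np.hstack([src, np.ones((n, 1))])  # rows of [x, y, 1]
    coeffs, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return coeffs                          # 3x2 transform matrix

def normalize(landmarks):
    """Warp landmark coordinates from an oblique pose toward frontal."""
    pts = np.asarray(landmarks, dtype=float)
    M = estimate_affine(pts, FRONTAL)
    return np.hstack([pts, np.ones((len(pts), 1))]) @ M

# Landmarks from a hypothetical oblique shot (foreshortened, shifted):
oblique = np.array([[25.0, 45.0], [55.0, 42.0], [38.0, 63.0]])
print(np.round(normalize(oblique), 1))
```

With three point correspondences the affine fit is exact, so the oblique landmarks land exactly on the frontal template; with more landmarks the fit becomes a least-squares approximation.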
As the spokesperson explained, “With the same limited dataset, we can better detect more AUs, even in pictures taken from an oblique angle, and with more AUs, we can identify complex emotions, which are more subtle than the core expressions currently analyzed.”
The company claims that it can now “detect emotional changes as elaborate as nervous laughter, with a detection accuracy rate of 81%, a number which was determined through ‘standard evaluation methods’.” In comparison, according to independent research, Microsoft tools have an accuracy rate of 60% and also have problems detecting emotions in pictures taken from more oblique angles.
As for potential applications, Fujitsu mentions that its new tools could, among other things, be used for road safety “by detecting even small changes in drivers’ concentration.”
AI Now Institute Warns About Misuse Of Emotion Detection Software And Other Ethical Issues
The AI Now Institute has released a report that urges lawmakers and other regulatory bodies to set hard limits on the use of emotion-detecting technology, banning it in cases where it may be used to make important decisions like employee hiring or student acceptance. In addition, the report contained a number of other suggestions regarding a range of topics in the AI field.
The AI Now Institute is a research institute based at NYU whose mission is to study AI’s impact on society. AI Now releases a yearly report presenting its findings regarding the state of AI research and the ethical implications of how AI is currently being used. As the BBC reported, this year’s report addressed topics like algorithmic discrimination, lack of diversity in AI research, and labor issues.
Affect recognition, the technical term for emotion-detection algorithms, is a rapidly growing area of AI research. Those who employ the technology to make decisions often claim that the systems can draw reliable information about people’s emotional states by analyzing microexpressions, along with other cues like tone of voice and body language. The AI Now institute notes that the technology is being employed across a wide range of applications, like determining who to hire, setting insurance prices, and monitoring if students are paying attention in class.
Prof. Kate Crawford, co-founder of AI Now, explained that it’s often believed that human emotions can accurately be predicted with relatively simple models. Crawford said that some firms are basing the development of their software on the work of Paul Ekman, a psychologist who hypothesized that there are only six basic types of emotions that register on the face. However, Crawford notes that since Ekman’s theory was introduced, studies have found that there is far greater variability in facial expressions and that expressions can change across situations and cultures very easily.
“At the same time as these technologies are being rolled out, large numbers of studies are showing that there is… no substantial evidence that people have this consistent relationship between the emotion that you are feeling and the way that your face looks,” said Crawford to the BBC.
For this reason, the AI Now Institute argues that much of affect recognition is based on unreliable theories and questionable science. Hence, emotion detection systems shouldn’t be deployed until more research has been done, and “governments should specifically prohibit the use of affect recognition in high-stakes decision-making processes”. AI Now argued that we should especially stop using the technology in “sensitive social and political contexts”, contexts that include employment, education, and policing.
At least one AI-development firm specializing in affect recognition, Emteq, agreed that there should be regulation that prevents misuse of the tech. The founder of Emteq, Charles Nduka, explained to the BBC that while AI systems can accurately recognize different facial expressions, there is not a simple map from expression to emotion. Nduka did express worry about regulation being taken too far and stifling research, noting that if “things are going to be banned, it’s very important that people don’t throw out the baby with the bathwater”.
As NextWeb reports, AI Now also recommended a number of other policies and norms that should guide the AI industry moving forward.
AI Now highlighted the need for the AI industry to make workplaces more diverse and stated that workers should be guaranteed a right to voice their concerns about invasive and exploitative AI. Tech workers should also have the right to know if their efforts are being used to construct harmful or unethical work.
AI Now also suggested that lawmakers take steps to require informed consent for the use of any data derived from health-related AI. Beyond this, it was advised that data privacy be taken more seriously and that the states should work to design privacy laws for biometric data covering both private and public entities.
Finally, the institute advised that the AI industry begin thinking and acting more globally, trying to address the larger political, societal, and ecological consequences of AI. It was recommended that there be a substantial effort to account for AI’s impact regarding geographical displacement and climate and that governments should make the climate impact of the AI industry publicly available.
Former Intelligence Professionals Use AI To Uncover Human Trafficking
Business-oriented publication Fast Company reports on recent AI developments designed to uncover human trafficking by analyzing online sex ads.
Kara Smith is a senior targeting analyst with DeliverFund, a group of former CIA, NSA, special forces, and law enforcement officers who collaborate with law enforcement to bust sex trafficking operations in the U.S. She gave the publication an example of an ad she and her research colleagues analyzed. In the ad, Molly, a ‘new bunny’ in Atlanta, supposedly “loves her job selling sex, domination, and striptease shows to men.”
In their analysis, Smith and her colleagues found clues that Molly is performing all these acts against her will. “For instance, she’s depicted in degrading positions, like hunched over on a bed with her rear end facing the camera.”
Smith adds other examples, like “bruises and bite marks are other telltale signs for some victims. So are tattoos that brand the women as the property of traffickers—crowns are popular images, as pimps often refer to themselves as “kings.” Photos with money being flashed around are other hallmarks of pimp showmanship.”
Until recently, researchers like Smith had to spot markers like these manually. Then, approximately a year ago, her research group DeliverFund received an offer from a computer vision startup called XIX to automate the process with the use of AI.
As explained, “the company’s software scrapes images from sites used by sex traffickers and labels objects in images so experts like Smith can quickly search for and review suspect ads. Each sex ad contains an average of three photos, and XIX can scrape and analyze about 4,000 ads per minute, which is about the rate that new ones are posted online.”
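XIX's system is proprietary, but the workflow the article describes, where an object-detection model labels each photo and analysts search those labels, can be pictured with a minimal sketch. The ad records and marker list below are hypothetical.

```python
# Illustrative sketch only: XIX's pipeline is proprietary. This shows the
# general idea of flagging ads whose machine-generated image labels match
# known trafficking markers like the ones Smith describes.

MARKERS = {"bruise", "bite mark", "crown tattoo", "cash"}

def flag_suspect_ads(ads):
    """Each ad is a dict with an 'id' and a list of 'labels' produced by
    an object-detection model. Return ids of ads whose labels intersect
    the marker set, so an analyst can review them first."""
    return [ad["id"] for ad in ads if MARKERS & set(ad["labels"])]

ads = [
    {"id": "a1", "labels": ["bed", "cash", "person"]},
    {"id": "a2", "labels": ["person", "window"]},
]
print(flag_suspect_ads(ads))  # ['a1']
```

The value of automation here is throughput: at roughly 4,000 ads and 12,000 photos per minute, a label-based filter like this narrows the stream to the small fraction worth an expert's attention.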
DeliverFund got off to a relatively slow start: in its first three years of operation, when it had only three operatives, it was able to uncover four pimps. But after staffing up and starting its cooperation with XIX, in just the first nine months of 2019, “DeliverFund contributed to the arrests of 25 traffickers and 64 purchasers of underage sex. Over 50 victims were rescued in the process.” Among its accomplishments, it also provided assistance in the takedown of Backpage.com, “which had become the top place to advertise sex for hire—both by willing sex workers and by pimps trafficking victims.”
It is also noted that “XIX’s tool helps DeliverFund identify not only the victims of trafficking but also the traffickers. The online ads often feature personally identifiable information about the pimps themselves.”
The report explains that “XIX’s computer vision is a key tool in a digital workflow that DeliverFund uses to research abuse cases and compile what it calls intelligence reports.” Based on these reports, DeliverFund has provided intel to 63 different agencies across the U.S., but it also has a relationship with the attorney general’s offices of Montana, New Mexico, and Texas.
The organization also provides “free training to law officers on how to recognize and research abuse cases and use its digital tools. Participating agencies can research cases on their own and collaborate with other agencies, using a DeliverFund system called PATH (Platform for the Analysis and Targeting of Human Traffickers).”
According to the Human Trafficking Institute, about half of trafficking victims worldwide are minors, and Smith adds that “the overwhelming majority of sex trafficking victims are U.S. citizens.”
AI Being Used To Personalize Job Training and Education
The landscape of jobs will likely be dramatically transformed by AI in the coming years, and while some jobs will fall by the wayside, other jobs will be created. It isn’t clear yet how job automation will impact the economy, or whether more jobs will be created than displaced, but it is obvious that those who work in the positions created by AI will need training to be effective at them.
Displaced workers are going to need training to work in the new AI-related job fields, but how can they be trained quickly enough to remain competitive in the workplace? The answer could be more AI, which could help personalize education and training.
Bryan Talebi is the founder and CEO of the startup Ahura AI, which aims to use AI to make online education programs more efficient, targeting them at the specific individuals using them. Talebi explained to SingularityHub that Ahura is in the process of creating a product that will take biometric data from people taking online education programs and use this data to adapt the course material to the individual’s needs.
While there are security and privacy concerns associated with the recording and analysis of an individual’s behavioral data, the trade-off would be that, in theory, people would acquire valuable skills much more quickly. By giving personalized material and instruction to learners, a learner’s individual needs and means can be accounted for. Talebi explained that Ahura AI’s prototype personalized education system is already showing some impressive results. According to Talebi, Ahura AI’s system helps people learn between three to five times faster than current education models allow.
The AI-enhanced learning system developed by Ahura works through a series of cameras and microphones. Most modern mobile devices, tablets, and laptops have cameras and microphones, so there is little additional cost of investment for users of the platform. The camera is used to track facial movements of the user, and it captures things like eye movements, fidgeting, and micro-expressions. Meanwhile, the microphone tracks voice sentiment, analyzing the learner’s word usage and tone. The idea is that these metrics can be used to detect when a learner is getting bored/disinterested or frustrated, and adjust the content to keep the learner engaged.
Talebi explained that Ahura uses the collected information to determine an optimal way to deliver the material to each student of the course. While some people might learn most easily through videos, other people will learn more easily through text, while others will learn best through experience. The primary goal of Ahura is to shift the format of the content in real-time in order to improve the information retention of the learner, which it does by delivering content that improves attention.
Because Ahura can interpret user facial expressions and body language, it can predict when a user is getting bored and about to switch away to social media. According to Talebi, Ahura can predict, ten seconds in advance, when someone will switch over to Instagram or Facebook, with 60% confidence. Talebi acknowledges there is still a lot of work to be done, as Ahura’s goal is to get that metric up to 95% accuracy. However, he believes the platform’s performance so far shows promise.
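Ahura AI's actual model is not public, but the feedback loop described above, where engagement signals drive a decision to keep or change the content format, can be sketched with a simple heuristic. The signal names, weights, and threshold below are all made up for illustration.

```python
# Illustrative sketch only -- Ahura AI's model is not public. Combines
# hypothetical per-interval signals into an engagement score and rotates
# the content format when the score drops below a threshold.

FORMATS = ["video", "text", "interactive"]

def engagement_score(gaze_on_screen, fidget_rate, voice_negativity):
    """Weighted heuristic: more time looking at the screen raises the
    score; fidgeting and negative vocal tone lower it. Weights invented."""
    return 0.6 * gaze_on_screen - 0.25 * fidget_rate - 0.15 * voice_negativity

def next_format(current, score, threshold=0.3):
    """Keep the current format while the learner seems engaged; rotate
    to the next format when the score suggests disengagement."""
    if score >= threshold:
        return current
    return FORMATS[(FORMATS.index(current) + 1) % len(FORMATS)]

print(next_format("video", engagement_score(0.9, 0.1, 0.0)))  # stays "video"
print(next_format("video", engagement_score(0.3, 0.8, 0.5)))  # switches to "text"
```

A production system would replace the hand-tuned weights with a learned model, but the control loop, score the signals, then adapt the delivery format, is the core idea Talebi describes.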
Talebi also acknowledges a desire to utilize the same algorithms and design principles used by Twitter, Facebook, and other social media platforms, which may concern some people, as these platforms are designed to be addictive. While creating a more compelling education platform is arguably a more noble goal, there’s also the issue that the platform itself could become addictive. Moreover, there’s a concern about the potential to misuse such sensitive information in general. Talebi said that Ahura is sensitive to these concerns and that the company finds it incredibly important that the data it collects is never misused, noting that some investors immediately began inquiring about the marketing potential of the platform.
“It’s important that we don’t use this technology in those ways. We’re aware that things can go sideways, so we’re hoping to put up guardrails to ensure our system is helping and not harming society,” Talebi said.
Talebi explained that the company wants to create an ethics board that can review the ways the data the company collects is used. Talebi said the board should be diverse in thought, gender, and background, and that it should “have teeth”, to help ensure that their software is being designed ethically.
Ahura is currently developing its alpha prototypes, and the company hopes that during beta testing the platform will be available to over 200,000 users in a large-scale trial against a control group. The company also hopes to expand the kinds of biometric data used by its system, planning to log data from things like sleep patterns, heart rate, facial flushing, and pupil dilation.