Scientists at the University of Oxford have created new artificial intelligence software that can recognize and track the faces of individual chimpanzees living in the wild. The software will help researchers reduce the time and resources needed to analyze video footage of wild chimpanzees. It could also have a major impact on the application of AI to wildlife conservation, an area that receives comparatively little attention. The research was published in Science Advances.
Dan Schofield, researcher and DPhil student at Oxford University’s Primate Models Lab, School of Anthropology, spoke about the newly developed technology.
“For species like chimpanzees, which have complex social lives and live for many years, getting snapshots of their behaviour from short-term field research can only tell us so much,” he said. “By harnessing the power of machine learning to unlock large video archives, it makes it feasible to measure behaviour over the long term, for example observing how the social interactions of a group change over several generations.”
The researchers developed the new system by training a computer model on over 10 million images from Kyoto University’s Primate Research Institute (PRI), which holds an archive of videos of wild chimpanzees in Guinea, West Africa. Unlike previous software, it can continuously track and recognize individuals across a wide variety of poses, and it remains highly accurate even in difficult conditions such as low lighting, poor image quality, and motion blur.
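The identification step in a system like this can be sketched as nearest-neighbour matching of face embeddings against a gallery of known individuals. The sketch below is a generic illustration of that idea, not the Oxford team’s actual method; the names, vectors, and threshold are all invented, and a real system would compare high-dimensional embeddings produced by a trained CNN.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def identify(embedding, gallery, threshold=0.6):
    """Return the name of the closest known individual, or None if no
    gallery entry clears the similarity threshold."""
    best_name, best_score = None, threshold
    for name, ref in gallery.items():
        score = cosine(embedding, ref)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# Toy gallery: in a real system each entry would be an embedding
# produced by a network trained on millions of labelled video frames.
gallery = {
    "jire": [1.0, 0.1, 0.0],
    "foaf": [0.0, 1.0, 0.2],
}

print(identify([0.9, 0.2, 0.0], gallery))  # -> jire
print(identify([0.0, 0.0, 1.0], gallery))  # -> None (no confident match)
```

Thresholding matters in the wild: a face that matches nothing in the gallery (a new or unknown animal) should return no identity rather than the least-bad match.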
Arsha Nagrani is the co-author of the study and a DPhil student at the Department of Engineering Science, University of Oxford.
“Access to this large video archive has allowed us to use cutting edge deep neural networks to train models at a scale that was previously not possible,” says Nagrani. “Additionally, our method differs from previous primate face recognition software in that it can be applied to raw video footage with limited manual intervention or pre-processing, saving hours of time and resources.”
While the new software is currently being used with chimpanzees, it could benefit many other areas. It would be extremely useful for monitoring species for conservation, and it could be applied to species other than chimpanzees. The technology is a step toward using artificial intelligence to solve problems in the wild.
“All our software is available open-source for the research community,” says Nagrani. “We hope that this will help researchers across other parts of the world apply the same cutting-edge techniques to their unique animal data sets. As a computer vision researcher, it is extremely satisfying to see these methods applied to solve real, challenging biodiversity problems.”
“With an increasing biodiversity crisis and many of the world’s ecosystems under threat, the ability to closely monitor different species and populations using automated systems will be crucial for conservation efforts, as well as animal behaviour research,” Schofield says. “Interdisciplinary collaborations like this have huge potential to make an impact, by finding novel solutions for old problems, and asking biological questions which were previously not feasible on a large scale.”
This new technology is important for a variety of reasons. Not only could it play a major role in some of society’s most pressing problems, like conservation and environmental protection, but it could also change the way we think about artificial intelligence. Right now, almost all of the discussion surrounding AI focuses on human applications. There are constant developments in medicine, AI-human interfaces, consumer technology, warfare, and much more, but wildlife protection and animal behavior studies have not received the same attention. These are areas AI could benefit greatly, and these new developments could help direct some of that attention there.
AI Now Institute Warns About Misuse Of Emotion Detection Software And Other Ethical Issues
The AI Now Institute has released a report that urges lawmakers and other regulatory bodies to set hard limits on the use of emotion-detecting technology, banning it in cases where it may be used to make important decisions like employee hiring or student acceptance. In addition, the report contained a number of other suggestions regarding a range of topics in the AI field.
The AI Now Institute is a research institute based at NYU whose mission is to study AI’s impact on society. AI Now releases a yearly report presenting its findings on the state of AI research and the ethical implications of how AI is currently being used. As the BBC reported, this year’s report addressed topics like algorithmic discrimination, lack of diversity in AI research, and labor issues.
Affect recognition, the technical term for emotion-detection algorithms, is a rapidly growing area of AI research. Those who employ the technology to make decisions often claim that the systems can draw reliable information about people’s emotional states by analyzing microexpressions, along with other cues like tone of voice and body language. The AI Now institute notes that the technology is being employed across a wide range of applications, like determining who to hire, setting insurance prices, and monitoring if students are paying attention in class.
Prof. Kate Crawford, co-founder of AI Now, explained that it is often believed that human emotions can be accurately predicted with relatively simple models. Crawford said that some firms are basing their software on the work of Paul Ekman, a psychologist who hypothesized that there are only six basic types of emotion that register on the face. However, Crawford notes that since Ekman’s theory was introduced, studies have found far greater variability in facial expressions, which can change easily across situations and cultures.
“At the same time as these technologies are being rolled out, large numbers of studies are showing that there is… no substantial evidence that people have this consistent relationship between the emotion that you are feeling and the way that your face looks,” said Crawford to the BBC.
For this reason, the AI Now Institute argues that much of affect recognition is based on unreliable theories and questionable science. Hence, it holds that emotion-detection systems shouldn’t be deployed until more research has been done, and that “governments should specifically prohibit the use of affect recognition in high-stakes decision-making processes”. AI Now argued that we should especially stop using the technology in “sensitive social and political contexts”, contexts that include employment, education, and policing.
At least one AI-development firm specializing in affect recognition, Emteq, agreed that there should be regulation that prevents misuse of the tech. The founder of Emteq, Charles Nduka, explained to the BBC that while AI systems can accurately recognize different facial expressions, there is not a simple map from expression to emotion. Nduka did express worry about regulation being taken too far and stifling research, noting that if “things are going to be banned, it’s very important that people don’t throw out the baby with the bathwater”.
As NextWeb reports, AI Now also recommended a number of other policies and norms that should guide the AI industry moving forward.
AI Now highlighted the need for the AI industry to make workplaces more diverse, and stated that workers should be guaranteed the right to voice their concerns about invasive and exploitative AI. Tech workers should also have the right to know whether their efforts are being used to build harmful or unethical systems.
AI Now also suggested that lawmakers take steps to require informed consent for the use of any data derived from health-related AI. Beyond this, it was advised that data privacy be taken more seriously and that the states should work to design privacy laws for biometric data covering both private and public entities.
Finally, the institute advised that the AI industry begin thinking and acting more globally, trying to address the larger political, societal, and ecological consequences of AI. It recommended a substantial effort to account for AI’s impact on geographical displacement and the climate, and that governments make the climate impact of the AI industry publicly available.
Former Intelligence Professionals Use AI To Uncover Human Trafficking
Business-oriented publication Fast Company reports on recent AI developments designed to uncover human trafficking by analyzing online sex ads.
Kara Smith, a senior targeting analyst with DeliverFund, a group of former CIA, NSA, special forces, and law enforcement officers who collaborate with law enforcement to bust sex trafficking operations in the U.S., gave the publication an example of an ad she and her research colleagues analyzed. In the ad, Molly, a ‘new bunny’ in Atlanta, supposedly “loves her job selling sex, domination, and striptease shows to men.”
In their analysis, Smith and her colleagues found clues that Molly is performing all these acts against her will. “For instance, she’s depicted in degrading positions, like hunched over on a bed with her rear end facing the camera.”
Smith adds other examples, like “bruises and bite marks are other telltale signs for some victims. So are tattoos that brand the women as the property of traffickers—crowns are popular images, as pimps often refer to themselves as “kings.” Photos with money being flashed around are other hallmarks of pimp showmanship.”
Until recently, researchers like Smith had to spot markers like these manually. Then, approximately a year ago, DeliverFund received an offer from a computer vision startup called XIX to automate the process with AI.
As explained, “the company’s software scrapes images from sites used by sex traffickers and labels objects in images so experts like Smith can quickly search for and review suspect ads. Each sex ad contains an average of three photos, and XIX can scrape and analyze about 4,000 ads per minute, which is about the rate that new ones are posted online.”
DeliverFund got off to a relatively slow start: in its first three years of operation it had only three operatives and was able to uncover just four pimps. But after staffing up and beginning its cooperation with XIX, in the first nine months of 2019 alone, “DeliverFund contributed to the arrests of 25 traffickers and 64 purchasers of underage sex. Over 50 victims were rescued in the process.” Among its accomplishments, it also provided assistance in the takedown of Backpage.com, “which had become the top place to advertise sex for hire—both by willing sex workers and by pimps trafficking victims.”
It is also noted that “XIX’s tool helps DeliverFund identify not only the victims of trafficking but also the traffickers. The online ads often feature personally identifiable information about the pimps themselves.”
The report explains that “XIX’s computer vision is a key tool in a digital workflow that DeliverFund uses to research abuse cases and compile what it calls intelligence reports.” Based on these reports, DeliverFund has provided intel to 63 different agencies across the U.S., but it also has a relationship with the attorney general’s offices of Montana, New Mexico, and Texas.
The organization also provides “free training to law officers on how to recognize and research abuse cases and use its digital tools. Participating agencies can research cases on their own and collaborate with other agencies, using a DeliverFund system called PATH (Platform for the Analysis and Targeting of Human Traffickers).”
According to the Human Trafficking Institute, about half of trafficking victims worldwide are minors, and Smith adds that “the overwhelming majority of sex trafficking victims are U.S. citizens.”
New AI Facial Recognition Technology Goes One Step Further
Facial recognition (FR) is arguably the area where the use of artificial intelligence has advanced the furthest so far. As ZDNet notes, companies like Microsoft have already developed facial recognition technology that can identify facial expressions using emotion tools. But the limiting factor so far has been that these tools were restricted to eight so-called core states: anger, contempt, fear, disgust, happiness, sadness, surprise, and neutral.
Now Japanese tech developer Fujitsu steps in with AI-based technology that takes facial recognition one step further in tracking expressed emotions.
The existing FR technology is based, as ZDNet explains, on “identifying various action units (AUs) – that is, certain facial muscle movements we make and which can be linked to specific emotions.” In the example given, “if both the AU ‘cheek raiser’ and the AU ‘lip corner puller’ are identified together, the AI can conclude that the person it is analyzing is happy.”
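The AU-to-emotion step described above can be sketched as a simple set-containment check: an emotion is reported when its full signature of action units has been detected. This is a hypothetical illustration, not Microsoft’s or Fujitsu’s implementation; the AU codes follow the Facial Action Coding System convention (AU6 is the “cheek raiser”, AU12 the “lip corner puller”), but the rule table here is deliberately simplified.

```python
# Hypothetical mapping from facial action units (AUs) to emotions.
# AU6 + AU12 is the classic "happiness" signature mentioned by ZDNet.
EMOTION_RULES = {
    "happiness": {"AU6", "AU12"},
    "surprise":  {"AU1", "AU2", "AU26"},
    "sadness":   {"AU1", "AU4", "AU15"},
}

def infer_emotions(detected_aus):
    """Return every emotion whose complete AU signature was detected."""
    detected = set(detected_aus)
    return [emotion for emotion, required in EMOTION_RULES.items()
            if required <= detected]

print(infer_emotions({"AU6", "AU12", "AU25"}))  # -> ['happiness']
print(infer_emotions({"AU1", "AU2", "AU26"}))   # -> ['surprise']
```

The hard part in practice is not this lookup but reliably detecting the AUs themselves from imagery, which is exactly where the training-data bottleneck discussed below arises.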
As a Fujitsu spokesperson explained, “the issue with the current technology is that the AI needs to be trained on huge datasets for each AU. It needs to know how to recognize an AU from all possible angles and positions. But we don’t have enough images for that – so usually, it is not that accurate.”
Given the large amount of data needed to train AI to detect emotions effectively, it is very hard for currently available FR to recognize what the examined person is really feeling. And if the person is not sitting in front of the camera and looking straight into it, the task becomes even harder. Many experts have confirmed these problems in recent research.
Fujitsu claims it has found a solution that improves the quality of facial recognition in detecting emotions. Instead of using a large number of images to train the AI, its newly created tool is designed to “extract more data out of one picture.” The company calls this a ‘normalization process’, which involves converting pictures “taken from a particular angle into images that resemble a frontal shot.”
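Fujitsu has not published implementation details, but one minimal way to “frontalize” a face is to fit an affine transform that maps detected facial landmarks onto canonical frontal positions, then warp the image with that transform. The sketch below solves such a transform exactly from three landmark correspondences using Cramer’s rule; all coordinates are invented, and a real system would use many more landmarks and a richer 3D camera model.

```python
def solve_affine(src, dst):
    """Solve the 2x3 affine transform mapping three 2D points src -> dst.

    Returns (a, b, c, d, e, f) such that x' = a*x + b*y + c
    and y' = d*x + e*y + f.
    """
    (x1, y1), (x2, y2), (x3, y3) = src

    def solve3(r1, r2, r3):
        # Cramer's rule for [[x1, y1, 1], [x2, y2, 1], [x3, y3, 1]] @ p = r
        det = x1 * (y2 - y3) - y1 * (x2 - x3) + (x2 * y3 - x3 * y2)
        da = r1 * (y2 - y3) - y1 * (r2 - r3) + (r2 * y3 - r3 * y2)
        db = x1 * (r2 - r3) - r1 * (x2 - x3) + (x2 * r3 - x3 * r2)
        dc = (x1 * (y2 * r3 - y3 * r2) - y1 * (x2 * r3 - x3 * r2)
              + r1 * (x2 * y3 - x3 * y2))
        return da / det, db / det, dc / det

    a, b, c = solve3(*(p[0] for p in dst))
    d, e, f = solve3(*(p[1] for p in dst))
    return a, b, c, d, e, f

def apply_affine(t, point):
    """Apply a 2x3 affine transform to a single 2D point."""
    a, b, c, d, e, f = t
    x, y = point
    return (a * x + b * y + c, d * x + e * y + f)

# Hypothetical landmarks: two eyes and the nose tip detected in an
# oblique shot, mapped onto canonical frontal positions.
oblique = [(30.0, 40.0), (55.0, 38.0), (44.0, 60.0)]   # detected
frontal = [(30.0, 40.0), (70.0, 40.0), (50.0, 65.0)]   # canonical
t = solve_affine(oblique, frontal)

# Each fitting landmark now lands (up to float rounding) on its
# frontal target, e.g. oblique[1] maps onto approximately (70.0, 40.0).
print(apply_affine(t, oblique[1]))
```

In a full pipeline this transform would be applied to every pixel (or its inverse used for sampling), so the downstream AU detector only ever sees approximately frontal faces, which is what lets it work from a smaller training set.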
As the spokesperson explained, “With the same limited dataset, we can better detect more AUs, even in pictures taken from an oblique angle, and with more AUs, we can identify complex emotions, which are more subtle than the core expressions currently analyzed.”
The company claims that now it can “detect emotional changes as elaborate as nervous laughter, with a detection accuracy rate of 81%, a number which was determined through ‘standard evaluation methods’.” In comparison, according to independent research, Microsoft tools have an accuracy rate of 60%, and also had problems with detecting emotions when it was working with pictures taken from more oblique angles.
As for potential applications, Fujitsu mentions that its new tool could, among other things, be used for road safety “by detecting even small changes in drivers’ concentration.”