If there is one application of artificial intelligence the general public is familiar with, it is facial recognition. Whether it is unlocking a mobile phone or the algorithms Facebook uses to find eyes and other parts of a face in images, facial recognition has become standard.
But now scientists dealing with complex questions like the composition of the universe are starting to use a modified version of this ‘standard’ facial recognition in an attempt to discover how much dark matter there is in the universe and where it might be located.
As Digital Trends and Futurity note in their reports on the subject, “physicists believe that understanding this mysterious substance is necessary to explain fundamental questions about the underlying structure of the universe.”
Researchers in Alexandre Refregier’s group at the Institute of Particle Physics and Astrophysics at ETH Zurich, Switzerland, have started using the deep neural network methods that lie behind facial recognition to develop new, special tools in an attempt to uncover what is still one of the universe’s secrets.
As Janis Fluri, one of the researchers working on the project, told Digital Trends, “The algorithm we [use] is very close to what is commonly used in facial recognition,” adding that “the beauty of A.I. is that it can learn from basically any data. In facial recognition, it learns to recognize eyes, mouths, and noses, while we are looking for structures that give us hints about dark matter. This pattern recognition is essentially the core of the algorithm. Ultimately, we only adapted it to infer the underlying cosmological parameters.”
As the reports explain, scientists hypothesize that dark matter accounts for around 27% of the universe, outweighing visible matter by a ratio of approximately six to one. The theory also goes that dark matter gives galaxies “the extra mass they require to not tear themselves apart like a suicidal paper bag. It is what drives normal matter in the form of dust and gas to collect and assemble into stars and galaxies.”
What the researchers look for are areas around clusters of galaxies that appear warped. By reverse-engineering these distortions, “they can then isolate where they believe the densest concentrations of matter, both visible and invisible, can be found.”
Fluri and Tomasz Kacprzak, another researcher in the group, explained that they trained their neural network by feeding it computer-generated data that simulates the universe. Repeatedly analyzing the resulting dark matter maps allowed them to extract ‘cosmological parameters’ from real images of the sky.
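The general idea – train on simulations where the parameters are known, then infer them from real data – can be illustrated with a deliberately tiny sketch. Everything below is hypothetical: random noise whose amplitude plays the role of a cosmological parameter stands in for the simulated mass maps, and a linear fit on a hand-picked statistic stands in for the deep network.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_map(param, size=32):
    """Toy 'mass map': the parameter scales the amplitude of the
    density fluctuations (a crude stand-in for a real simulation)."""
    return param * rng.standard_normal((size, size))

def summary_statistic(m):
    """A single hand-picked feature: the standard deviation of the map.
    A real pipeline lets a deep network learn far richer features."""
    return m.std()

# "Training": simulate maps for a range of known parameter values
params = np.linspace(0.5, 1.2, 200)
features = np.array([summary_statistic(simulate_map(p)) for p in params])

# Fit a simple model mapping feature -> parameter
slope, intercept = np.polyfit(features, params, 1)

# "Inference": estimate the parameter behind an unseen map
true_param = 0.8
estimate = slope * summary_statistic(simulate_map(true_param)) + intercept
print(round(estimate, 2))  # close to 0.8
```

The real work replaces both the simulator (N-body cosmological simulations) and the regressor (a convolutional network reading the map pixels directly), but the train-on-simulations, infer-on-sky structure is the same.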
Compared to the standard methods used in this process, which are based on human-made statistical analysis, their results showed a 30% improvement. As Fluri explained, “the A.I. algorithm needs a lot of data to learn in the training phase. It is very important that this training data, in our case simulations, are as accurate as possible. Otherwise, it will learn features that are not present in real data.”
After training the network, they fed it actual dark matter maps obtained from the KiDS-450 dataset, made using the VLT Survey Telescope (VST) in Chile. This dataset covers a total area some 2,200 times the size of the full moon and contains records of around 15 million galaxies.
As Futurity explains, by repeatedly analyzing the dark matter maps, the neural network taught itself to look for the right kind of features in them and to extract more and more of the desired information. “In the Facebook analogy, it got better at distinguishing random oval shapes from eyes or mouths.”
AI Now Institute Warns About Misuse Of Emotion Detection Software And Other Ethical Issues
The AI Now Institute has released a report that urges lawmakers and other regulatory bodies to set hard limits on the use of emotion-detecting technology, banning it in cases where it may be used to make important decisions like employee hiring or student acceptance. In addition, the report contained a number of other suggestions regarding a range of topics in the AI field.
The AI Now Institute is a research institute based at NYU with the mission of studying AI’s impact on society. AI Now releases a yearly report presenting its findings on the state of AI research and the ethical implications of how AI is currently being used. As the BBC reported, this year’s report addressed topics like algorithmic discrimination, lack of diversity in AI research, and labor issues.
Affect recognition, the technical term for emotion-detection algorithms, is a rapidly growing area of AI research. Those who employ the technology to make decisions often claim that the systems can draw reliable information about people’s emotional states by analyzing microexpressions, along with other cues like tone of voice and body language. The AI Now institute notes that the technology is being employed across a wide range of applications, like determining who to hire, setting insurance prices, and monitoring if students are paying attention in class.
Prof. Kate Crawford, co-founder of AI Now, explained that it is often believed that human emotions can be accurately predicted with relatively simple models. Crawford said that some firms are basing the development of their software on the work of Paul Ekman, a psychologist who hypothesized there are only six basic types of emotions that register on the face. However, Crawford notes that since Ekman’s theory was introduced, studies have found that there is far greater variability in facial expressions and that expressions can change easily across situations and cultures.
“At the same time as these technologies are being rolled out, large numbers of studies are showing that there is… no substantial evidence that people have this consistent relationship between the emotion that you are feeling and the way that your face looks,” said Crawford to the BBC.
For this reason, the AI Now institute argues that much of affect recognition is based on unreliable theories and questionable science. Hence, it argues, emotion detection systems shouldn’t be deployed until more research has been done, and “governments should specifically prohibit the use of affect recognition in high-stakes decision-making processes”. AI Now argued that we should especially stop using the technology in “sensitive social and political contexts”, which include employment, education, and policing.
At least one AI-development firm specializing in affect recognition, Emteq, agreed that there should be regulation that prevents misuse of the tech. The founder of Emteq, Charles Nduka, explained to the BBC that while AI systems can accurately recognize different facial expressions, there is not a simple map from expression to emotion. Nduka did express worry about regulation being taken too far and stifling research, noting that if “things are going to be banned, it’s very important that people don’t throw out the baby with the bathwater”.
As NextWeb reports, AI Now also recommended a number of other policies and norms that should guide the AI industry moving forward.
AI Now highlighted the need for the AI industry to make workplaces more diverse and stated that workers should be guaranteed a right to voice their concerns about invasive and exploitative AI. Tech workers should also have the right to know if their efforts are being used to construct harmful or unethical work.
AI Now also suggested that lawmakers take steps to require informed consent for the use of any data derived from health-related AI. Beyond this, it was advised that data privacy be taken more seriously and that the states should work to design privacy laws for biometric data covering both private and public entities.
Finally, the institute advised that the AI industry begin thinking and acting more globally, trying to address the larger political, societal, and ecological consequences of AI. It was recommended that there be a substantial effort to account for AI’s impact regarding geographical displacement and climate and that governments should make the climate impact of the AI industry publically available.
Former Intelligence Professionals Use AI To Uncover Human Trafficking
Business-oriented publication Fast Company reports on recent AI developments designed to uncover human trafficking by analyzing online sex ads.
Kara Smith, a senior targeting analyst with DeliverFund, a group of former CIA, NSA, special forces, and law enforcement officers who collaborate with law enforcement to bust sex trafficking operations in the U.S., gave the publication an example of an ad she and her research colleagues analyzed. In the ad, Molly, a ‘new bunny’ in Atlanta, supposedly “loves her job selling sex, domination, and striptease shows to men.”
In their analysis, Smith and her colleagues found clues that Molly is performing all these acts against her will. “For instance, she’s depicted in degrading positions, like hunched over on a bed with her rear end facing the camera.”
Smith adds other examples, like “bruises and bite marks are other telltale signs for some victims. So are tattoos that brand the women as the property of traffickers—crowns are popular images, as pimps often refer to themselves as “kings.” Photos with money being flashed around are other hallmarks of pimp showmanship.”
Until recently, researchers like Smith had to spot markers like these manually. Then, approximately a year ago, her research group, DeliverFund, received an offer from a computer vision startup called XIX to automate the process with the use of AI.
As explained, “the company’s software scrapes images from sites used by sex traffickers and labels objects in images so experts like Smith can quickly search for and review suspect ads. Each sex ad contains an average of three photos, and XIX can scrape and analyze about 4,000 ads per minute, which is about the rate that new ones are posted online.”
DeliverFund got off to a relatively slow start: in its first three years of operation it had only three operatives and was able to uncover four pimps. But after staffing up and starting its cooperation with XIX, in just the first nine months of 2019, “DeliverFund contributed to the arrests of 25 traffickers and 64 purchasers of underage sex. Over 50 victims were rescued in the process.” Among its accomplishments, it also provided assistance in the takedown of Backpage.com, “which had become the top place to advertise sex for hire—both by willing sex workers and by pimps trafficking victims.”
It is also noted that “XIX’s tool helps DeliverFund identify not only the victims of trafficking but also the traffickers. The online ads often feature personally identifiable information about the pimps themselves.”
The report explains that “XIX’s computer vision is a key tool in a digital workflow that DeliverFund uses to research abuse cases and compile what it calls intelligence reports.” Based on these reports, DeliverFund has provided intel to 63 different agencies across the U.S., but it also has a relationship with the attorney general’s offices of Montana, New Mexico, and Texas.
The organization also provides “free training to law officers on how to recognize and research abuse cases and use its digital tools. Participating agencies can research cases on their own and collaborate with other agencies, using a DeliverFund system called PATH (Platform for the Analysis and Targeting of Human Traffickers).”
According to the Human Trafficking Institute, about half of trafficking victims worldwide are minors, and Smith adds that “the overwhelming majority of sex trafficking victims are U.S. citizens.”
New AI Facial Recognition Technology Goes One Step Further
It seems that facial recognition (FR) is the use of artificial intelligence that has advanced the furthest so far. As ZDNet notes, companies like Microsoft have already developed FR technology that can recognize facial expressions with the use of emotion tools. But the limiting factor so far has been that these tools were restricted to eight so-called core states – anger, contempt, fear, disgust, happiness, sadness, surprise, or neutral.
Now Japanese tech developer Fujitsu steps in with AI-based technology that takes facial recognition one step further in tracking expressed emotions.
The existing FR technology is based, as ZDNet explains, on “identifying various action units (AUs) – that is, certain facial muscle movements we make and which can be linked to specific emotions.” In a given example, “if both the AU ‘cheek raiser’ and the AU ‘lip corner puller’ are identified together, the AI can conclude that the person it is analyzing is happy.”
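The AU-to-emotion logic described above can be sketched as a simple rule lookup. The combinations below follow commonly cited Facial Action Coding System (FACS) pairings – e.g. AU6 ‘cheek raiser’ plus AU12 ‘lip corner puller’ for happiness – but a production system would learn and weight such mappings from data rather than hard-code them.

```python
# Rule table: a set of required action units -> an emotion label.
# AU codes follow the Facial Action Coding System (FACS).
EMOTION_RULES = {
    frozenset({"AU6", "AU12"}): "happiness",              # cheek raiser + lip corner puller
    frozenset({"AU1", "AU4", "AU15"}): "sadness",         # inner brow raiser + brow lowerer + lip corner depressor
    frozenset({"AU1", "AU2", "AU5", "AU26"}): "surprise", # brow raisers + upper lid raiser + jaw drop
}

def infer_emotion(detected_aus):
    """Return the first emotion whose required AUs are all present."""
    detected = set(detected_aus)
    for required, emotion in EMOTION_RULES.items():
        if required <= detected:
            return emotion
    return "neutral"

print(infer_emotion(["AU6", "AU12"]))  # happiness
print(infer_emotion(["AU4"]))          # neutral
```

The hard part, per the article, is not this lookup but reliably detecting the AUs themselves in arbitrary images – which is where the training-data bottleneck described below comes in.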
As a Fujitsu spokesperson explained, “the issue with the current technology is that the AI needs to be trained on huge datasets for each AU. It needs to know how to recognize an AU from all possible angles and positions. But we don’t have enough images for that – so usually, it is not that accurate.”
Because of the large amount of data needed to train AI to be effective at detecting emotions, it is very hard for the currently available FR technology to recognize what the examined person is really feeling. And if the person is not sitting in front of the camera and looking straight into it, the task becomes even harder. Many experts have confirmed these problems in recent research.
Fujitsu claims it has found a solution that increases the quality of facial recognition results in detecting emotions. Instead of using a large number of images to train the AI, its newly created tool is designed to “extract more data out of one picture.” The company calls this a ‘normalization process’, which involves converting pictures “taken from a particular angle into images that resemble a frontal shot.”
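Fujitsu has not published its algorithm, but the normalization idea – mapping an off-angle image onto something resembling a frontal view – can be illustrated with a toy inverse warp. In this sketch a known horizontal shear stands in for the real perspective distortion, and nearest-neighbour sampling stands in for proper interpolation and 3D pose estimation.

```python
import numpy as np

def normalize(image, shear):
    """Crudely 'frontalize' an image by undoing a known horizontal
    shear via inverse mapping: for each output pixel, look up where
    it came from in the input. Real systems estimate the pose first
    and apply a full perspective (or 3D model) transform."""
    h, w = image.shape
    out = np.zeros_like(image)
    for y in range(h):
        for x in range(w):
            # inverse of the forward map x' = x - shear*y
            src_x = int(round(x + shear * y))
            if 0 <= src_x < w:
                out[y, x] = image[y, src_x]
    return out

# A vertical bar (stand-in for a frontal feature)...
frontal = np.zeros((4, 8), dtype=int)
frontal[:, 3] = 1

sheared = normalize(frontal, -1.0)   # distort it, as an oblique shot would
restored = normalize(sheared, 1.0)   # the 'normalization' step undoes it
print(np.array_equal(restored, frontal))  # True
```

The payoff described by Fujitsu is that one labeled photo effectively covers many viewing angles, so far fewer training images are needed per action unit.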
As the spokesperson explained, “With the same limited dataset, we can better detect more AUs, even in pictures taken from an oblique angle, and with more AUs, we can identify complex emotions, which are more subtle than the core expressions currently analyzed.”
The company claims that it can now “detect emotional changes as elaborate as nervous laughter, with a detection accuracy rate of 81%, a number which was determined through ‘standard evaluation methods’.” In comparison, according to independent research, Microsoft’s tools have an accuracy rate of 60% and also had problems detecting emotions when working with pictures taken from more oblique angles.
As for potential applications, Fujitsu mentions that its new tools could, among other things, be used for road safety “by detecting even small changes in drivers’ concentration.”