
Former Intelligence Professionals Use AI To Uncover Human Trafficking

Business-oriented publication Fast Company reports on recent AI developments designed to uncover human trafficking by analyzing online sex ads.

Kara Smith, a senior targeting analyst with DeliverFund, a group of former CIA, NSA, special forces, and law enforcement officers who collaborate with law enforcement to bust sex trafficking operations in the U.S., gave the publication an example of an ad she and her research colleagues analyzed. In the ad, Molly, a ‘new bunny’ in Atlanta, supposedly “loves her job selling sex, domination, and striptease shows to men.”

In their analysis, Smith and her colleagues found clues that Molly is performing all these acts against her will. “For instance, she’s depicted in degrading positions, like hunched over on a bed with her rear end facing the camera.”

Smith adds other examples: “Bruises and bite marks are other telltale signs for some victims. So are tattoos that brand the women as the property of traffickers—crowns are popular images, as pimps often refer to themselves as ‘kings.’ Photos with money being flashed around are other hallmarks of pimp showmanship.”

Until recently, researchers like Smith had to spot markers like these manually. Then, approximately a year ago, DeliverFund received an offer from a computer vision startup called XIX to automate the process with the use of AI.

As explained, “the company’s software scrapes images from sites used by sex traffickers and labels objects in images so experts like Smith can quickly search for and review suspect ads. Each sex ad contains an average of three photos, and XIX can scrape and analyze about 4,000 ads per minute, which is about the rate that new ones are posted online.”
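The article does not detail XIX’s implementation, but the label-and-search idea it describes can be sketched in a few lines: run a pretrained object detector over a folder of images, record which labels appear in which images, and let an analyst query that index by keyword. The minimal Python sketch below is an illustration only, not XIX’s system; the stock torchvision detector, the tiny subset of COCO label names, and the local folder of images are all assumptions.

```python
# A minimal sketch of the label-and-index workflow described above (not XIX's
# actual system). Assumes a recent torchvision; the folder name and the label
# subset are placeholders for illustration.
from collections import defaultdict
from pathlib import Path

import torch
from PIL import Image
from torchvision import transforms
from torchvision.models.detection import fasterrcnn_resnet50_fpn

COCO_LABELS = {1: "person", 44: "bottle", 84: "book"}  # tiny illustrative subset

model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
to_tensor = transforms.ToTensor()

def label_image(path, threshold=0.6):
    """Return the set of object labels detected in a single image."""
    img = to_tensor(Image.open(path).convert("RGB"))
    with torch.no_grad():
        detections = model([img])[0]
    return {COCO_LABELS.get(int(label), "other")
            for label, score in zip(detections["labels"], detections["scores"])
            if score >= threshold}

def build_index(image_dir):
    """Map each detected label to the images that contain it."""
    index = defaultdict(set)
    for path in Path(image_dir).glob("*.jpg"):
        for label in label_image(path):
            index[label].add(path.name)
    return index

# An analyst could then pull up every image containing a given object:
# index = build_index("scraped_ads/")
# print(index["person"])
```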

DeliverFund got off to a relatively slow start: in its first three years of operation it had only three operatives and was able to uncover four pimps. But after staffing up and starting its cooperation with XIX, in just the first nine months of 2019, “DeliverFund contributed to the arrests of 25 traffickers and 64 purchasers of underage sex. Over 50 victims were rescued in the process.” Among its accomplishments, it also provided assistance in the takedown of Backpage.com, “which had become the top place to advertise sex for hire—both by willing sex workers and by pimps trafficking victims.”

It is also noted that “XIX’s tool helps DeliverFund identify not only the victims of trafficking but also the traffickers. The online ads often feature personally identifiable information about the pimps themselves.”

The report explains that “XIX’s computer vision is a key tool in a digital workflow that DeliverFund uses to research abuse cases and compile what it calls intelligence reports.” Based on these reports, DeliverFund has provided intel to 63 different agencies across the U.S., and it also has a relationship with the attorney general’s offices of Montana, New Mexico, and Texas.

The organization also provides “free training to law officers on how to recognize and research abuse cases and use its digital tools. Participating agencies can research cases on their own and collaborate with other agencies, using a DeliverFund system called PATH (Platform for the Analysis and Targeting of Human Traffickers).”

According to the Human Trafficking Institute, about half of trafficking victims worldwide are minors, and Smith adds that “the overwhelming majority of sex trafficking victims are U.S. citizens.”

 


New AI Facial Recognition Technology Goes One Step Further


Facial recognition (FR) is arguably the application of artificial intelligence that has advanced the furthest so far. As ZDNet notes, companies like Microsoft have already developed facial recognition technology that can recognize facial expressions with the help of emotion tools. But the limiting factor so far has been that these tools were restricted to eight so-called core states: anger, contempt, fear, disgust, happiness, sadness, surprise, or neutral.

Now Japanese tech developer Fujitsu steps in with AI-based technology that takes facial recognition one step further in tracking expressed emotions.

The existing FR technology is based, as ZDNet explains, on “identifying various action units (AUs) – that is, certain facial muscle movements we make and which can be linked to specific emotions.” In the given example, “if both the AU ‘cheek raiser’ and the AU ‘lip corner puller’ are identified together, the AI can conclude that the person it is analyzing is happy.”
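Neither ZDNet nor Fujitsu publishes code for this, but the rule-of-thumb logic in that example can be illustrated with a toy mapping from FACS-style action units to emotions. The AU numbers below follow the standard FACS convention (AU6 is the cheek raiser, AU12 the lip corner puller); the detector that would supply them is assumed to exist elsewhere, and the rules are deliberately simplified.

```python
# Toy illustration of combining action units (AUs) into emotions, as in the
# example above. The rules are simplified; a real AU detector is assumed to
# supply the `detected_aus` set.
AU_NAMES = {6: "cheek raiser", 12: "lip corner puller",
            4: "brow lowerer", 15: "lip corner depressor"}

EMOTION_RULES = {
    "happiness": {6, 12},   # cheek raiser + lip corner puller
    "sadness":   {4, 15},   # brow lowerer + lip corner depressor (simplified)
}

def infer_emotions(detected_aus):
    """Return every emotion whose required AUs are all present."""
    return [emotion for emotion, required in EMOTION_RULES.items()
            if required <= set(detected_aus)]

print(infer_emotions({6, 12}))     # ['happiness']
print(infer_emotions({4, 6, 15}))  # ['sadness']
```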

As a Fujitsu spokesperson explained, “the issue with the current technology is that the AI needs to be trained on huge datasets for each AU. It needs to know how to recognize an AU from all possible angles and positions. But we don’t have enough images for that – so usually, it is not that accurate.”

Because of the large amount of data needed to train AI to be effective at detecting emotions, it is very hard for the currently available FR to really recognize what the examined person is feeling. And if the person is not sitting in front of the camera and looking straight into it, the task becomes even harder. Many experts have confirmed these problems in recent research.

Fujitsu claims it has found a solution that increases the quality of facial recognition results in detecting emotions. Instead of using a large number of images to train the AI, its newly created tool is designed to “extract more data out of one picture.” The company calls this a ‘normalization process’, which involves converting pictures “taken from a particular angle into images that resemble a frontal shot.”
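Fujitsu has not released the details of its normalization step, but one common way to approximate “frontalizing” an off-angle face is to warp the image so that a few detected landmarks land on fixed frontal positions. The sketch below is a generic illustration of that idea, not Fujitsu’s method; it assumes OpenCV, a separate landmark detector supplying eye and nose coordinates, and arbitrarily chosen canonical positions.

```python
# A rough sketch of landmark-based face normalization (a generic technique,
# not Fujitsu's proprietary method). The landmark coordinates are assumed to
# come from a separate face/landmark detector.
import cv2
import numpy as np

def normalize_face(image, landmarks, size=256):
    """Warp a face so that the eyes and nose tip sit at fixed frontal positions.

    landmarks: float32 array of [[left_eye], [right_eye], [nose_tip]] in pixels.
    """
    canonical = np.float32([
        [0.30 * size, 0.35 * size],  # left eye
        [0.70 * size, 0.35 * size],  # right eye
        [0.50 * size, 0.60 * size],  # nose tip
    ])
    warp = cv2.getAffineTransform(np.float32(landmarks), canonical)
    return cv2.warpAffine(image, warp, (size, size))

# img = cv2.imread("oblique_face.jpg")
# aligned = normalize_face(img, landmarks_from_detector(img))  # hypothetical detector
```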

As the spokesperson explained, “With the same limited dataset, we can better detect more AUs, even in pictures taken from an oblique angle, and with more AUs, we can identify complex emotions, which are more subtle than the core expressions currently analyzed.”

The company claims that it can now “detect emotional changes as elaborate as nervous laughter, with a detection accuracy rate of 81%, a number which was determined through ‘standard evaluation methods’.” In comparison, according to independent research, Microsoft’s tools have an accuracy rate of 60% and also have problems detecting emotions in pictures taken from more oblique angles.

As for potential applications, Fujitsu mentions that its new tools could, among other things, be used for road safety “by detecting even small changes in drivers’ concentration.”


AI Being Used To Personalize Job Training and Education


The landscape of jobs will likely be dramatically transformed by AI in the coming years, and while some jobs will go by the wayside, others will be created. It isn’t yet clear how job automation will impact the economy, or whether more jobs will be created than displaced, but it is obvious that those who work in the positions created by AI will need training to be effective at them.

Displaced workers are going to need training to work in the new AI-related job fields, but how can these workers be trained quickly enough to remain competitive in the workplace? The answer could be more AI, which could help personalize education and training.

Bryan Talebi is the founder and CEO of the startup Ahura AI, which aims to use AI to make online education programs more efficient, targeting them at the specific individuals using them. Talebi explained to SingularityHub that Ahura is in the process of creating a product that will take biometric data from people taking online education programs and use this data to adapt the course material to the individual’s needs.

While there are security and privacy concerns associated with recording and analyzing an individual’s behavioral data, the trade-off would be that, in theory, people would acquire valuable skills much more quickly. Giving learners personalized material and instruction makes it possible to account for their individual needs and means. Talebi explained that Ahura AI’s prototype personalized education system is already showing some impressive results. According to Talebi, Ahura AI’s system helps people learn three to five times faster than current education models allow.

The AI-enhanced learning system developed by Ahura works through a series of cameras and microphones. Most modern mobile devices, tablets, and laptops have cameras and microphones, so there is little additional investment required from users of the platform. The camera is used to track the user’s facial movements, capturing things like eye movements, fidgeting, and micro-expressions. Meanwhile, the microphone tracks voice sentiment, analyzing the learner’s word usage and tone. The idea is that these metrics can be used to detect when a learner is getting bored, disinterested, or frustrated, and to adjust the content to keep the learner engaged.

Talebi explained that Ahura uses the collected information to determine an optimal way to deliver the material to each student of the course. While some people might learn most easily through videos, other people will learn more easily through text, while others will learn best through experience.  The primary goal of Ahura is to shift the format of the content in real-time in order to improve the information retention of the learner, which it does by delivering content that improves attention.

Because Ahura can interpret user facial expressions and body language, it can predict when a user is getting bored and about to switch away to social media. According to Talebi, Ahura can predict with 60% confidence that someone will switch to Instagram or Facebook, ten seconds before they actually do. Talebi acknowledges there is still a lot of work to be done, as Ahura aims to get that figure up to 95% accuracy. However, he believes that Ahura’s performance shows promise.
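Ahura has not published its model, but the kind of short-horizon disengagement prediction Talebi describes can be sketched as a simple classifier over windowed behavioral features. Everything in the sketch below is an assumption made for illustration: the feature names, the synthetic training data, and the use of logistic regression all stand in for whatever Ahura actually does.

```python
# Toy sketch of a disengagement predictor over ten-second feature windows
# (illustrative only; not Ahura AI's system). Features and training data are
# synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

FEATURES = ["gaze_offscreen_ratio", "blink_rate", "fidget_events", "voice_negativity"]

rng = np.random.default_rng(0)
X = rng.random((500, len(FEATURES)))          # one row per ten-second window
# Synthetic label: 1 means the learner switched away within the next window.
y = (X[:, 0] + 0.5 * X[:, 2] + 0.3 * X[:, 3] + rng.normal(0, 0.2, 500) > 1.0).astype(int)

model = LogisticRegression().fit(X, y)

def disengagement_risk(window):
    """Estimated probability that the learner is about to tune out."""
    return float(model.predict_proba([window])[0, 1])

print(disengagement_risk([0.9, 0.4, 0.8, 0.7]))  # restless window -> higher risk
print(disengagement_risk([0.1, 0.3, 0.1, 0.1]))  # engaged window  -> lower risk
```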

Talebi also acknowledges a desire to utilize the same algorithms and design principles used by Twitter, Facebook, and other social media platforms, which may concern some people, as these platforms are designed to be addictive. While creating a more compelling education platform is arguably a nobler goal, there’s also the issue that the platform itself could become addictive. Moreover, there’s a concern about the potential to misuse such sensitive information in general. Talebi said that Ahura is sensitive to these concerns and that the company finds it incredibly important that the data it collects is never misused, noting that some investors immediately began inquiring about the marketing potential of the platform.

“It’s important that we don’t use this technology in those ways. We’re aware that things can go sideways, so we’re hoping to put up guardrails to ensure our system is helping and not harming society,” Talebi said.

Talebi explained that the company wants to create an ethics board that can review the ways the data the company collects is used. Talebi said the board should be diverse in thought, gender, and background, and that it should “have teeth”, to help ensure that their software is being designed ethically.

Ahura is currently in the process of developing its alpha prototypes, and the company hopes that during beta testing it will be available to over 200,000 users in a large-scale trial against a control group. The company also hopes to expand the kinds of biometric data it uses for its system, planning to log data from things like sleep patterns, heart rate, facial flushing, and pupil dilation.


Scientists Use Modified Facial Recognition Techniques To Discover Dark Matter


If there is one use of artificial intelligence the general public is familiar with, it is facial recognition. Whether it is unlocking their mobile phones or the algorithms Facebook uses to find eyes or other parts of a face in images, facial recognition has become a standard.

But now scientists dealing with complex questions like the composition of the universe are starting to use a modified version of ‘standard’ facial recognition in an attempt to discover how much dark matter there is in the universe and where it might be located.

As Digital Trends and Futurity note in their reports on the subject, “physicists believe that understanding this mysterious substance is necessary to explain fundamental questions about the underlying structure of the universe.”

Researchers in Alexandre Refregier’s group at the Institute of Particle Physics and Astrophysics at ETH Zurich, Switzerland, have started to use the deep neural network methods that lie behind facial recognition to develop new, specialized tools in an attempt to uncover what is still one of the universe’s secrets.

As Janis Fluri, one of the researchers working on the project, told Digital Trends, “The algorithm we [use] is very close to what is commonly used in facial recognition,” adding that “the beauty of A.I. is that it can learn from basically any data. In facial recognition, it learns to recognize eyes, mouths, and noses, while we are looking for structures that give us hints about dark matter. This pattern recognition is essentially the core of the algorithm. Ultimately, we only adapted it to infer the underlying cosmological parameters.”

As is explained, the scientists hypothesize that dark matter accounts for around 27% of the universe, outweighing visible matter by a ratio of approximately six to one. The theory also goes that dark matter gives the galaxies “the extra mass they require to not tear themselves apart like a suicidal paper bag. It is what drives normal matter in the form of dust and gas to collect and assemble into stars and galaxies.”

What the researchers are looking for are the areas around the clusters of galaxies that appear warped. By using reverse-engineering “they can then isolate where they believe the densest concentrations of matter, both visible and invisible, can be found.”

Fluri and Tomasz Kacprzak, another researcher in the group, explained that they trained their neural network by feeding it computer-generated data that simulates the universe. Repeated analysis of the resulting dark matter maps made it possible to extract ‘cosmological parameters’ from real images of the sky.
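The group’s actual architecture is described in its papers rather than in the Digital Trends and Futurity coverage, but the basic idea, regressing cosmological parameters from simulated mass maps with a convolutional network, can be sketched as follows. The network size, the random stand-in maps, and the two-parameter target (for example, Omega_m and sigma_8) are assumptions made purely for illustration.

```python
# Minimal sketch of regressing cosmological parameters from simulated mass
# maps with a small CNN (illustrative only; not the ETH Zurich group's actual
# network). The random tensors stand in for simulated convergence maps and
# their true (Omega_m, sigma_8) values.
import torch
import torch.nn as nn

class MapRegressor(nn.Module):
    def __init__(self, n_params=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, n_params)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = MapRegressor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
maps = torch.randn(64, 1, 64, 64)   # placeholder for simulated mass maps
params = torch.rand(64, 2)          # placeholder for the true parameters per map
for _ in range(5):                  # toy training loop
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(maps), params)
    loss.backward()
    optimizer.step()
# After training on realistic simulations, the same network would be applied
# to real maps, such as those derived from the KiDS-450 survey.
```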

Compared to standard methods based on human-made statistical analysis, their results showed a 30% improvement. As Fluri explained, “the A.I. algorithm needs a lot of data to learn in the training phase. It is very important that this training data, in our case simulations, are as accurate as possible. Otherwise, it will learn features that are not present in real data.”

After training the network, they fed it actual dark matter maps obtained from the KiDS-450 dataset, made using the VLT Survey Telescope (VST) in Chile. This dataset covers a total area some 2,200 times the size of the full moon and contains records of around 15 million galaxies.

As Futurity explains, by repeatedly analyzing the dark matter maps, the neural network taught itself to look for the right kind of features in them and to extract more and more of the desired information. “In the Facebook analogy, it got better at distinguishing random oval shapes from eyes or mouths.”

 
