
New AI Facial Recognition Technology Goes One Step Further


It seems that facial recognition (FR) is one of the areas where the use of artificial intelligence has advanced the furthest so far. As ZDNet notes, companies like Microsoft have already developed facial recognition technology that can read facial expressions with the use of emotion tools. The limiting factor, however, has been that these tools were restricted to eight so-called core states: anger, contempt, fear, disgust, happiness, sadness, surprise, and neutral.

Now Japanese tech developer Fujitsu steps in with AI-based technology that takes facial recognition one step further in tracking expressed emotions.

The existing FR technology is based, as ZDNet explains, on “identifying various action units (AUs) – that is, certain facial muscle movements we make and which can be linked to specific emotions.” In a given example, “if both the AU ‘cheek raiser’ and the AU ‘lip corner puller’ are identified together, the AI can conclude that the person it is analyzing is happy.”

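To make this concrete, here is a minimal Python sketch of the kind of AU-to-emotion rule described above, using Facial Action Coding System (FACS) numbering (AU6 is the “cheek raiser”, AU12 the “lip corner puller”). The rule table and the infer_emotion helper are illustrative assumptions; real systems learn these mappings from annotated data rather than hard-coding them.

# Illustrative mapping from detected FACS action units to emotion labels.
# AU6 = cheek raiser, AU12 = lip corner puller; together they suggest happiness.
# These rules and the helper below are a sketch, not a production classifier.
EMOTION_RULES = {
    frozenset({6, 12}): "happiness",    # cheek raiser + lip corner puller
    frozenset({1, 4, 15}): "sadness",   # inner brow raiser + brow lowerer + lip corner depressor
    frozenset({4, 5, 7, 23}): "anger",  # brow lowerer + upper lid raiser + lid tightener + lip tightener
}

def infer_emotion(detected_aus):
    """Return the first emotion whose defining AUs are all present."""
    for aus, emotion in EMOTION_RULES.items():
        if aus <= detected_aus:
            return emotion
    return "neutral"

# Example: an upstream AU detector reported AU6 and AU12 in a frame.
print(infer_emotion({6, 12}))  # -> "happiness"
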
As a Fujitsu spokesperson explained, “the issue with the current technology is that the AI needs to be trained on huge datasets for each AU. It needs to know how to recognize an AU from all possible angles and positions. But we don't have enough images for that – so usually, it is not that accurate.”

Given the large amount of data needed to train AI to detect emotions effectively, it is very hard for the currently available FR tools to recognize what the examined person is really feeling. And if the person is not sitting in front of the camera and looking straight into it, the task becomes even harder. Recent research has repeatedly confirmed these problems.

Fujitsu claims it has found a solution that improves the quality of facial recognition results in detecting emotions. Instead of using a large number of images to train the AI, its newly created tool is designed to “extract more data out of one picture.” The company calls this a ‘normalization process’, which involves converting pictures “taken from a particular angle into images that resemble a frontal shot.”

As the spokesperson explained, “With the same limited dataset, we can better detect more AUs, even in pictures taken from an oblique angle, and with more AUs, we can identify complex emotions, which are more subtle than the core expressions currently analyzed.”

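Fujitsu has not published the details of its normalization step, but the general idea – warping a face photographed at an angle toward a canonical frontal layout – can be sketched with a simple landmark-based affine warp in OpenCV. The target coordinates and file name below are illustrative assumptions, not the company's actual pipeline.

# A rough sketch of angle normalization: warp a face detected at an oblique
# angle toward a canonical frontal layout using three facial landmarks.
# Fujitsu's actual method is unpublished; this is only an illustration.
import cv2
import numpy as np

# Canonical frontal positions (pixels) for left eye, right eye and nose tip
# in a 160x160 output image; these target coordinates are assumptions.
FRONTAL_LANDMARKS = np.float32([[50, 60], [110, 60], [80, 105]])

def normalize_face(image, landmarks):
    """Warp `image` so the given landmarks land on the frontal layout.

    `landmarks` is a 3x2 float32 array (left eye, right eye, nose tip)
    produced by any face landmark detector, assumed to run upstream.
    """
    matrix = cv2.getAffineTransform(landmarks, FRONTAL_LANDMARKS)
    return cv2.warpAffine(image, matrix, (160, 160))

# Example: normalize an oblique shot before running the AU detector on it,
# so the detector does not need training images from every camera angle.
image = cv2.imread("oblique_face.jpg")  # hypothetical input file
landmarks = np.float32([[72, 84], [121, 70], [101, 118]])
frontal_like = normalize_face(image, landmarks)
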
The company claims that it can now “detect emotional changes as elaborate as nervous laughter, with a detection accuracy rate of 81%”, a number determined through “standard evaluation methods”. In comparison, according to independent research, Microsoft's tools have an accuracy rate of around 60% and also struggle to detect emotions in pictures taken from more oblique angles.

As for potential applications, Fujitsu mentions that its new tools could, among other things, be used to improve road safety “by detecting even small changes in drivers' concentration.”

Former diplomat and translator for the UN, currently freelance journalist/writer/researcher, focusing on modern technology, artificial intelligence, and modern culture.