
Facial Recognition

AI is Moving Deeper Into Human Emotion


Researchers at the University of Colorado and Duke University have developed a neural network that can accurately decode images into 11 different human emotion categories. The research team included Phillip A. Kragel, Marianne C. Reddan, Kevin S. LaBar, and Tor D. Wager.

Phillip Kragel describes neural networks as computer models that map input signals to an output of interest by learning a series of filters. When a network is trained to detect a particular kind of image or object, it learns the features that distinguish it, such as shape, color, and size.
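As a rough illustration of this idea, the sketch below shows how a small convolutional classifier maps an image to one score per emotion category. This is not EmoNet itself; it assumes PyTorch, the layer sizes are arbitrary, and the category count simply mirrors the 27 categories in the video database described below.

```python
# A minimal sketch of a convolutional emotion classifier (not EmoNet itself).
# Assumes PyTorch; layer sizes are illustrative.
import torch
import torch.nn as nn

NUM_CATEGORIES = 27  # mirrors the 27 categories in the training database

class TinyEmotionNet(nn.Module):
    def __init__(self, num_categories: int = NUM_CATEGORIES):
        super().__init__()
        # Convolutional filters learn local visual features (edges, colors, textures).
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # A linear layer maps the pooled features to one score per emotion category.
        self.classifier = nn.Linear(32, num_categories)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

# Two 224x224 RGB frames in, a (2, 27) matrix of category scores out.
scores = TinyEmotionNet()(torch.randn(2, 3, 224, 224))
print(scores.shape)
```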

The new convolutional neural network has been named EmoNet, and it was trained on visual images. The research team used a database of 2,185 videos spanning 27 different emotion categories, from which they extracted 137,482 frames that were divided into training and testing samples. These were not just basic emotions; the set also included many complex ones. The emotion categories included anxiety, awe, boredom, confusion, craving, disgust, empathetic pain, entrancement, excitement, fear, horror, interest, joy, romance, sadness, sexual desire, and surprise.

The model was able to detect some emotions, such as craving and sexual desire, with high confidence, but it had trouble with others, such as confusion and surprise. To categorize the images into emotions, the neural network relied on cues such as color, spatial power spectra, and the presence of objects and faces in the images.

To build on the research, the team measured the brain activity of 18 people who were shown 112 different images. They then showed the same images to the EmoNet network and compared the results between the two.

We already use apps and programs every day that read our faces and expressions, for facial recognition, AI-driven photo manipulation, and unlocking our smartphones. This new development takes that much further, with the possibility of reading not only a face’s physical features but also a person’s emotions and feelings. It is an exciting but also troubling prospect: we already worry about facial recognition and what can happen with that data, and privacy concerns will surely grow.

Privacy risks aside, this development could help in many areas. For one, researchers often rely on participants reporting their own emotions; with this technology, they could instead infer a participant’s emotions from images of their face, reducing errors in the resulting research data.

“When it comes to measuring emotions, we’re typically still limited only to asking people how they feel,” said Tor Wager, one of the researchers on the team. “Our work can help move us towards direct measures of emotion-related brain processes.”

The research could also help shift mental health diagnosis away from subjective labels like “anxiety” and toward underlying brain processes.

“Moving away from subjective labels such as ‘anxiety’ and ‘depression’ towards brain processes could lead to new targets for therapeutics, treatments, and interventions,” said Phillip Kragel, another of the researchers.

This new neural network is just one of many exciting developments in artificial intelligence. Researchers are constantly pushing the technology further, and it will make an impact in every area of our lives. The newest developments are taking AI deeper into human behavior and emotion: while we mostly associate AI with the physical realm, such as robotic muscles, arms, and other parts of the body, the technology is now reaching into the human psyche.

 


Alex McFarland is a historian and journalist covering the newest developments in artificial intelligence.

Artificial Neural Networks

New Technique Lets AI Intuitively Understand Some Physics


Artificial intelligence has been able to develop an understanding of physics through reinforcement learning for some time now, but a new technique developed by researchers at MIT could help engineers design models that demonstrate an intuitive understanding of physics.

Psychological research has shown that, to some extent, humans have an intuitive understanding of the laws of physics. Infants have expectations about how objects should interact and move, and violations of these expectations cause them to react with surprise. The MIT team’s research has the potential not only to drive new applications of artificial intelligence but also to help psychologists understand how infants perceive and learn about the world.

The model designed by the MIT team is called ADEPT, and it functions by making predictions about how objects should behave in a physical space. The model observes objects and keeps track of a “surprise” metric as it does so. If something unexpected happens, the model responds by increasing its surprise value. Unexpected and seemingly impossible events, such as an object teleporting or vanishing altogether, produce a dramatic rise in surprise.

The goal of the research team was to get their model to register the same levels of surprise that humans register when they see objects behaving in implausible ways.

ADEPT has two major components: a physics engine and an inverse graphics module. The physics engine is responsible for predicting how an object will move, generating a future representation of the object from a range of possible states. Meanwhile, the inverse graphics module is responsible for creating the representations of objects that are fed into the physics engine.

The inverse graphics module tracks several attributes of an object, such as velocity, shape, and orientation, extracting this information from frames of video. It focuses only on the most salient details, ignoring anything that won’t help the physics engine interpret the object and predict new states; by concentrating on the most important details, the model is better able to generalize to new objects. The physics engine then takes these object descriptions and simulates more complex physical behavior, like fluidity or rigidity, in order to predict how the object should behave.
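The two-stage structure described above can be sketched schematically as follows. This is not the actual ADEPT code: the attribute names are assumptions, and a trivial constant-velocity step stands in for the learned physics engine.

```python
# A schematic sketch of the inverse-graphics -> physics-engine pipeline (not ADEPT).
from dataclasses import dataclass
from typing import List

@dataclass
class ObjectState:
    position: tuple      # (x, y) location in the scene
    velocity: tuple      # (vx, vy) estimated from recent frames
    shape: str           # coarse shape label, e.g. "cube" or "ball"
    orientation: float   # rotation angle in radians

def inverse_graphics(frame) -> List[ObjectState]:
    """Placeholder for the inverse graphics step: reduce a video frame to the
    salient attributes of each object, ignoring details the physics step
    does not need (texture, lighting, background)."""
    raise NotImplementedError

def physics_engine(objects: List[ObjectState], dt: float = 1.0) -> List[ObjectState]:
    """Predict the next state of each object; here just a trivial
    constant-velocity step standing in for a learned physics model."""
    return [
        ObjectState(
            position=(o.position[0] + o.velocity[0] * dt,
                      o.position[1] + o.velocity[1] * dt),
            velocity=o.velocity,
            shape=o.shape,
            orientation=o.orientation,
        )
        for o in objects
    ]
```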

After this intake process, the model observes the actual next frame of the video and uses it to recalculate its probability distribution over possible object behaviors. Surprise is inversely related to the probability the model assigned to the event that actually occurred, so it registers great surprise only when there is a major mismatch between what it believes should happen next and what actually happens.
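One common way to turn such a probability into a surprise score is the negative log probability. The snippet below is a minimal illustration of that idea, not necessarily ADEPT’s exact metric.

```python
import math

def surprise(probability: float) -> float:
    """Map the probability the model assigned to what actually happened onto a
    surprise score: the lower the probability, the larger the surprise.
    (Negative log probability is an assumption here, not necessarily ADEPT's
    exact metric.)"""
    return -math.log(max(probability, 1e-12))  # clamp to avoid log(0)

print(round(surprise(0.9), 3))    # expected event      -> small surprise (0.105)
print(round(surprise(0.001), 3))  # near-impossible one -> large surprise (6.908)
```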

The research team needed some way to compare the surprise of their model to the surprise of people observing the same object behavior. In developmental psychology, researchers often test infants by showing them two different videos: in one, an object behaves as you would expect objects to in the real world, not spontaneously vanishing or teleporting; in the other, an object violates the laws of physics in some fashion. The research team took these same basic concepts and had 60 adults watch 64 different videos of both expected and unexpected physical behavior. The participants were then asked to rate their surprise at various moments in the video on a scale of 1 to 100.

Analysis of the model’s performance showed that it did quite well on videos where an object was moved behind a wall and had disappeared when the wall was removed, typically matching human surprise levels in these cases. The model also appeared to be surprised by videos where humans didn’t demonstrate surprise but arguably should have. For example, for an object to move behind a wall at a given speed and immediately come out on the other side, it must have either teleported or experienced a dramatic increase in speed.

When the researchers compared ADEPT to traditional neural networks that can learn from observation but do not explicitly log object representations, they found that ADEPT was much more accurate at discriminating between surprising and unsurprising scenes, and that its performance aligned more closely with human reactions.

The MIT research team is aiming to do more research and gain deeper insight into how infants observe the world around them and learn from these observations, incorporating their findings into new versions of the ADEPT model.


Artificial Neural Networks

Google Creates New Explainable AI Program To Enhance Transparency and Debuggability


Google recently announced the creation of a new cloud platform intended to make it easier to gain insight into how an AI program renders decisions, making programs easier to debug and enhancing transparency. As reported by The Register, the cloud platform is called Explainable AI, and it marks a major attempt by Google to invest in AI explainability.

Artificial neural networks are used in many, perhaps most, of the major AI systems deployed in the world today. The neural networks that run major AI applications can be extraordinarily complex and large, and as a system’s complexity grows it becomes harder and harder to intuit why the system has made a particular decision. As Google explains in its white paper, as AI systems become more powerful, they also become more complex and hence harder to debug. Transparency is also lost when this occurs, which means that biased algorithms can be difficult to recognize and address.

The fact that the reasoning which drives the behavior of complex systems is so hard to interpret often has drastic consequences. In addition to making it hard to combat AI bias, it can make it extraordinarily difficult to tell spurious correlations from genuinely important and interesting correlations.

Many companies and research groups are exploring how to address the “black box” problem of AI and create systems that adequately explain why an AI has made certain decisions. Google’s Explainable AI platform is its own bid to tackle this challenge. Explainable AI comprises three tools. The first is a system that describes which features the model has used and displays an attribution score representing how much influence each feature had on the final prediction. Google’s report on the tool gives the example of predicting how long a bike ride will last based on variables like rainfall, current temperature, day of the week, and start time. After the network renders its decision, feedback is given showing which features had the most impact on the prediction.
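The attribution idea can be illustrated with a toy example. The sketch below is not Google’s Explainable AI API: the predicted_ride_minutes() model, the baseline values, and the numbers are all made up, and the attribution here is a simple “swap one feature for its baseline and measure the change in the prediction” score.

```python
# A toy illustration of attribution scores (not Google's Explainable AI API).
def predicted_ride_minutes(features: dict) -> float:
    """A hypothetical ride-duration model used only for illustration."""
    return (30.0
            + 8.0 * features["rainfall_mm"]
            - 0.5 * features["temperature_c"]
            + 3.0 * (1.0 if features["is_weekend"] else 0.0))

def attribution_scores(features: dict, baseline: dict) -> dict:
    """Score each feature by how much swapping it for its baseline value
    changes the model's output."""
    full = predicted_ride_minutes(features)
    scores = {}
    for name in features:
        perturbed = dict(features, **{name: baseline[name]})
        scores[name] = full - predicted_ride_minutes(perturbed)
    return scores

example = {"rainfall_mm": 2.0, "temperature_c": 15.0, "is_weekend": True}
baseline = {"rainfall_mm": 0.0, "temperature_c": 20.0, "is_weekend": False}
print(attribution_scores(example, baseline))
# -> {'rainfall_mm': 16.0, 'temperature_c': 2.5, 'is_weekend': 3.0}
```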

How does this tool provide such feedback for image data? In that case, the tool produces an overlay that highlights the regions of the image that weighed most heavily in the rendered decision.

Another tool in the toolkit is the “What-If” tool, which displays potential fluctuations in model performance as individual attributes are manipulated. Finally, the third tool can be set up to send sample results to human reviewers on a consistent schedule.
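A what-if analysis of this kind can be sketched by sweeping one attribute while holding the rest fixed. The snippet below reuses the hypothetical predicted_ride_minutes() model and example from the previous sketch and is purely illustrative.

```python
# Sweep one attribute and record how the (hypothetical) prediction changes.
def what_if_sweep(features: dict, name: str, values) -> dict:
    return {v: predicted_ride_minutes(dict(features, **{name: v})) for v in values}

print(what_if_sweep(example, "rainfall_mm", [0.0, 1.0, 2.0, 5.0]))
# -> {0.0: 25.5, 1.0: 33.5, 2.0: 41.5, 5.0: 65.5}
```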

Dr. Andrew Moore, Google’s chief scientist for AI and machine learning, described the inspiration for the project. Moore explained that around five years ago the academic community started to become concerned about the harmful byproducts of AI use, and that Google wanted to ensure its systems were only being used in ethical ways. Moore described an incident in which the company was designing a computer vision program to alert construction workers if someone wasn’t wearing a helmet, but became concerned that the monitoring could be taken too far and become dehumanizing. Moore said there was a similar reason Google decided not to release a general face recognition API: the company wanted more control over how its technology was used and to ensure it was only being used ethically.

Moore also highlighted why it is so important for an AI’s decisions to be explainable:

“If you’ve got a safety critical system or a societally important thing which may have unintended consequences, if you think your model’s made a mistake, you have to be able to diagnose it. We want to explain carefully what explainability can and can’t do. It’s not a panacea.”


Artificial Neural Networks

AI Can Avoid Specific Unwanted Behaviors With New Algorithms


As artificial intelligence algorithms and systems become more sophisticated and take on bigger responsibilities, it becomes more and more important to ensure that AI systems avoid dangerous, unwanted behavior. Recently, a team of researchers from the University of Massachusetts Amherst and Stanford published a paper demonstrating how specific AI behaviors can be avoided, using a technique that lets users give precise mathematical instructions to tweak the behavior of an AI.

According to TechXplore, the research is predicated on the assumption that unfair or unsafe behaviors can be defined with mathematical functions and variables. If this is true, it should be possible to train systems to avoid those specific behaviors. The research team aimed to develop a toolkit that users of an AI could employ to specify which behaviors they want the AI to avoid, and that would enable AI engineers to reliably train a system to avoid unwanted actions in real-world scenarios, as sketched below.
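In outline, that workflow can be sketched as follows. This is a highly simplified illustration under assumed function names, not the paper’s actual algorithm: the high-confidence statistical test the researchers use is replaced here by a plain empirical check on held-out data.

```python
# Simplified sketch of training under a user-specified behavioral constraint.
def train_with_behavioral_constraint(train_fn, unwanted_behavior,
                                     candidate_data, safety_data,
                                     threshold: float):
    """Return a trained model only if its measured unwanted behavior on
    held-out safety data stays below the user-supplied threshold."""
    model = train_fn(candidate_data)                  # fit a candidate model
    measured = unwanted_behavior(model, safety_data)  # user-defined behavior measure
    if measured <= threshold:
        return model  # acceptable: the constraint holds on held-out data
    return None       # "No Solution Found": decline to return an unsafe model
```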

Phillip Thomas, the first author on the paper and an assistant professor of computer science at the University of Massachusetts Amherst, explained that the research team aims to show that the designers of machine learning algorithms can make it easier for users to describe unwanted behaviors and be highly confident that the AI system will avoid them.

The research team tested their technique by applying it to a common problem in data science: gender bias. They aimed to make algorithms that predict college students’ GPAs fairer by reducing gender bias. Using an experimental dataset, they instructed their AI system to avoid creating models that systematically underestimated or overestimated GPAs for one gender. As a result of the researchers’ instructions, the algorithm produced a model that better predicted student GPAs and had substantially less systematic gender bias than existing models. Previous GPA-prediction models suffered from bias because the bias-reduction methods applied to them were often too limited to be useful, or because no bias reduction was used at all.
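Continuing the sketch above, the unwanted behavior for the GPA example could be expressed as a function like the one below, measuring how far apart the mean prediction errors are for the two gender groups. The field names and model interface are assumptions for illustration only.

```python
# A hypothetical "unwanted behavior" measure for the GPA example.
def gpa_gender_bias(model, data) -> float:
    """Absolute gap between the mean GPA prediction errors of the two gender
    groups; large values mean one group is systematically over- or
    under-estimated. (Hypothetical fields: 'gender', 'gpa'; hypothetical
    model.predict() interface.)"""
    errors = {"female": [], "male": []}
    for student in data:
        errors[student["gender"]].append(model.predict(student) - student["gpa"])
    mean = lambda values: sum(values) / len(values)
    return abs(mean(errors["female"]) - mean(errors["male"]))
```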

A second algorithm developed by the research team was implemented in an automated insulin pump and was intended to balance performance and safety. Automated insulin pumps need to decide how large a dose of insulin a patient should be given. After eating, the pump should ideally deliver a dose just large enough to keep blood sugar levels in check; the doses it delivers must be neither too large nor too small.

Machine learning algorithms are already proficient at identifying patterns in an individual’s response to insulin doses, but existing analysis methods can’t let doctors specify outcomes that should be avoided, such as low blood sugar crashes. In contrast, the research team was able to develop a method that could be trained to deliver insulin doses that stay within the two extremes, preventing both underdosing and overdosing. While the system isn’t ready for testing in real patients just yet, a more sophisticated AI based on this approach could improve quality of life for people with diabetes.

In the research paper, the researchers refer to the algorithms as “Seldonian” algorithms. This is a reference to science fiction author Isaac Asimov; the implication, echoing his laws of robotics, is that the AI system “may not injure a human being or, through inaction, allow a human being to come to harm.” The research team hopes that their framework will allow AI researchers and engineers to create a variety of algorithms and systems that avoid dangerous behavior. Emma Brunskill, senior author of the paper and Stanford assistant professor of computer science, explained to TechXplore:

“We want to advance AI that respects the values of its human users and justifies the trust we place in autonomous systems.”
