Deep Learning System Learns Better When Distracted - Unite.AI


Deep Learning System Learns Better When Distracted


Computer scientists from the Netherlands and Spain have shown that a deep learning system can learn better when it is distracted. The artificial intelligence (AI) system is aimed at image recognition and learns to recognize its surroundings. The team was able to simplify the learning process by forcing the system to focus on secondary characteristics.

Convolutional Neural Networks

The system relies on convolutional neural networks (CNNs), a form of deep learning widely used in image-recognition AI.

Estefanía Talavera Martinez is a lecturer and researcher at the Bernoulli Institute for Mathematics, Computer Science and Artificial Intelligence of the University of Groningen in the Netherlands.

"These CNNs are successful, but we don't fully understand how they work," says Talavera Martinez.

Talavera Martinez has used CNNs to analyze images that come from wearable cameras while studying human behavior. Some of her work has revolved around studying human interactions with food, so she set out to make the system recognize the different settings in which people encounter food.

"I noticed that the system made errors in the classification of some pictures and I needed to know why this happened," she said.

She made use of heat maps and analyzed which parts of the images were used by CNNs to identify the setting.
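The article does not specify which heat-map technique was used. One common, model-agnostic way to produce such maps is occlusion sensitivity: hide one patch of the image at a time and measure how much the classifier's score drops. The sketch below is an illustrative assumption, not the authors' pipeline; `occlusion_heatmap` and the toy scoring function are hypothetical names, and the toy model simply stands in for a CNN that keys on one tell-tale region.

```python
import numpy as np

def occlusion_heatmap(image, score_fn, patch=4, fill=0.0):
    """Slide an occluding patch over the image and record how much the
    model's score drops when each region is hidden. Large drops mark
    the regions the model relied on for its classification."""
    h, w = image.shape
    base = score_fn(image)
    heat = np.zeros((h, w))
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            occluded = image.copy()
            occluded[y:y+patch, x:x+patch] = fill  # hide this region
            heat[y:y+patch, x:x+patch] = base - score_fn(occluded)
    return heat

# Toy "model": scores an image by the brightness of its top-left corner,
# standing in for a CNN that keys on one tell-tale object (e.g. a mug).
toy_score = lambda img: img[:8, :8].mean()

rng = np.random.default_rng(0)
img = rng.random((16, 16))
heat = occlusion_heatmap(img, toy_score)
# The heat map concentrates where the toy model actually looks.
assert heat[:8, :8].sum() > heat[8:, 8:].sum()
```

Reading such a map region by region is what lets a researcher notice, as described below, that the network is leaning on a single object rather than the scene as a whole.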

“This led to the hypothesis that the system was not looking at enough details,” she said.

One example given was that of an AI system that taught itself to use mugs to identify a kitchen. In this case, the AI could wrongly classify living rooms and offices as kitchens, since those settings also often contain mugs.

Talavera Martinez and her team then set out to develop a solution. Her colleagues included David Morales and Beatriz Remeseiro, both in Spain. The proposed solution was to distract the system from its primary targets.

Developing the Solution

The team trained CNNs with a standard image set of planes or cars, and they identified which parts of the images were used for classification through heat maps. These parts of the images were then blurred in the image set, and the image set was used for a second round of training. 
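The second-round step described above (take the first round's heat map, blur the regions the network relied on, and retrain on the altered images) can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' actual code: `blur_salient_regions`, the box blur, and the 90th-percentile threshold are all hypothetical choices standing in for whatever blurring and masking the team used.

```python
import numpy as np

def box_blur(image, k=5):
    """Simple k-by-k box blur using shifted sums (no external deps)."""
    pad = k // 2
    padded = np.pad(image, pad, mode='edge')
    out = np.zeros_like(image, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + image.shape[0], dx:dx + image.shape[1]]
    return out / (k * k)

def blur_salient_regions(image, heatmap, quantile=0.9):
    """Blur only the pixels the first-round CNN relied on most, so a
    second round of training must find other identifying features."""
    threshold = np.quantile(heatmap, quantile)
    mask = heatmap >= threshold          # the "distracting" regions
    return np.where(mask, box_blur(image), image)

rng = np.random.default_rng(0)
img = rng.random((16, 16))
heat = rng.random((16, 16))              # stand-in for a first-round heat map
out = blur_salient_regions(img, heat)

# Pixels outside the salient mask are left untouched.
mask = heat >= np.quantile(heat, 0.9)
assert np.array_equal(out[~mask], img[~mask])
```

The altered images would then simply replace the originals in the second training round, which is what makes the approach lightweight: no extra networks, just a modified training set.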

“This forces the system to look elsewhere for identifiers. And by using this extra information, it becomes more fine-grained in its classification,” Talavera Martinez said.

The new system worked well on the standard image sets and was also successful on the images collected from wearable cameras.

"Our training regime gives us results similar to other approaches, but is much simpler and requires less computing time," she said.

Previous attempts to improve fine-grained classification have focused on combining different sets of CNNs, but the newly developed approach is far more lightweight.

“This study gave us a better idea of how these CNNs learn, and that has helped us to improve the training program,” Talavera Martinez said.

Alex McFarland is a tech writer who covers the latest developments in artificial intelligence. He has worked with AI startups and publications across the globe.