

Common Assumptions on Machine Learning Malfunctions Could be Wrong


Deep neural networks are one of the most fundamental aspects of artificial intelligence (AI), as they are used to process images and data through mathematical modeling. They are responsible for some of the greatest advancements in the field, but they also malfunction in various ways. These malfunctions can range from a small or non-existent impact, such as a simple misidentification, to a far more dramatic and deadly one, such as a malfunction in a self-driving vehicle.

New research coming out of the University of Houston suggests that our common assumptions on these malfunctions may be wrong, which could help evaluate the reliability of the networks in the future.

The paper was published in Nature Machine Intelligence in November.

“Adversarial Examples”

Machine learning and other types of AI are crucial in many sectors and tasks, such as banking and cybersecurity systems. According to Cameron Buckner, an associate professor of philosophy at UH, it is essential to understand the failures brought on by “adversarial examples.” Adversarial examples occur when a deep neural network misjudges images or other data after encountering information outside the training inputs used to develop the network.

Adversarial examples are rare, and in many cases they can only be created or discovered by another machine learning network.
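The paper itself is about interpreting these events rather than producing them, but the standard way adversarial examples are crafted is by nudging an input in the direction that most increases a network's loss. The following is a minimal sketch of the well-known fast gradient sign method (FGSM) in PyTorch, offered purely as illustration; the model, tensors, and epsilon value are placeholders and are not details from Buckner's work.

```python
import torch
import torch.nn.functional as F

def fgsm_adversarial_example(model, image, label, epsilon=0.03):
    """Craft an adversarial example with the fast gradient sign method (FGSM).

    `model` is any differentiable classifier, `image` a batched input tensor,
    `label` the true class index tensor, and `epsilon` the perturbation size.
    All of these are illustrative placeholders.
    """
    image = image.clone().detach().requires_grad_(True)

    # Forward pass and loss with respect to the true label.
    logits = model(image)
    loss = F.cross_entropy(logits, label)

    # Backward pass: gradient of the loss with respect to the input pixels.
    loss.backward()

    # Nudge each pixel a small step in the direction that increases the loss.
    perturbed = image + epsilon * image.grad.sign()

    # Keep pixel values in a valid range.
    return perturbed.clamp(0, 1).detach()
```

Applied to a trained image classifier, a perturbation of this kind is typically imperceptible to a human viewer yet can flip the network's predicted label.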

“Some of these adversarial events could instead be artifacts, and we need to better know what they are in order to know how reliable these networks are,” Buckner wrote.

In other words, Buckner argues that the malfunction could be caused by an interaction between the actual patterns involved and what the network sets out to process, meaning it is not a complete mistake.

Patterns as Artifacts

“Understanding the implications of adversarial examples requires exploring a third possibility: that at least some of these patterns are artifacts,” Buckner said. “Thus, there are presently both costs in simply discarding these patterns and dangers in using them naively.”

Although it is not always the cause, intentional malfeasance poses the highest risk when these adversarial events lead to machine learning malfunctions.

“It means malicious actors could fool systems that rely on an otherwise reliable network,” Buckner said. “That has security applications.”

This could mean hackers breaching a security system based on facial recognition technology, or traffic signs being mislabeled to confuse autonomous vehicles.

Previous research has demonstrated that some adversarial examples occur naturally, arising when a machine learning system misinterprets data through an unanticipated interaction rather than through errors in the data. These naturally occurring examples are rare, and the only current way to discover them is through AI.

However, Buckner says that researchers need to rethink the ways in which they address anomalies.

Buckner explains these anomalies, or artifacts, through the analogy of a lens flare in a photograph, which is caused not by a defect in the camera lens but by the interaction of light with the camera.

If one knows how to interpret the lens flare, important information such as the location of the sun can be extracted. Because of this, Buckner thinks it may be possible to extract equally valuable information from adversarial events in machine learning that are caused by artifacts.

Buckner also says that all of this does not automatically mean deep learning isn’t valid.

“Some of these adversarial events could be artifacts,” he said. “We have to know what these artifacts are so we can know how reliable the networks are.”

Alex McFarland is a tech writer who covers the latest developments in artificial intelligence. He has worked with AI startups and publications across the globe.