AI Researchers Develop Fast Method Of Calculating Confidence Intervals, Reporting When Model Shouldn’t Be Trusted

Researchers from MIT and Harvard have recently developed a technique that enables deep learning models to rapidly calculate confidence levels alongside their outputs, which could help data scientists and other AI users know when to trust the predictions a model renders.

AI systems based on artificial neural networks are responsible for more and more decisions these days, including many decisions that involve the health and safety of people. Because of this, neural networks should have some method of estimating the confidence in their outputs, enabling data scientists to tell how trustworthy their predictions are. Recently, a team of researchers from Harvard and MIT designed a quick way for neural networks to generate an indication of a model’s confidence alongside its predictions.

Deep learning models have become more and more sophisticated over the past decade, and they can now outperform humans on some data classification tasks. Deep learning models are also being used in fields where people’s health and safety are at risk should they fail, such as driving autonomous vehicles and diagnosing medical conditions from scans. In these cases, it isn’t enough that a model is 99% accurate; the 1% of cases where the model fails has the potential to lead to catastrophe. As a result, data scientists need a way to determine how trustworthy any given prediction is.

There are a handful of ways to generate a confidence interval alongside a neural network’s predictions, but traditional methods of estimating uncertainty are fairly slow and computationally expensive. Neural networks can be incredibly large and complex, filled with billions of parameters. Just generating predictions can be computationally expensive and take a substantial amount of time, and generating a confidence level for those predictions takes even longer. Most previous methods of quantifying uncertainty have relied on sampling, running a network over and over to get an estimate of its confidence. This isn’t always feasible for applications that need to process large volumes of data at high speed.
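
To make that cost concrete, the sketch below shows the kind of sampling-based approach the article alludes to, in this case Monte Carlo dropout, where the same input must pass through the network many times before an uncertainty estimate emerges. The tiny model, the dropout rate, and the 100 samples are all illustrative choices, not details from the research.

```python
import torch
import torch.nn as nn

# Illustrative regression model with dropout; the architecture, dropout rate,
# and sample count are hypothetical, not taken from the research.
model = nn.Sequential(
    nn.Linear(16, 64), nn.ReLU(), nn.Dropout(p=0.2),
    nn.Linear(64, 1),
)

x = torch.randn(8, 16)  # a batch of hypothetical inputs

# Sampling-based uncertainty (here, Monte Carlo dropout): keep dropout active
# and push the same input through the network many times.
model.train()  # leaves dropout enabled at inference time
with torch.no_grad():
    samples = torch.stack([model(x) for _ in range(100)])  # 100 forward passes

prediction = samples.mean(dim=0)   # averaged prediction
uncertainty = samples.std(dim=0)   # spread across samples acts as a confidence signal

# The cost scales with the number of samples: roughly 100x the latency of a
# single prediction, which is the bottleneck the MIT/Harvard method avoids.
```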

As reported by MIT News, Alexander Amini leads the combined group of researchers from MIT and Harvard, and according to Amini the method the team developed accelerates the process of generating uncertainty estimates using a technique called “deep evidential regression”. Amini explained via MIT News that data scientists need both high-speed models and reliable estimates of uncertainty so that untrustworthy models can be identified. To preserve the speed of the model while still producing an uncertainty estimate, the researchers designed a way to estimate uncertainty from just a single run of the model.

The researchers designed the neural network in such a way that a probability distribution is generated alongside every decision. The network holds on to evidence for its decisions during the training process, generating a probability distribution based on that evidence. This evidential distribution captures the model’s confidence, representing uncertainty both in the model’s final decision and in the original input data. Capturing uncertainty for both input data and decisions is important, because reducing uncertainty depends on knowing the source of that uncertainty.
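
As a rough sketch of how a single forward pass can carry an uncertainty estimate, the snippet below follows the usual formulation of deep evidential regression, in which the output layer emits the four parameters of a Normal-Inverse-Gamma distribution rather than a single value. The layer sizes and inputs are hypothetical; only the output parameterization and the standard uncertainty formulas follow the published method.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EvidentialHead(nn.Module):
    """Sketch of an evidential regression output layer: instead of a single
    value, the network emits the four parameters of a Normal-Inverse-Gamma
    distribution (gamma, nu, alpha, beta). Feature sizes are hypothetical."""

    def __init__(self, in_features: int):
        super().__init__()
        self.out = nn.Linear(in_features, 4)

    def forward(self, features):
        gamma, log_nu, log_alpha, log_beta = self.out(features).chunk(4, dim=-1)
        nu = F.softplus(log_nu)              # evidence, must be > 0
        alpha = F.softplus(log_alpha) + 1.0  # must be > 1
        beta = F.softplus(log_beta)          # must be > 0
        return gamma, nu, alpha, beta

# A single forward pass yields both a prediction and its uncertainty.
head = EvidentialHead(in_features=64)
features = torch.randn(8, 64)  # hypothetical backbone features for 8 inputs

gamma, nu, alpha, beta = head(features)
prediction = gamma                        # point estimate
aleatoric = beta / (alpha - 1)            # uncertainty in the input data
epistemic = beta / (nu * (alpha - 1))     # uncertainty in the model itself
```

Splitting the estimate into aleatoric (data) and epistemic (model) terms is what lets the source of the uncertainty be identified, as described above.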

The researchers tested their uncertainty estimation technique by applying it to a computer vision task. After the model was trained on a series of images, it generated both predictions and uncertainty estimates. The network correctly reported high uncertainty for instances where it made the wrong prediction. “It was very calibrated to the errors that the network makes, which we believe was one of the most important things in judging the quality of a new uncertainty estimator,” Amini said of the model’s test results.
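
A calibration check of this kind can be as simple as asking whether the reported uncertainty rises and falls with the actual error. The toy example below, using made-up numbers rather than the study’s data, illustrates the idea with a rank correlation.

```python
import numpy as np

# Toy illustration of the kind of calibration check described above: a
# well-calibrated estimator reports large uncertainty exactly where the error
# is large. All numbers here are made up.
predictions = np.array([2.1, 0.9, 4.8, 3.0, 7.5])
targets     = np.array([2.0, 1.0, 5.0, 3.1, 5.0])  # the last prediction is far off
uncertainty = np.array([0.2, 0.1, 0.3, 0.2, 2.5])  # ...and is flagged as uncertain

errors = np.abs(predictions - targets)

# Rank correlation between error and reported uncertainty; a value near 1
# means the estimator tracks the mistakes the network actually makes.
rank_err = errors.argsort().argsort()
rank_unc = uncertainty.argsort().argsort()
corr = np.corrcoef(rank_err, rank_unc)[0, 1]
print(f"rank correlation between error and uncertainty: {corr:.2f}")
```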

The research team went on to conduct more tests with their network architecture. To stress-test the technique, they also ran the network on “out-of-distribution” data, datasets composed of objects the network had never seen before. As expected, the network reported higher uncertainty for these unseen objects. When trained on indoor environments, for example, the network displayed high uncertainty when tested on images of outdoor environments. The tests showed that the network could highlight when its decisions were subject to high uncertainty and should not be trusted in certain high-risk circumstances.
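
In a deployment, that kind of signal would typically feed a simple gate: predictions whose uncertainty exceeds a threshold are deferred rather than acted on. The threshold and scores below are invented purely to illustrate the pattern.

```python
import numpy as np

# Hypothetical gating logic: defer any prediction whose uncertainty exceeds a
# threshold chosen on familiar, in-distribution data. Threshold and scores
# are invented for illustration.
UNCERTAINTY_THRESHOLD = 1.0
epistemic = np.array([0.1, 0.3, 4.2, 0.2])  # e.g. the third input is unfamiliar

for i, u in enumerate(epistemic):
    if u > UNCERTAINTY_THRESHOLD:
        print(f"input {i}: uncertainty {u:.1f} is too high, defer to a human")
    else:
        print(f"input {i}: uncertainty {u:.1f}, prediction can be used")
```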

The research team even reported that the network could discern when images had been doctored. When the team altered photos with adversarial noise, the network tagged the altered images with high uncertainty estimates, even though the effect was too subtle for the average human observer to see.

If the technique proves reliable, deep evidential regression could improve the safety of AI models in general. According to Amini, deep evidential regression could help people make careful decisions when using AI models in risky situations. As Amini explained via MIT News:

“We're starting to see a lot more of these [neural network] models trickle out of the research lab and into the real world, into situations that are touching humans with potentially life-threatening consequences. Any user of the method, whether it's a doctor or a person in the passenger seat of a vehicle, needs to be aware of any risk or uncertainty associated with that decision.”