Research Team Aims to Create Explainable AI for Nuclear Nonproliferation and Nuclear Security


Researchers from the Pacific Northwest National Laboratory (PNNL) are working to make AI explainable for the purposes of nuclear nonproliferation and national security. The goal is to make the reasoning behind AI model outputs transparent whenever those outputs inform decisions about nuclear security.

More attention than ever is being paid to explainable AI models, in an effort to solve the “black box” problem of machine learning. AI models are often trusted to make complex decisions even when the people responsible for acting on those decisions don’t understand the rationale behind them. The greater the potential for catastrophe in the domain where those decisions are made, the more important it is that the rationale be transparent.

It may not be necessary to understand the reasoning behind classifications if an AI application is doing something as simple as categorizing images of fruit, but in cases involving nuclear weapons or nuclear material production, it is far more important to open the black box underlying the AI.
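The article does not detail the specific techniques PNNL is developing, but one common way to “open the black box” of an otherwise opaque model is post-hoc feature attribution. The sketch below is a generic, hypothetical illustration rather than the team’s method: the data is synthetic and the feature names (e.g., `signal_0`) are placeholders. It trains an opaque classifier and then uses permutation importance to report which inputs its predictions depend on most.

```python
# Minimal sketch of one common explainability technique (permutation importance).
# This is NOT PNNL's method; the data is synthetic and the feature names are hypothetical.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a classification task; real nonproliferation data
# (e.g., sensor signatures) would replace this.
X, y = make_classification(n_samples=500, n_features=6, n_informative=3,
                           random_state=0)
feature_names = [f"signal_{i}" for i in range(X.shape[1])]  # placeholder names

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Permutation importance asks: how much does accuracy drop when each input is
# shuffled? Large drops flag the inputs the model's predictions depend on.
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name}: {score:.3f}")
```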

PNNL scientists are developing a variety of new techniques to make AI explainable. They are working alongside the Office of Defense Nuclear Nonproliferation Research and Development (DNN R&D) within the Department of Energy’s National Nuclear Security Administration (NNSA). DNN R&D oversees the United States’ ability to monitor and detect the production of nuclear material, the development of nuclear weapons, and nuclear detonations around the globe.

Given how high the stakes are in nuclear nonproliferation, it’s critical to know exactly how an AI system reaches its conclusions. Angie Sheffield, a senior program manager at DNN R&D, says it can often be difficult to incorporate new technologies like AI models into traditional scientific techniques and frameworks, but that the process can be made easier by designing better ways of interacting with these systems. Sheffield argues that researchers should create tools that enable developers to understand how these sophisticated techniques operate.

The relative scarcity of data on nuclear explosions and nuclear weapons development makes explainable AI even more important. Models trained in this space may be less reliable because far less data is available than for a task like face recognition. As a result, every step the model takes to reach a decision needs to be inspectable.
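One hedged illustration of making every step inspectable is to favor an intrinsically interpretable model whose decision process can be read directly, a common strategy when data is scarce. Again, this is not a description of PNNL’s system; the dataset and feature names below are hypothetical. The shallow decision tree’s complete set of decision rules can be printed for a human reviewer to audit.

```python
# Hedged sketch of an intrinsically interpretable model, where every decision
# step can be read off directly; a generic illustration, not PNNL's system.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

# Small synthetic dataset standing in for a sparse-data regime.
X, y = make_classification(n_samples=80, n_features=4, n_informative=2,
                           random_state=1)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]  # placeholder names

# A shallow decision tree trades some accuracy for a decision process that a
# human reviewer can audit rule by rule.
tree = DecisionTreeClassifier(max_depth=3, random_state=1).fit(X, y)
print(export_text(tree, feature_names=feature_names))
```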

Mark Greaves, a researcher at PNNL, explained that the risks inherent in nuclear proliferation mandate a system that can inform people about why a given answer has been selected.

As Greaves explained via EurekAlert!:

“If an AI system yields a mistaken probability about whether a nation possesses a nuclear weapon, that's a problem of a different scale entirely. So our system must at least produce explanations so that humans can check its conclusions and use their own expertise to correct for AI training gaps caused by the sparsity of data.”

As Sheffield explains, PNNL has two strengths that will help it solve this problem. First, PNNL has substantial experience in the AI field. Second, the team has deep domain knowledge of nuclear materials and weapons, understanding issues like plutonium processing and the signals unique to nuclear weapons development. This combination of AI experience, national security experience, and nuclear domain knowledge makes PNNL uniquely suited to handling matters at the intersection of nuclear security and AI.