Detecting nuclear threats with AI
New research from the Pacific Northwest National Laboratory (PNNL) uses machine learning, data analysis and artificial intelligence to identify potential nuclear threats.
PNNL nonproliferation analyst Benjamin Wilson is in a unique position to combine these data mining and machine learning techniques with nuclear analysis.
According to Wilson: “Preventing nuclear proliferation requires vigilance. It involves labor, from audits of nuclear materials to investigations into who is handling nuclear materials. Data analytics-driven techniques can be leveraged to make this easier.”
With support from the National Nuclear Security Administration (NNSA), the Mathematics for Artificial Reasoning in Science (MARS) Initiative, and the Department of Defense, PNNL researchers are working on several projects to improve the effectiveness of nuclear nonproliferation and security measures. Some of the main ones are summarized below.
Detecting the leakage of nuclear materials
Nuclear reprocessing facilities take in spent nuclear fuel and separate it into waste and reusable products. Those products are compounds that can be processed into new fuel for nuclear reactors; because they contain uranium and plutonium, they could also be used to make nuclear weapons. The International Atomic Energy Agency (IAEA) monitors nuclear facilities to ensure that none of this material is diverted to weapons, relying on regular long-term inspections as well as sample collection for later analysis.
“We could save a lot of time and labor costs if we could create a system that detects abnormalities automatically from the facility’s process data,” said Wilson.
In a study published in The International Journal of Nuclear Safeguards and Non-Proliferation, Wilson worked with researchers at Sandia National Laboratories to create a virtual replica of a reprocessing facility. They then trained an artificial intelligence model to detect patterns in the process data that represented the leakage of nuclear materials. In this simulated environment the model showed encouraging results. “Though it is unlikely that this approach would be used in the near future, our system provides a promising start to complement existing safeguards,” said Wilson.
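To make the idea concrete, here is a minimal sketch of what automatic anomaly detection on facility process data could look like. It is not the model from the published study: the detector choice, the two invented process variables (a tank level and a flow rate), and the diversion scenario are all illustrative assumptions.

```python
# A minimal sketch (not the published model): train an unsupervised anomaly
# detector on simulated "normal" process data, then flag readings from a
# scenario in which material slowly goes missing.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Simulated normal operation: correlated tank-level and flow-rate readings.
normal = rng.normal(loc=[50.0, 12.0], scale=[1.0, 0.3], size=(5000, 2))

# Simulated diversion: the tank level drifts downward with no matching flow.
diverted = rng.normal(loc=[50.0, 12.0], scale=[1.0, 0.3], size=(200, 2))
diverted[:, 0] -= np.linspace(0.0, 5.0, 200)

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal)                # learn what "normal" operation looks like

flags = detector.predict(diverted)  # -1 marks an anomalous reading
print(f"Flagged {np.sum(flags == -1)} of {len(diverted)} diverted-scenario readings")
```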
Analyzing texts for signs of nuclear weapons proliferation
PNNL data scientists have developed a machine learning tool based on Google BERT: a language model trained on Wikipedia data for general-purpose queries. Language models allow computers to “understand” human language: they can read texts and extract important information from them, including context and nuance. People can ask BERT questions such as “What is the population of Switzerland?” and get the right answer.
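For readers unfamiliar with this kind of model, the snippet below shows generic BERT-style extractive question answering with the open-source Hugging Face transformers library. It illustrates the underlying technique only, not PNNL's tool; the model name and the example passage are assumptions chosen for the demo.

```python
# Generic extractive question answering with a BERT-family model.
# This illustrates the technique only; it is not PNNL's AJAX system.
from transformers import pipeline

# A small, publicly available model fine-tuned for question answering.
qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

context = (
    "Switzerland had a resident population of roughly 8.7 million people "
    "at the end of 2021."  # invented example passage for the demo
)

result = qa(question="What is the population of Switzerland?", context=context)
print(result["answer"], round(result["score"], 3))  # answer span + confidence
```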
Although the model trained on Wikipedia is excellent at answering general questions, it lacks knowledge of the nuclear domain. So the team created AJAX, a tool that fills this knowledge gap.
“While AJAX is still in its early stages, it has the potential to save analysts many hours of working time by providing both a direct answer to queries and the evidence for that answer,” said Subramanian, one of the PNNL data scientists behind the tool. The evidence is particularly intriguing to researchers because machine learning models are often described as “black boxes” that leave no trace of how they arrived at an answer, even when that answer is correct. AJAX aims to provide auditability by retrieving the documents that contain the evidence.
According to Subramanian: “When the domain is as important as nuclear proliferation detection, it is critical for us to know where our information is coming from”.
This development was published in the International Journal of Nuclear Safeguards and Non-Proliferation.
Currently, IAEA analysts spend many hours reading research papers and manually combing through reams of data that may contain information on nuclear proliferation. The researchers hope that in the future it will be possible to ask AJAX a question and get not only an answer but also a link to the source of the information, which would greatly simplify analysts’ work.
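One plausible way to get “an answer plus its source” is a retrieve-then-answer pipeline: rank a document collection against the question, extract an answer from the best match, and report which document supplied it. The sketch below assumes that design; the mini-corpus, the helper function answer_with_evidence, and the model choice are illustrative and are not taken from the AJAX paper.

```python
# Hypothetical retrieve-then-answer sketch: rank documents by similarity to the
# question, answer from the top document, and return the document as evidence.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from transformers import pipeline

# Invented mini-corpus standing in for the reports an analyst might search.
documents = {
    "report_a.txt": "Facility X reported receipt of 40 kg of natural uranium in March.",
    "report_b.txt": "The enrichment plant at Site Y declared no change in inventory.",
}

qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

def answer_with_evidence(question: str) -> dict:
    names = list(documents)
    texts = [documents[name] for name in names]
    # Rank documents by TF-IDF cosine similarity to the question.
    vectorizer = TfidfVectorizer().fit(texts + [question])
    scores = cosine_similarity(
        vectorizer.transform([question]), vectorizer.transform(texts)
    )[0]
    best = names[int(scores.argmax())]
    # Extract a span-level answer from the best-matching document.
    result = qa(question=question, context=documents[best])
    return {"answer": result["answer"], "evidence": best}

print(answer_with_evidence("How much uranium did Facility X receive?"))
```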
Image analysis to determine the origin of nuclear materials
Sometimes law enforcement officers come across nuclear material that is outside regulatory control and of unknown origin. It is extremely important to find out where such material came from and where it was made, since the seized sample may be only part of a larger quantity in illicit circulation. Forensic analysis of nuclear materials is one of the tools used in this vital work.
PNNL researchers, in collaboration with the University of Utah, Lawrence Livermore National Laboratory, and Los Alamos National Laboratory, developed a machine learning algorithm for forensic analysis of these samples. Their method uses electron microscope images to compare the microstructures of nuclear samples: materials from different sources show subtle differences that machine learning can detect.
“Imagine that synthesizing nuclear materials was like baking cookies,” said Elizabeth Jurrus, MARS initiative lead. “Two people can use the same recipe and end up with different-looking cookies. It’s the same with nuclear materials.”
The synthesis of these materials can be influenced by many factors, such as local humidity and the purity of the starting materials. As a result, nuclear materials produced at a particular facility acquire a distinctive structure, a “signature look” that can be seen under an electron microscope.
The research is published in the Journal of Nuclear Materials.
The researchers built a library of images of various nuclear samples, then used machine learning to compare images of unknown samples against that library and identify their origin. This helps nuclear analysts trace the source of seized material and direct further investigation.
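As a rough illustration of how such a comparison could work, the sketch below embeds each micrograph with a general-purpose pretrained CNN and matches an unknown sample to its nearest neighbor in the library by cosine similarity. The facility names are hypothetical and random tensors stand in for real electron microscope images; this is not the published algorithm.

```python
# Hypothetical image-matching sketch: embed micrographs with a pretrained CNN
# and find the closest library entry. Random tensors stand in for real images.
import torch
import torchvision.models as models

# Pretrained ResNet-18 with the classification head removed -> feature extractor.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

def embed(image: torch.Tensor) -> torch.Tensor:
    """Map a (3, 224, 224) image tensor to a 512-dimensional feature vector."""
    with torch.no_grad():
        return backbone(image.unsqueeze(0)).squeeze(0)

# Stand-in library of reference micrographs, keyed by (hypothetical) facility.
library = {name: torch.rand(3, 224, 224) for name in ("facility_A", "facility_B")}
library_embeddings = {name: embed(img) for name, img in library.items()}

unknown = torch.rand(3, 224, 224)   # the seized sample's micrograph
query = embed(unknown)

similarities = {
    name: torch.nn.functional.cosine_similarity(query, emb, dim=0).item()
    for name, emb in library_embeddings.items()
}
best_match = max(similarities, key=similarities.get)
print(best_match, similarities)
```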
It will likely take some time before agencies like the IAEA incorporate machine learning techniques into their nuclear threat detection methods, but research like this is laying the groundwork for that shift.
“Though we don’t expect machine learning to replace anyone’s job, we see it as a way to make their jobs easier,” the researchers say. “We can use machine learning to identify important information so that analysts can focus on what is most significant.”