A diverse team of engineers, biologists, and mathematicians at the University of Michigan has developed a defense system for neural networks modeled on the adaptive immune system, capable of defending against various types of attacks.
Nefarious groups can subtly alter the input of a deep learning algorithm to steer it toward the wrong output, a major problem for applications like identification, machine vision, natural language processing (NLP), language translation, fraud detection, and more.
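To make the threat concrete, here is a minimal, hypothetical sketch of an adversarial perturbation in the style of the fast gradient sign method (FGSM). The toy linear classifier, its weights, and the input are all invented for illustration; the point is only that a small, signed per-feature nudge can flip a model's decision.

```python
import numpy as np

# Toy linear classifier (illustrative, not from the study):
# predicts class 1 when w.x + b > 0.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    return int(w @ x + b > 0)

# A clean input, correctly classified as class 1.
x = np.array([0.6, 0.1, 0.2])

# FGSM-style attack: step each feature against the class score.
# For a linear model, the input gradient of the score is simply w.
eps = 0.35
x_adv = x - eps * np.sign(w)

print(predict(x), predict(x_adv))  # the small perturbation flips 1 -> 0
```

Each feature moved by at most 0.35, yet the prediction flipped. On images, the same idea produces perturbations too small for a human to notice.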
Robust Adversarial Immune-Inspired Learning System
The newly constructed defense system is called the Robust Adversarial Immune-Inspired Learning System. The work was published in IEEE Access.
Alfred Hero, the John H. Holland Distinguished University Professor, co-led the work.
“RAILS represents the very first approach to adversarial learning that is modeled after the adaptive immune system, which operates differently than the innate immune system,” Hero said.
The team found that deep neural networks, which are already inspired by the brain, can also mimic the biological process of the mammalian immune system. This immune system generates new cells that are designed to defend against specific pathogens.
Indika Rajapakse, associate professor of computational medicine and bioinformatics, co-led the study.
“The immune system is built for surprises. It has an amazing design and will always find a solution,” Rajapakse said.
Mimicking the Immune System
RAILS mimics the natural defenses of the immune system, which enables it to identify and address suspicious inputs to the neural network. The biological team first studied how the adaptive immune systems of mice responded to an antigen before creating a model of the immune system.
Stephen Lindsly, then a doctoral student in bioinformatics, analyzed the resulting data. Lindsly served as a translator between the biologists and engineers, which enabled Hero’s team to model the biological process on computers by blending biological mechanisms into the code.
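The article does not spell out the algorithm, but the immune-system analogy suggests a familiar loop: clone stored exemplars, mutate the clones, and let the highest-affinity mutants vote, much as B cells undergo clonal expansion and affinity maturation against a pathogen. The following is a purely illustrative sketch of that idea; the function name, data, and parameters are invented and are not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny labeled "memory" set standing in for training data (illustrative).
X = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
y = np.array([0, 0, 1, 1])

def immune_like_predict(x, n_clones=20, sigma=0.05, keep=5):
    # Clonal expansion: make mutated copies of each stored exemplar.
    clones = np.repeat(X, n_clones, axis=0)
    labels = np.repeat(y, n_clones)
    clones = clones + rng.normal(0.0, sigma, clones.shape)
    # Affinity maturation: keep the mutants closest to the input.
    affinity = -np.linalg.norm(clones - x, axis=1)
    survivors = np.argsort(affinity)[-keep:]
    # Classify by plurality vote among the surviving mutants.
    return int(np.argmax(np.bincount(labels[survivors])))

print(immune_like_predict(np.array([0.05, 0.10])))  # near class-0 cluster
```

Because the vote is taken over freshly generated, high-affinity candidates rather than the raw input alone, a small adversarial perturbation has less leverage over the final label.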
RAILS defenses were tested with adversarial inputs.
“We weren’t sure that we had really captured the biological process until we compared the learning curves of RAILS to those extracted from the experiments,” Hero said. “They were exactly the same.”
RAILS outperformed two of the most common machine learning defenses currently used to fight adversarial attacks: Robust Deep k-Nearest Neighbor and convolutional neural networks.
Ren Wang, a research fellow in electrical and computer engineering, was largely responsible for the development and implementation of the software.
“One very promising part of this work is that our general framework can defend against different types of attacks,” Wang said.
The researchers then used image identification as a test case to evaluate RAILS against eight types of adversarial attacks across several datasets. RAILS demonstrated improvement in all cases, even protecting against the Projected Gradient Descent (PGD) attack, the most damaging type of adversarial attack. RAILS also improved overall accuracy.
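For readers unfamiliar with PGD: it is essentially an iterated gradient-sign attack in which each step is projected back into a small ball around the original input, making it much stronger than a single-step attack. Below is a minimal, hypothetical sketch against a toy linear score; the weights, step size, and budget are invented for illustration only.

```python
import numpy as np

# Toy differentiable score f(x) = w.x (illustrative linear "model").
w = np.array([2.0, -1.0])

def pgd_attack(x0, eps=0.3, alpha=0.1, steps=10):
    """Projected Gradient Descent inside the L-infinity ball around x0."""
    x = x0.copy()
    for _ in range(steps):
        grad = w                            # d(w.x)/dx for a linear score
        x = x - alpha * np.sign(grad)       # step to drive the score down
        x = np.clip(x, x0 - eps, x0 + eps)  # project back into the ball
    return x

x0 = np.array([0.5, 0.5])
x_adv = pgd_attack(x0)
print(w @ x0, w @ x_adv)  # the score drops while x_adv stays within eps of x0
```

The repeated step-and-project loop is what makes PGD a standard benchmark for adversarial robustness: a defense that holds up under PGD is holding up under many weaker attacks as well.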
“This is an amazing example of using mathematics to understand this beautiful dynamical system,” Rajapakse said. “We may be able to take what we learned from RAILS and help reprogram the immune system to work more quickly.”