In a recently held virtual dogfight pitting an AI-controlled fighter plane against a human pilot, the AI defeated its human opponent, adding another example of AIs outclassing humans at extraordinarily complex tasks.
As reported by DefenseOne, the virtual dogfight was orchestrated by the US military as part of an ongoing effort to demonstrate that autonomous agents can defeat aircraft in dogfights, a project called the AlphaDogfight Trials. The Defense Advanced Research Projects Agency (DARPA) chose eight AI teams developed by various defense contractors and pitted them against each other in virtual dogfights. The winner of this tournament, an AI developed by Heron Systems, was then pitted against a human pilot who wore a VR helmet and sat in a flight simulator. The AI reportedly won all five rounds.
The AI developed by Heron Systems was a deep reinforcement learning system. Deep reinforcement learning lets an AI agent experiment in an environment over and over, learning from trial and error. Lockheed Martin's AI, the runner-up in the competition, also used deep reinforcement learning. Lockheed Martin engineers and directors explained that developing algorithms that perform well in air combat is a very different task from simply designing an algorithm that can fly while maintaining a particular orientation and altitude. The algorithms must learn not only that certain actions carry penalties, but that not all penalties are weighted equally: some actions, such as crashing, have far more severe consequences than others. This is done by assigning a weight to every possible action and adjusting those weights based on the experiences the agent accumulates.
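The weighting process described above can be sketched with a toy Q-learning loop. Everything below, from the altitude states and action set to the specific reward values, is a hypothetical illustration of the general technique, not the contractors' actual code:

```python
import random

random.seed(0)  # deterministic run for this illustration

# Toy environment: altitude levels 0..4, where 0 means "crashed".
# Actions: 0 = descend, 1 = hold, 2 = climb.
# Crashing is weighted far more severely than the routine per-step cost.
REWARDS = {"crash": -100.0, "step_cost": -1.0}

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2
q_table = {(s, a): 0.0 for s in range(5) for a in range(3)}

def step(state, action):
    """Apply an action; descending from altitude 1 crashes the plane."""
    next_state = max(0, min(4, state + action - 1))
    if next_state == 0:
        return next_state, REWARDS["crash"], True   # severe penalty, episode over
    return next_state, REWARDS["step_cost"], False  # mild penalty each step

def choose_action(state):
    """Epsilon-greedy: mostly exploit learned weights, sometimes explore."""
    if random.random() < EPSILON:
        return random.randrange(3)
    return max(range(3), key=lambda a: q_table[(state, a)])

for episode in range(2000):  # learn from repeated trial and error
    state = 2
    for _ in range(20):
        action = choose_action(state)
        next_state, reward, done = step(state, action)
        best_next = max(q_table[(next_state, a)] for a in range(3))
        # Temporal-difference update: nudge the weight for (state, action)
        # toward the observed reward plus discounted future value.
        q_table[(state, action)] += ALPHA * (
            reward + GAMMA * best_next - q_table[(state, action)]
        )
        state = next_state
        if done:
            break

# After training, the learned weight for descending at low altitude
# (which risks a crash) should sit far below the weight for holding.
print(q_table[(1, 0)], q_table[(1, 1)])
```

The key point is in the reward table: the agent is never told "don't crash" directly; it simply experiences that crashing costs 100 times more than an ordinary step, and the repeated updates push the weights for crash-prone actions far below the rest.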
Heron Systems said that it trained its model through over 4 billion simulations, and that the model had acquired roughly 12 years of flight experience as a result. However, the AI was not permitted to learn from its experiences during the combat trials themselves. It's unclear how the results would have changed had the model been allowed to learn from the contest rounds, or had the contest run longer. The human pilot adapted to the AI's tactics after a few rounds and was lasting much longer against the AI by the end, but by the time the pilot had adapted, it was just a little too late.
This is actually the second time an AI has beaten a human in a simulated dogfight; in 2016, an AI system defeated a fighter jet instructor. The recent DARPA event was a more robust test, because numerous AIs were first pitted against each other to find the best one before it took on the human pilot.
Timothy Grayson, director of DARPA’s Strategic Technology Office, said the trial aims to better understand how machines and humans interact and to build better human-machine teams. Grayson was quoted as saying:
“I think what we’re seeing today is the beginning of something I’m going to call human-machine symbiosis… Let’s think about the human sitting in the cockpit, being flown by one of these AI algorithms as truly being one weapon system, where the human is focusing on what the human does best [like higher-order strategic thinking] and the AI is doing what the AI does best.”