
The Pentagon Is Developing New AI Technology

The United States Defense Department is stepping up its efforts to develop artificial intelligence (AI) technology. This comes at a time when other nations around the world, especially China, are racing ahead in the AI arms race.

The Pentagon is working on an artificial intelligence-powered drone swarm that can operate independently and is capable of identifying and tracking targets. Other efforts include intelligence fusion, “all-domain” command and control, and autonomous systems.

The Pentagon’s Joint Artificial Intelligence Center, or JAIC, has asked AI developers and drone swarm builders to collaborate on search and rescue missions. Search and rescue falls under humanitarian and disaster relief, one of the JAIC’s four core research areas. The same program also plays a role in developing AI solutions for predictive maintenance, cyberspace operations, and robotic process automation.

The objective of the request for information (RFI) is to find a full-stack search and rescue drone swarm capable of piloting itself. The swarm would also need to detect humans and other targets and transmit data and video back to a central location. The RFI additionally seeks algorithms developed by companies or teams, along with machine-learning training processes and data to supplement what the government already has.

If everything works out as planned, the government would have a contract with multiple vendors “that together could provide the capability to fly to a predetermined location/area, find people and manmade objects–through onboard edge processing–and cue analysts to look at detections sent via a data link to a control station,” as written in the RFI. “Sensors shall be able to stream full motion video to an analyst station during the day or night; though, the system will not normally be streaming as the AI will be monitoring the imagery instead of a person.”

The system being sought would need enough onboard processing power for the AI to operate without human intervention. It should be capable of detecting and monitoring targets, as well as streaming live video to an operator, who would then be able to take control of the drones.

These developments come as the Pentagon dives further into the world of artificial intelligence.

Back in October, the Defense Innovation Board, a Pentagon advisory organization, published a list of ethical principles to guide the development of AI-enabled weapons. The guidelines are meant to help control how such weapons are used on the battlefield. The board’s recommendations are not legally binding, and the Pentagon can decide whether or not to adopt them.

Lt. Gen. Jack Shanahan, director of the Defense Department’s Joint Artificial Intelligence Center, said he hopes the recommendations will lead to the responsible and ethical use of AI.

“The DIB’s recommendations will help enhance the DOD’s commitment to upholding the highest ethical standards as outlined in the DOD AI strategy, while embracing the U.S. military’s strong history of applying rigorous testing and fielding standards for technology innovations,” Shanahan said in a statement emailed to reporters.

Back in 2017, 116 technology executives asked the United Nations to pursue an all-out ban on autonomous weapons. Google has also banned the use of its AI technology in any weapons system, a decision that came after employees complained about the company’s role in a program to analyze drone footage. Companies such as Microsoft and Amazon continue to work with the military while pushing for a better approach to the technology.

Alex McFarland is an AI journalist and writer exploring the latest developments in artificial intelligence. He has collaborated with numerous AI startups and publications worldwide.