Tech Advisory Group Pushes For Limits On Pentagon’s AI Use

The Pentagon has made its intention to invest heavily in artificial intelligence clear, stating that AI will make the US military more powerful and more resilient to national security threats. As Engadget reports, this past Thursday the Defense Innovation Board put forward a number of proposed ethical guidelines for the use of AI in the military. The proposals include strategies for avoiding unintended bias and a requirement that AI systems be governable, with emergency stopping procedures to prevent them from causing unnecessary harm.

Wired reports that the Defense Innovation Board was created by the Obama administration to help the Pentagon acquire tech industry expertise and talent. The board is currently chaired by former Google CEO Eric Schmidt. It was recently tasked with establishing guidelines for the ethical implementation of AI in military projects, and on Thursday it released its guidelines and recommendations for review. The report notes that the time for serious discussion about the use of AI in a military context is now, before a serious incident makes that discussion unavoidable.

According to Artificial Intelligence News, a former military official recently stated that the Pentagon was falling behind when it comes to the use of AI. The Pentagon is aiming to close this gap and has declared the development and expansion of military AI a national priority. AI ethicists are concerned that in the Pentagon's haste to become a leader in AI, AI systems may be used in unethical ways. While various independent AI ethics boards have made their own suggestions, the Defense Innovation Board has proposed five principles that the military should follow at all times when developing and implementing AI systems.

The first principle proposed by the board is that humans should always be responsible for the use, deployment, and outcomes of any artificial intelligence platform in a military context. This echoes a 2012 policy mandating that humans ultimately be part of the decision-making process whenever lethal force could be used. Other principles on the list offer general advice, such as ensuring that AI systems are built by engineers who understand and thoroughly document their programs, and that military AI systems are always tested for reliability. These guidelines may seem like common sense, but the board wants to underscore their importance.

The remaining principles concern controlling bias in AI algorithms and ensuring that AI systems can detect when they may cause unintended harm and automatically disengage. The guidelines specify that if unnecessary harm is about to occur, the AI should be able to disengage and hand control back to a human operator. The draft also recommends that AI output be traceable, so analysts can see what led a system to a given decision.

The board's recommendations underscore two ideas: AI will be integral to the future of military operations, yet much of AI still relies on human management and decision making.

While the Pentagon doesn't have to adopt the board's recommendations, it sounds as if it is taking them seriously. As reported by Wired, Lieutenant General Jack Shanahan, director of the Joint Artificial Intelligence Center, stated that the board's recommendations would assist the Pentagon in "upholding the highest ethical standards as outlined in the DoD AI strategy, while embracing the US military's strong history of applying rigorous testing and fielding standards for technology innovations."

The tech industry as a whole remains wary of using AI in the creation of military hardware and software. Employees at both Microsoft and Google have protested collaborations with military entities, and Google recently elected not to renew the contract that had it contributing to Project Maven. A number of CEOs, scientists, and engineers have also signed a pledge not to "participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons." If the Pentagon does adopt the board's suggested guidelines, the tech industry could become more willing to collaborate on military projects, though only time will tell.
