
Advanced AI Technologies Present Ethical Challenges – Thought Leaders


By Alfred Crews, Jr., Vice President and Chief Counsel for the Intelligence & Security sector of BAE Systems Inc.

Earlier this year, before the global pandemic, I attended The Citadel’s Intelligence Ethics Conference in Charleston, where we discussed ethics in intelligence collection as it relates to protecting national security. In the defense industry, we are seeing a proliferation of knowledge, computing power, and advanced technologies, especially in artificial intelligence (AI) and machine learning (ML). However, deploying AI in the context of intelligence gathering or real-time combat raises significant issues.

AI coupled with quantum computing presents risks

What we must question, analyze, and chart a path forward on is the use of AI coupled with quantum computing in wartime decision-making. For example, remember the Terminator? As our technology makes leaps and bounds, the reality Skynet presented is before us, and we could find ourselves asking, “Is Skynet coming to get us?” Take a stroll down memory lane with me: the AI machines took over because they could think and make decisions on their own, without a human to direct them. When the machines deduced that humans were a bug, they set out to destroy humankind. Don’t get me wrong, AI has great potential, but I believe it must have control parameters because of the risk involved.

AI’s ethical ambiguities & philosophical dilemma

I believe this is precisely why the U.S. Department of Defense (DoD) issued its own Ethical Principles for AI: the use of AI raises new ethical ambiguities and risks. When AI is combined with quantum computing capabilities, the nature of decision-making changes and the risk of losing control increases, more than we might realize today. Quantum computing puts the human brain’s operating system to shame, because supercomputers can make exponentially more calculations, faster and with more accuracy, than our brains ever will.

Additionally, the use of AI coupled with quantum computing presents a philosophical dilemma. At what point will the world allow machines to have a will of their own? And if machines are permitted to think on their own, does that mean they have become self-aware? Does being self-aware constitute life? As a society, we have not yet determined how to answer these questions. Thus, as it stands today, machines taking action on their own, without a human to control them, could have serious ramifications. Could a machine override a human’s intervention to stop firing? If the machine is operating on its own, will we be able to pull the plug?

As I see it, using AI from a defensive standpoint is easy to justify. But how much easier would it be to shift to the offensive? On offense, machines would be making combat firing decisions on the spot. Would a machine firing on an enemy constitute a violation of the Geneva Conventions and the laws of armed conflict? As it moves into this space at a rapid rate, the world must agree that the use of AI and quantum computing in combat must conform to the laws we currently have in place.

The DoD’s position on using AI with autonomous systems is that there will always be a person engaged in the decision-making process; a person makes the final call on pulling the trigger to fire a weapon. That’s our rule, but what happens if an adversary decides to take another route and lets an AI-capable machine make all the final decisions? Then the machine, which, as we discussed, is already faster, smarter, and more accurate, would have the advantage.
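To illustrate the pattern only, here is a minimal Python sketch of what a human-in-the-loop rule looks like as a hard gate in a decision loop. The names, types, and console prompt are hypothetical stand-ins, not any actual weapons-system interface:

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    """A machine-generated engagement recommendation (hypothetical)."""
    target_id: str
    confidence: float  # the model's confidence that the target is valid

def human_approves(rec: Recommendation) -> bool:
    """Stand-in for review by a trained human operator."""
    answer = input(f"Engage {rec.target_id} (confidence {rec.confidence:.0%})? [y/N] ")
    return answer.strip().lower() == "y"

def decide(rec: Recommendation) -> str:
    # The machine recommends; only a person makes the final call on firing.
    return "engage" if human_approves(rec) else "hold fire"

if __name__ == "__main__":
    print(decide(Recommendation(target_id="T-042", confidence=0.93)))
```

The point of the sketch is structural: the approval step sits on the only path to “engage,” so removing the human removes the safeguard, which is exactly the route an adversary might choose.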

Let’s look at a drone equipped with AI and facial recognition: the drone fires of its own accord on a pre-determined target labeled a terrorist. Who is actually responsible for the firing? Is there accountability if a biased mistake is made?

Bias baked into AI/ML

Research suggests that a machine is less likely to make mistakes than a human. However, research also shows that there is bias in machine learning, introduced by the human “teacher” training the machine. The DoD’s five Ethical Principles of AI reference existing biases, stating, “The Department will take deliberate steps to minimize unintended bias in AI capabilities.” We already know from published studies that facial recognition applications produce more false positives for people of color. When a person writes the code that teaches a machine how to make decisions, biases will be present. This can be unintentional, because the person creating the AI was not aware of the bias within themselves.

So, how does one eliminate bias? AI output is only as good as its input, so there must be controls. You must control the data flowing in, because skewed data is what makes AI results less valid. Developers will also have to continually rewrite code to eliminate bias.
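To make that concrete, here is a minimal sketch, in Python, of the kind of audit a developer might run on a facial recognition system: comparing false-positive rates across demographic groups. The group labels, threshold, and records below are hypothetical; real audits use large labeled evaluation sets and far more rigorous statistics:

```python
from collections import defaultdict

# Hypothetical match records: (demographic_group, model_score, is_true_match).
# In a real audit these would come from a labeled evaluation set.
records = [
    ("group_a", 0.91, True), ("group_a", 0.62, False),
    ("group_b", 0.88, True), ("group_b", 0.74, False),
    ("group_b", 0.81, False),  # high-scoring non-match: a potential false positive
]

THRESHOLD = 0.7  # hypothetical score above which the system declares a "match"

def false_positive_rate_by_group(records, threshold):
    """How often the model flags a true non-match as a match, per group."""
    flagged = defaultdict(int)    # non-matches scored above the threshold
    negatives = defaultdict(int)  # all true non-matches seen for the group
    for group, score, is_match in records:
        if not is_match:
            negatives[group] += 1
            if score >= threshold:
                flagged[group] += 1
    return {g: flagged[g] / negatives[g] for g in negatives}

if __name__ == "__main__":
    # A large gap between groups is the "baked-in" bias the DoD principle targets.
    for group, fpr in false_positive_rate_by_group(records, THRESHOLD).items():
        print(f"{group}: false-positive rate {fpr:.0%}")
```

An audit like this does not fix bias on its own, but it makes the disparity measurable, which is the precondition for controlling the input data and rewriting the code.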

The world must define the best use of technology

Technology in and of itself is neither good nor bad; it is how a nation puts it to use that can take the best of intentions and have them go wrong. As technology advances in ways that impact human lives, the world must work together to define appropriate action. If we take the human out of the equation in AI applications, we also take away that pause before pulling the trigger, the moral compass that guides us, the moment when we stop and ask, “Is this right?” A machine taught to engage will not have that pause. So the question is: in the future, will the world stand for this? How far will the world go in allowing machines to make combat decisions?

Alfred Crews, Jr. is vice president and chief counsel for the Intelligence & Security sector of BAE Systems Inc., a leader in providing large-scale systems engineering, integration, and sustainment services across air, land, sea, space, and cyber domains for the U.S. Department of Defense, intelligence community, federal civilian agencies, and troops deployed around the world. Crews oversees the sector’s legal, export control, and ethics functions.