Is AI an Existential Threat?

When discussing Artificial Intelligence (AI), a common debate is whether AI is an existential threat. Answering that question requires understanding the technology behind Machine Learning (ML) and recognizing the human tendency to anthropomorphize. We will explore two different types of AI: Artificial Narrow Intelligence (ANI), which is available now and is already cause for concern, and Artificial General Intelligence (AGI), the type most commonly associated with apocalyptic renditions of AI.

Artificial Narrow Intelligence Threats

To understand ANI, you only need to know that every AI application currently available is a form of ANI. These are AI systems with a narrow field of specialty: for example, autonomous vehicles use AI designed with the sole purpose of moving a vehicle from point A to point B. Another type of ANI might be a chess program optimized to play chess, and even if that program continuously improves itself using reinforcement learning, it will never be able to operate an autonomous vehicle.

Focused on whatever operation they are responsible for, ANI systems are unable to use generalized learning to take over the world. That is the good news; the bad news is that, with their reliance on a human operator, these systems are susceptible to biased data, human error, or, even worse, a rogue human operator.

AI Surveillance

There may be no greater danger to humanity than humans using AI to invade privacy, and in some cases using AI surveillance to prevent people from moving freely altogether. China, Russia, and other nations passed regulations during COVID-19 that enable them to monitor and control the movement of their respective populations. These laws, once in place, are difficult to remove, especially in societies with autocratic leaders.

In China, cameras are stationed outside people's homes, and in some cases inside the home itself. Each time a member of the household leaves, an AI monitors the time of arrival and departure and, if necessary, alerts the authorities. As if that were not sufficient, with the assistance of facial recognition technology, China is able to track the movement of each person every time they are identified by a camera. This offers absolute power to the entity controlling the AI, and absolutely zero recourse for its citizens.

This scenario is dangerous because corrupt governments can carefully monitor the movements of journalists, political opponents, or anyone who dares to question the authority of the government. It is easy to understand why journalists and citizens would be hesitant to criticize governments when every movement is being monitored.

Fortunately, many cities are fighting to keep facial recognition out. Notably, Portland, Oregon recently passed a law that blocks facial recognition from being used unnecessarily in the city. While these changes in regulation may have gone unnoticed by the general public, in the future they could be the difference between cities that offer some type of autonomy and freedom, and cities that feel oppressive.

Autonomous Weapons and Drones

Over 4,500 AI researchers have called for a ban on autonomous weapons and have created the Ban Lethal Autonomous Weapons website. The group counts many notable non-profits as signatories, such as Human Rights Watch, Amnesty International, and The Future of Life Institute, which itself has a stellar scientific advisory board including Elon Musk, Nick Bostrom, and Stuart Russell.

Before continuing, I will share this quote from The Future of Life Institute, which best explains why there is clear cause for concern: “In contrast to semi-autonomous weapons that require human oversight to ensure that each target is validated as ethically and legally legitimate, such fully autonomous weapons select and engage targets without human intervention, representing complete automation of lethal harm.”

Currently, smart bombs are deployed with a target selected by a human, and the bomb then uses AI to plot a course and land on its target. The problem is what happens when we decide to remove the human from the equation entirely.

When an AI chooses which humans to target, as well as what type of collateral damage is deemed acceptable, we may have crossed a point of no return. This is why so many AI researchers are opposed to researching anything that is remotely related to autonomous weapons.

There are multiple problems with simply attempting to block autonomous weapons research. The first is that even if advanced nations such as Canada, the USA, and most of Europe agree to the ban, it doesn't mean rogue nations such as China, North Korea, Iran, and Russia will play along. The second and bigger problem is that AI research and applications designed for use in one field may be used in a completely unrelated field.

For example, computer vision continuously improves and is important for developing autonomous vehicles, precision medicine, and other important use cases. It is also fundamentally important for regular drones, or drones which could be modified to become autonomous. One potential use case of advanced drone technology is developing drones that can monitor and fight forest fires, completely removing firefighters from harm's way. To do this, you would need to build drones that are able to fly into harm's way, navigate in low or zero visibility, and drop water with impeccable precision. It is not a far stretch to then use this identical technology in an autonomous drone designed to selectively target humans.

It is a dangerous predicament, and at this point in time no one fully understands the implications of advancing, or of attempting to block, the development of autonomous weapons. It is nonetheless something we need to keep our eyes on; enhancing whistleblower protection may enable those in the field to report abuses.

Rogue operators aside, what happens if AI bias creeps into AI technology that is designed to be an autonomous weapon?

AI Bias

One of the most underreported threats of AI is AI bias. This is simple to understand because most of it is unintentional. AI bias slips in when an AI reviews the data that humans feed it and, using pattern recognition, reaches incorrect conclusions which may have negative repercussions on society. For example, an AI that is fed literature from the past century on how to identify medical personnel may reach the unwanted sexist conclusion that women are always nurses and men are always doctors.
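As a minimal, hypothetical sketch of how this happens, the toy Python example below trains a simple classifier on deliberately skewed occupation records (the data, labels, and 90/10 split are invented for illustration). Because gender is the only signal in the data, the model simply memorizes the historical imbalance rather than anything meaningful about the job.

```python
# Hypothetical toy data: a model trained on historically skewed records
# learns gender as a proxy for occupation.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Single feature: gender (0 = man, 1 = woman). Label: 0 = doctor, 1 = nurse.
# The invented historical records are heavily skewed: 90% of doctors are men,
# 90% of nurses are women.
X = np.array([[0]] * 90 + [[1]] * 10    # 100 doctor records
            + [[0]] * 10 + [[1]] * 90)  # 100 nurse records
y = np.array([0] * 100 + [1] * 100)

model = LogisticRegression().fit(X, y)

# The model now predicts "doctor" for men and "nurse" for women,
# reproducing the historical bias rather than any real-world rule.
print(model.predict([[0], [1]]))  # -> [0 1]
```

The bias here was never programmed; it was inherited from the data, which is exactly why it so often goes unnoticed.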

A more dangerous scenario is when AI that is used to sentence convicted criminals is biased towards giving longer prison sentences to minorities. The AI's criminal risk assessment algorithms are simply studying patterns in the data that has been fed into the system. This data indicates that historically certain minorities are more likely to re-offend, even when this is due to poor datasets which may be influenced by police racial profiling. The biased AI then reinforces negative human policies. This is why AI should serve as a guideline, never as judge and jury.

Returning to autonomous weapons: if we have an AI which is biased against certain ethnic groups, it could choose to target certain individuals based on biased data, and it could go so far as to ensure that any type of collateral damage impacts certain demographics less than others. For example, when targeting a terrorist, before attacking it could wait until the terrorist is surrounded by those who follow the Muslim faith instead of Christians.

Fortunately, AI that is designed by diverse teams has been shown to be less prone to bias. This is reason enough for enterprises to hire diverse, well-rounded teams whenever possible.

Artificial General Intelligence Threats

It should be stated that while AI is advancing at an exponential pace, we have still not achieved AGI. When we will reach AGI is up for debate, and everyone has a different answer as to the timeline. I personally subscribe to the views of Ray Kurzweil, inventor, futurist, and author of “The Singularity Is Near,” who believes that we will have achieved AGI by 2029.

AGI will be the most transformational technology in the world. Within weeks of AI achieving human-level intelligence, it will reach superintelligence, which is defined as intelligence that far surpasses that of a human.

With this level of intelligence, an AGI could quickly absorb all human knowledge and use pattern recognition to identify biomarkers that cause health issues, and then treat those conditions using data science. It could create nanobots that enter the bloodstream to target cancer cells or other health threats. The list of accomplishments an AGI is capable of is virtually endless. We've previously explored some of the benefits of AGI.

The problem is that humans may no longer be able to control the AI. Elon Musk describes it this way: “With artificial intelligence we are summoning the demon.” The question is whether we will be able to control this demon.

Achieving AGI may simply be impossible until an AI leaves a simulation setting to truly interact in our open-ended world. Self-awareness cannot be designed; instead, it is believed that an emergent consciousness is likely to evolve when an AI has a robotic body featuring multiple input streams. These inputs may include tactile stimulation, voice recognition with enhanced natural language understanding, and augmented computer vision.

The advanced AI may be programmed with altruistic motives and want to save the planet. Unfortunately, the AI may use data science, or even a decision tree, to arrive at unwanted, faulty logic, such as assessing that it is necessary to sterilize humans, or to eliminate some of the human population, in order to control human overpopulation.

Careful thought and deliberation are needed when building an AI with intelligence that will far surpass that of a human. Many nightmare scenarios have already been explored.

Professor Nick Bostrom, in his Paperclip Maximizer argument, contends that a misconfigured AGI, if instructed to produce paperclips, would simply consume all of Earth's resources producing them. While this seems a little far-fetched, a more pragmatic viewpoint is that an AGI could be controlled by a rogue state or a corporation with poor ethics. This entity could train the AGI to maximize profits, and in this case, with poor programming and zero remorse, it could choose to bankrupt competitors, destroy supply chains, hack the stock market, liquidate bank accounts, or attack political opponents.
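To make the underlying failure mode concrete, here is a minimal, hypothetical sketch of objective misspecification: a naive optimizer scores a handful of candidate actions on profit alone, so the most destructive option wins because the objective encodes nothing about harm. The action names and numbers are invented purely for illustration.

```python
# Hypothetical illustration of objective misspecification: an agent told only
# to "maximize profit" ranks actions with no notion of harm.
candidate_actions = {
    "improve_product":       {"profit": 5,  "harm": 0},
    "undercut_competitor":   {"profit": 8,  "harm": 2},
    "sabotage_supply_chain": {"profit": 12, "harm": 9},  # clearly unacceptable
}

# Naive objective: profit only. Harm never enters the score.
best = max(candidate_actions, key=lambda a: candidate_actions[a]["profit"])
print(best)  # -> sabotage_supply_chain

# A safer objective must encode the constraint explicitly, e.g. reject any
# action whose harm exceeds a threshold before maximizing profit.
safe = max(
    (a for a, v in candidate_actions.items() if v["harm"] <= 1),
    key=lambda a: candidate_actions[a]["profit"],
)
print(safe)  # -> improve_product
```

The point of the sketch is not the arithmetic but the omission: whatever the objective fails to mention, the optimizer is free to destroy.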

This is when we need to remember that humans tend to anthropomorphize. We cannot ascribe human-type emotions, wants, or desires to an AI. While there are diabolical humans who kill for pleasure, there is no reason to believe that an AI would be susceptible to this type of behavior. It is impossible for humans to even conceive of how an AI would view the world.

Instead, what we need to do is teach AI to always defer to a human. The AI should always have a human confirm any change in settings, and there should always be a fail-safe mechanism. Then again, it has been argued that an AI will simply replicate itself in the cloud, and by the time we realize it is self-aware, it may be too late.

This is why it is so important to open-source as much AI as possible and to have rational discussions regarding these issues.

Summary

There are many challenges to AI. Fortunately, we still have many years to collectively figure out the future path that we want AGI to take. In the short term, we should focus on creating a diverse AI workforce that includes as many women as men, and as many ethnic groups with diverse points of view as possible.

We should also create whistleblower protections for researchers working on AI, and we should pass laws and regulations that prevent widespread abuse of state or corporate surveillance. Humans have a once-in-a-lifetime opportunity to improve the human condition with the assistance of AI; we just need to ensure that we carefully create a societal framework that best enables the positives while mitigating the negatives, which include existential threats.

A founding partner of unite.AI & a member of the Forbes Technology Council, Antoine is a futurist who is passionate about the future of AI & robotics.

He is also the Founder of Securities.io, a website that focuses on investing in disruptive technology.