

Researchers Say Humans Would Not Be Able To Control Superintelligent AI


Anyone who follows artificial intelligence (AI) has likely heard some version of the idea that it will eventually break free of human control. This is not just a theme from sci-fi movies, but a real possibility that concerns many experts in the field. Many of these experts, scientists among them, argue that we should begin preparing for that possibility now and work to avoid it in whatever ways we can.

Now, an international team of researchers has backed this idea up with theoretical calculations. The team, which included scientists from the Center for Humans and Machines at the Max Planck Institute for Human Development, used those calculations to demonstrate that it would not be possible to control a superintelligent AI.

The research was recently published in the Journal of Artificial Intelligence Research.

Superintelligent AI System

The fear over a superintelligent AI system has to do with the fact that such a system would be far superior to humans. Not only would it be able to learn independently, but it could also access all existing data and process it extremely quickly.

Such a system could take over all machines connected to the internet, and while it might use that power to cure diseases and solve other major problems facing humanity, the risk of it getting out of control is also high.

Manuel Cebrian is co-author of the study and Leader of the Digital Mobilization Group at the Center for Humans and Machines, Max Planck Institute for Human Development. 

“A super-intelligent machine that controls the world sounds like science fiction. But there are already machines that perform certain important tasks independently without programmers fully understanding how they learned it. The question therefore arises whether this could at some point become uncontrollable and dangerous for humanity,” Cebrian says. 

Controlling the System

There are two major schools of thought on how such a system could be controlled. First, humans could limit the capabilities of a superintelligent AI, for example by restricting its access to certain data sources, such as the entire internet, or by cutting it off from the outside world altogether. The problem with this approach is that it would drastically limit the AI's potential.

Second, the AI could be allowed to pursue only outcomes that benefit humanity, which could be done by programming ethical principles into it.

In the study, the team developed a theoretical containment algorithm intended to prevent a superintelligent AI from harming humans under any circumstances, first by simulating the AI's behavior and then by halting it if any of its actions were deemed harmful. Despite the promise of this approach, their analysis shows that such an algorithm cannot be built.
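As a rough illustration, and not the paper's formal construction, the containment idea amounts to running the AI inside a simulator and vetoing any step judged harmful. In the minimal sketch below, step_ai, is_harmful, and execute are all hypothetical placeholder names:

def contain(step_ai, is_harmful, execute, state, budget=1_000_000):
    """Hypothetical containment-by-simulation loop (illustration only).

    Simulate the AI one action at a time; only let an action reach
    the real world if the harm check clears it.
    """
    for _ in range(budget):
        action, state = step_ai(state)   # simulate the AI's next action
        if is_harmful(action, state):    # the hard part: predicting harm
            return "halted"              # contain before the action executes
        execute(action)                  # action deemed safe, let it through
    return "budget exhausted"

The difficulty is concentrated in the harm check: to be sound, it would have to predict everything a simulated action could lead to, and the paper's result is that no such check can be both fully general and guaranteed to terminate.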

Iyad Rahwan is Director of the Center for Humans and Machines.

“If you break the problem down to basic rules from theoretical computer science, it turns out that an algorithm that would command an AI not to destroy the world could inadvertently halt its own operations. If this happened, you would not know whether the containment algorithm is still analyzing the threat, or whether it has stopped to contain the harmful AI. In effect, this makes the containment algorithm unusable,” says Rahwan.
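Rahwan's point mirrors the classic self-reference argument behind the halting problem. The following is a minimal sketch of that style of argument, not the paper's actual formalism, assuming a hypothetical decider called would_harm:

# Suppose, for contradiction, that a perfect containment check existed:
def would_harm(program_source: str, input_data: str) -> bool:
    """Hypothetical decider: returns True iff running program_source
    on input_data would ever perform a harmful action."""
    ...  # assumed to exist and to always terminate

# This adversarial program (held here as a string, never executed)
# defeats the check by doing the opposite of whatever it predicts
# about the program itself:
ADVERSARY_SOURCE = """
def main(input_data):
    if would_harm(ADVERSARY_SOURCE, input_data):
        pass        # predicted harmful -> behave safely
    else:
        do_harm()   # predicted safe -> act harmfully
"""

# Either answer would_harm gives about ADVERSARY_SOURCE is wrong, so no
# total, always-correct would_harm can exist -- the same self-reference
# that makes the halting problem undecidable.

In this light, a containment algorithm that tries to analyze every possible program faces the same obstacle: some programs cannot be analyzed to completion without the analysis itself running forever.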

Another issue is that experts might not even be able to recognize when a machine has reached superintelligence, largely because it would be more intelligent than humans.


Alex McFarland is a tech writer who covers the latest developments in artificial intelligence. He has worked with AI startups and publications across the globe.