How Asimov’s Three Laws of Robotics Impact AI


The Three Laws of Robotics are iconic in the science fiction world, and they have become a symbol within the AI and robotics community of how difficult it is to design a truly foolproof system.

To fully comprehend the importance of these three laws, we must first learn about the brilliant mind who conceived them: the late science fiction author Isaac Asimov. We must then understand how these laws can be adapted and evolved to protect humanity.

Isaac Asimov – The Rise of a Genius

Isaac Asimov was born in Russia on January 2, 1920, and immigrated to the United States at age three. He grew up in Brooklyn, New York, and graduated from Columbia University in 1939. He was recognized as a gifted and prolific writer who focused on science and science fiction, and during his career he wrote or edited over 500 books.

Asimov was greatly inspired by some of the most iconic writers in the science fiction world. He began his employment at the Philadelphia Navy Yard, where he met two co-workers who would soon emerge as two of the most successful writers in speculative fiction history: L. Sprague de Camp and Robert A. Heinlein.

L. Sprague de Camp was an award-winning author who wrote over 100 books and was a major figure in science fiction in the 1930s and 1940s. Some of his most popular works included “Lest Darkness Fall” (1939), “The Wheels of If” (1940), “A Gun for Dinosaur” (1956), “Aristotle and the Gun” (1958), and “The Glory That Was” (1960).

Robert A. Heinlein was quite possibly the most popular science fiction writer in the world during the height of his career. Along with Isaac Asimov and Arthur C. Clarke, he was considered one of the “Big Three” of science fiction authors. Some of Heinlein's most popular works included “Farnham's Freehold” (1964) and “To Sail Beyond the Sunset” (1987). The current generation probably knows him best for the movie adaptation of his novel “Starship Troopers” (1959).

Being surrounded by these giants of futurism inspired Isaac Asimov to launch his prolific writing career. Asimov was also highly respected in the science community and was frequently booked as a public speaker to give talks about science.

The Three Laws of Robotics

Isaac Asimov was the first person to use the term “robotics,” in a short story called “Liar!” which was published in 1941.

Shortly after, his 1942 short story “Runaround” introduced the world to his Three Laws of Robotics. The laws are:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

These laws were designed to offer interesting plot points, and Asimov went on to create a series of 37 science fiction short stories and six novels that featured positronic robots.
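
Read literally, the three laws form a strict precedence hierarchy: the First Law overrides the Second, which in turn overrides the Third. The following is a minimal Python sketch of that ordering, not a real robotics API; the fields of the hypothetical Action type, such as harms_human, stand in for perception and prediction judgments that no current system can reliably make.

```python
# A minimal sketch of the Three Laws as a strict precedence hierarchy.
# Every field below is a hypothetical stand-in for a hard, unsolved
# perception or prediction problem; nothing here is a real robotics API.
from dataclasses import dataclass

@dataclass
class Action:
    description: str
    harms_human: bool            # would executing this action injure a human?
    inaction_harms_human: bool   # would refusing it allow a human to come to harm?
    ordered_by_human: bool       # was this action commanded by a human?
    endangers_self: bool         # would it damage or destroy the robot?

def permitted(action: Action) -> bool:
    # First Law: never injure a human, and never allow harm through inaction.
    if action.harms_human:
        return False
    if action.inaction_harms_human:
        return True  # the First Law compels the action, overriding the laws below
    # Second Law: obey human orders unless the First Law has already decided.
    if action.ordered_by_human:
        return True
    # Third Law: self-preservation has the lowest priority.
    return not action.endangers_self

rescue = Action("pull a human from a fire", harms_human=False,
                inaction_harms_human=True, ordered_by_human=False,
                endangers_self=True)
print(permitted(rescue))  # True: the First Law outranks self-preservation
```

Even this toy version shows where the trouble starts: every branch depends on a clean boolean answer to a question that is itself the hard part of the problem.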

One of these short story collections, titled “I, Robot”, was later adapted for film in 2004. The “I, Robot” movie, starring Will Smith, is set in a dystopian 2035 and features highly intelligent public servant robots that operate under the Three Laws of Robotics. The movie, much like the stories, quickly became a parable of how programming can go wrong, and of the high level of risk involved in programming any type of advanced AI.

The world has now caught up to what was previously science fiction: we are now designing AI that is in some ways far more advanced than anything Isaac Asimov could have imagined, while at the same time being far more limited.

The Three Laws of Robotics are referenced quite frequently in discussions of Artificial General Intelligence (AGI). We will quickly explore what AGI is, as well as how the Three Laws must evolve in order to avoid potential issues in the future.

Artificial General Intelligence (AGI)

Currently, most types of AI that we encounter on a daily basis are classified as “narrow AI”: AI that is very specific and narrow in its utility function. For example, an autonomous vehicle can navigate streets, but due to its “narrow” limitations it cannot easily complete other tasks. Another example of narrow AI would be an image recognition system that can easily identify and label images in a database, but could not easily be adapted to another task.

Artificial General Intelligence, commonly referred to as “AGI”, is AI that, like humans, can quickly learn, adapt, pivot, and function in the real world. It is a type of intelligence that is not narrow in scope; it can adapt to any situation and learn how to handle real-world problems.

It should be stated that while AI is advancing at an exponential pace, we have still not achieved AGI. When we will reach AGI is up for debate, and everyone has a different answer for the timeline. I personally subscribe to the views of Ray Kurzweil, inventor, futurist, and author of “The Singularity Is Near”, who believes that we will have achieved AGI by 2029.

It is this 2029 timeline that is a ticking clock: we must learn to hard-code a type of rulebook into AI, one that is not only similar to the three laws but more advanced, and actually able to avoid real-world conflict between humans and robots.

Modern Day Laws of Robotics

While the Three Laws of Robotics were phenomenal for literature, they lack the sophistication to be seriously programmed into a robot. This was, after all, the plot device behind the short stories and the novels: conflicts between the three laws, or at a minimum between interpretations of the three laws, caused robots to melt down, retaliate against humans, or set other pivotal plot points in motion.

The main problem with the laws as written is that the requirement to always obey human instructions and the requirement to always protect one's own existence may conflict. After all, is a robot allowed to defend itself against an owner who abuses it?

What type of fail-safe mechanism needs to be programmed in? How do we instruct a robot that it must shut off no matter what the repercussions? What happens if a robot is in the process of saving a woman from abuse? Should the robot shut itself off if instructed to do so by the abusive husband?
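
To make the shut-off question concrete, here is a tiny continuation of the earlier hypothetical sketch. Under a literal, Law-1-first reading, a shutdown order is refused whenever the robot judges that stopping would allow a human to come to harm; every name here is invented for illustration.

```python
# Toy model of the shut-off dilemma under a literal Law-1-first reading.
# All names are hypothetical; this is an illustration, not a control system.

def may_shut_down(ordered_to_stop: bool, stopping_allows_harm: bool) -> bool:
    # The First Law's inaction clause outranks the Second Law's duty to obey,
    # so a shutdown order is refused while a human is judged to be at risk.
    if stopping_allows_harm:
        return False
    return ordered_to_stop

# The abuser orders the robot to shut down mid-rescue: the order is refused.
print(may_shut_down(ordered_to_stop=True, stopping_allows_harm=True))   # False
# The same order when no one is judged to be at risk: the robot complies.
print(may_shut_down(ordered_to_stop=True, stopping_allows_harm=False))  # True
```

The entire dilemma is hidden inside the stopping_allows_harm flag: computing that value correctly, in the real world and in real time, is the unsolved problem, and a wrong hard-coded answer in either direction is exactly the failure mode Asimov's stories dramatize.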

Who should give instructions to robots? With autonomous weapons capable of identifying and targeting enemies from across the globe, should a robot be able to refuse a command to eliminate a target if it identifies the target as a child?

In other words, if a robot is owned and controlled by a psychopath, can it refuse orders that are immoral? The questions are numerous, and the answers are too difficult for any individual to answer. This is why organizations such as the Future of Life Institute are so important: the time to debate these moral dilemmas is now, before a true AGI emerges.

A founding partner of Unite.AI & a member of the Forbes Technology Council, Antoine is a futurist who is passionate about the future of AI & robotics.

He is also the Founder of Securities.io, a website that focuses on investing in disruptive technology.