In the last decade, artificial intelligence (AI) has elicited both dreams of a massive transformation in the tech industry and deep anxiety about its potential ramifications. Elon Musk, a leading voice in the tech industry, embodies this duality: he promises a world of autonomous, AI-powered cars while warning us of the risks of AI, even calling for a pause in its development. This is especially ironic considering that Musk was an early investor in OpenAI, which was founded in 2015.
One of the most exciting and concerning developments riding the current wave of AI research is autonomous AI. Autonomous AI systems can perform tasks, make decisions, and adapt to new situations on their own, without continual human oversight or task-by-task programming. One of the best-known examples at the moment is ChatGPT, a major milestone in the evolution of artificial intelligence. Let’s look at how ChatGPT came about, where it’s headed, and what the technology can tell us about the future of AI.
Building towards autonomous AI
The tale of artificial intelligence is a captivating one of progress and collaboration across disciplines. It began in the late 19th and early 20th centuries with the pioneering efforts of Santiago Ramón y Cajal, a neuroscientist whose studies of the brain's neuronal structure laid the groundwork for the concept of neural networks, a cornerstone of modern AI. Neural networks are computer systems that emulate the structure of the human brain and nervous system to produce machine-based intelligence. Decades later, Alan Turing helped develop the modern computer and proposed the Turing Test, a means of evaluating whether a machine can display human-like intelligent behavior. These developments spurred a wave of interest in AI.
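The building block of a neural network, the artificial neuron, is simpler than it sounds. A toy sketch in Python (an illustration of the concept, not any particular library) looks like this:

```python
# Toy model of a single artificial neuron.
# It weighs its inputs, sums them, and "fires" if the sum crosses a
# threshold, loosely mirroring how biological neurons aggregate signals.

def neuron(inputs, weights, bias):
    """Return 1 if the weighted sum of inputs plus bias exceeds 0, else 0."""
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if activation > 0 else 0

# Example: with these weights and bias, the neuron behaves like a
# logical AND gate -- it only fires when both inputs are on.
and_weights = [1.0, 1.0]
and_bias = -1.5
print(neuron([1, 1], and_weights, and_bias))  # fires: 1
print(neuron([1, 0], and_weights, and_bias))  # does not fire: 0
```

Modern networks chain millions of such units together and learn the weights automatically, but the core idea is the same.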
As a result, the 1950s saw John McCarthy, Marvin Minsky, and Claude Shannon explore the prospects of AI; McCarthy coined the term "artificial intelligence," and Frank Rosenblatt developed the perceptron, an early neural network. The following decades saw two major breakthroughs. The first was expert systems, which are AI systems individually designed to perform niche, industry-specific tasks. The second was natural language processing applications, like early chatbots. With the arrival of large datasets and ever-improving computing power in the 2000s and 2010s, machine learning techniques flourished, leading us to autonomous AI.
This significant step enables AI systems to perform complex tasks without the need for case-by-case programming, opening them up to a wide range of uses. One such autonomous system – ChatGPT from OpenAI – has of course recently become widely known for its remarkable ability to learn from vast amounts of data and generate coherent, human-like responses.
What made autonomous AI possible?
So what is the basis of ChatGPT? We humans have two basic capabilities that enable us to think. We possess knowledge, whether of physical objects or of concepts, and we possess an understanding of those things in relation to complex structures like language and logic. Being able to transfer that knowledge and understanding to machines is one of the toughest challenges in AI.
With knowledge alone, a model like OpenAI's GPT-4 could store isolated facts but couldn't relate them to one another. With context alone, it couldn't understand anything about the objects or concepts it was contextualizing. But combine both, and something remarkable happens: the model can become autonomous. It can understand and learn. Apply that to text, and you have ChatGPT; apply it to cars, and you have autonomous driving, and so on.
OpenAI isn't alone in its field; for decades, many companies have been developing machine learning systems built on neural networks that can handle both knowledge and context. So what changed when ChatGPT came to market? Some people have pointed to the staggering amount of data provided by the internet as the big change that fueled ChatGPT. However, if that were all that was needed, Google would likely have beaten OpenAI, given its dominance over all of that data. So how did OpenAI do it?
One of OpenAI's secret weapons is a technique called reinforcement learning from human feedback (RLHF). OpenAI used RLHF to train its models to apply both knowledge and context. OpenAI didn't invent RLHF, but the company was among the first to rely on it so heavily in developing a large language model (LLM) like the one behind ChatGPT.
In essence, RLHF allows the model to self-correct based on feedback. ChatGPT generates its initial response to a prompt autonomously, but human feedback on whether its responses are accurate or in some way problematic is folded back into training. That means the system can keep improving without significant changes to its underlying code. This approach produced a fast-learning chat system that quickly took the world by storm.
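The RLHF loop can be sketched in miniature. This is a deliberately simplified toy, not OpenAI's actual pipeline: real RLHF fits a neural reward model to human preference rankings and then optimizes the language model against it with reinforcement learning (commonly PPO). Here, the reward model is a hand-written stand-in and "policy improvement" is reduced to picking the highest-scoring candidate:

```python
# Toy sketch of the RLHF idea (illustrative names, not OpenAI's code).
# Step 1: humans rank pairs of model responses.
# Step 2: a reward model is fit to predict those rankings.
# Step 3: the model is updated to favor high-reward responses.

# Hypothetical human preference data: (preferred, rejected) response pairs.
preferences = [
    ("The capital of France is Paris.", "idk lol"),
    ("I can't help with that request.", "hmm whatever"),
]

def reward_model(response):
    """Stand-in for a learned reward model. In reality this is a neural
    network trained so preferred responses score higher than rejected ones."""
    # Toy rule: reward longer, complete-sentence answers.
    return len(response) / 100 + (0.5 if response.endswith(".") else 0.0)

def pick_best(candidates):
    """The policy-improvement step, reduced to choosing the
    highest-reward candidate instead of performing gradient updates."""
    return max(candidates, key=reward_model)

candidates = ["idk lol", "The capital of France is Paris."]
print(pick_best(candidates))  # -> "The capital of France is Paris."
```

The key design insight is the indirection: rather than hand-writing rules for what a "good" response looks like, humans only rank examples, and the reward model generalizes those rankings to unseen responses.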
Will autonomous AI replace human workers?
The new age of autonomous AI has begun. In the past, we had machines that could understand various concepts to a degree, but only in highly specific domains and industries. For example, industry-specific AI software has been used in medicine for some time. But the search for autonomous or general AI – meaning AI that can function on its own to perform a wide variety of tasks in various fields with a degree of human-like intelligence – finally produced globally noteworthy results in 2022, when ChatGPT arrived and, by many accounts, passed the Turing test.
Understandably, some people are starting to fear that their expertise, jobs, and even uniquely human qualities may be replaced by intelligent AI systems like ChatGPT. On the other hand, passing the Turing test isn't an ideal indicator of how "human-like" a particular AI system may be.
For example, Roger Penrose, who won the Nobel Prize in Physics in 2020, argues that passing the Turing test does not necessarily indicate true intelligence or consciousness. He argues that there is a fundamental difference between the way that computers and humans process information and that machines will never be able to replicate the type of human thought processes that give rise to consciousness.
By this argument, passing the Turing test is not a true measure of intelligence, because it merely tests a machine's ability to imitate human behavior rather than its ability to truly understand and reason about the world. If true intelligence requires consciousness and an understanding of the nature of reality that machines cannot replicate, then, far from replacing us, ChatGPT and similar software will simply provide tools to help us improve and increase efficiency in a variety of fields.
So, machines will be able to complete many tasks autonomously, in ways we never thought possible: understanding and writing content, securing vast amounts of information, performing delicate surgeries, and driving our cars. But for now, at least in this current age of technology, capable workers needn't fear for their jobs. Even autonomous AI systems don't possess human intelligence; they simply outperform us at certain tasks. They aren't more intelligent than we are overall, and they don't pose a significant threat to our way of life; at least, not in this wave of AI development.