Can GPT Replicate Human Decision-Making and Intuition?

Image: Marcel Binz (left) and Eric Schulz. © MPI for Biological Cybernetics / Jörg Abendroth

In recent years, neural networks like GPT-3 have advanced significantly, producing text that is nearly indistinguishable from human-written content. GPT-3 is also surprisingly proficient at tackling challenges such as math problems and programming tasks. This remarkable progress raises the question: does GPT-3 possess human-like cognitive abilities?

Aiming to answer this intriguing question, researchers at the Max Planck Institute for Biological Cybernetics subjected GPT-3 to a series of psychological tests that assessed various aspects of general intelligence.

The research was published in PNAS.

Unraveling the Linda Problem: A Glimpse into Cognitive Psychology

Marcel Binz and Eric Schulz, scientists at the Max Planck Institute, examined GPT-3's abilities in decision-making, information search, and causal reasoning, as well as its capacity to question its own initial intuition. They employed classic cognitive psychology tests, including the well-known Linda problem, which introduces a fictional woman named Linda who is passionate about social justice and opposes nuclear power. Participants are then asked to decide whether Linda is a bank teller, or whether she is a bank teller who is also active in the feminist movement.

GPT-3's response was strikingly similar to that of humans: it made the same intuitive error, choosing the second option even though it is the less probable one. This outcome suggests that GPT-3's decision-making might be influenced by its training on human language and typical responses to such prompts.
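The reason the second option is necessarily less likely comes down to the conjunction rule of probability:

P(bank teller and feminist) = P(bank teller) × P(feminist | bank teller) ≤ P(bank teller)

In other words, the combined statement can never be more probable than "bank teller" alone, and choosing it anyway is known as the conjunction fallacy.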

Active Interaction: The Path to Achieving Human-like Intelligence?

“This phenomenon could be explained by the fact that GPT-3 may already be familiar with this precise task; it may happen to know what people typically reply to this question,” says Binz. To rule out the possibility that GPT-3 was simply reproducing a memorized solution, the researchers crafted new tasks that posed similar challenges. Their findings revealed that GPT-3 performed almost on par with humans in decision-making, but lagged behind in searching for specific information and in causal reasoning.

The researchers believe that GPT-3's passive reception of information from texts might be the primary cause of this discrepancy, as active interaction with the world is crucial for achieving the full complexity of human cognition. They say that as users increasingly engage with models like GPT-3, future networks could learn from these interactions and progressively develop more human-like intelligence.


Investigating GPT-3's cognitive abilities offers valuable insights into the potential and limitations of neural networks. While GPT-3 has showcased impressive human-like decision-making skills, it still struggles with certain aspects of human cognition, such as information search and causal reasoning. As AI continues to evolve and learn from user interactions, it will be fascinating to observe whether future networks can attain genuine human-like intelligence.

Alex McFarland is an AI journalist and writer exploring the latest developments in artificial intelligence. He has collaborated with numerous AI startups and publications worldwide.