Thinking Like a Human: Can AI Develop Analogical Reasoning?

When faced with something new, human beings instinctively reach for comparisons. A child learning about atoms might hear that electrons orbit the nucleus “like planets orbit the sun.” An entrepreneur might pitch their startup as “Uber for pet grooming.” A scientist may tell a non-specialist audience that the brain processes information “like a computer.”

This mental leap – seeing how one thing resembles another in its deeper structure – is called analogical reasoning. And it may be the ingredient that separates human intelligence from AI in its current form. If we are ever to develop Artificial General Intelligence – the Holy Grail of AI that has so far proved elusive – we must figure out whether it is even possible for machines to learn to think analogically. The stakes could not be higher. If the answer is “No,” then even the most sophisticated AI systems will forever remain nothing more than glorified calculators, unable to solve problems that require more than a reshuffling of the data they have been trained on.

The architecture of understanding

Analogical reasoning works at the level of structural, rather than surface-level, similarities. For instance, what makes hearts and water pumps similar? Certainly not their physical appearance. It is the fact that both perform the same function: circulating fluid through a system. And it is precisely this ability to map relationships from one context onto another that makes human learning, creativity, and problem-solving so powerful.

There is no shortage of real-world examples. Take August Kekulé, the brilliant German chemist, who received a hint about the structure of benzene in the form of a dream where he saw a snake biting its own tail. Today, programmers apply lessons from organizing a kitchen when structuring code, and teachers explain electrical current by comparing it to water flowing through pipes.

Current AI systems, however, find this common cognitive skill very difficult. When prompted, modern large language models (LLMs) are only too happy to explain why “time is money,” or to complete verbal reasoning puzzles. But mounting evidence suggests they are often engaging in sophisticated pattern matching, rather than genuine structural mapping. When researchers present these models with novel analogical problems that deviate from their training data, performance often plummets. This is because LLMs excel at reproducing analogies they have seen before but falter when asked to forge new connections.

No analogical reasoning, no AGI

Evidently, analogical reasoning is the sine qua non of AGI. Without it, AI systems remain brittle, unable to adapt knowledge that is relevant in one domain to solve problems in another. For instance, imagine a self-driving car that has learned to navigate sunny California streets but cannot extrapolate that learning to handle snowy conditions. The car’s AI system is an expensive pattern matcher, not a system capable of bona fide intelligence. True intelligence would require the cognitive flexibility to recognize that driving on icy roads is structurally comparable to other slippery-surface scenarios, even if the specifics differ.

The same principle applies in domains beyond autonomous vehicles, of course. Analogical thinking also drives progress in science, medical diagnosis, legal reasoning, and creative endeavors. AI systems without this capacity resemble a scholar who has memorized an entire library but cannot synthesize that knowledge across disciplines. Impressive, sure, but only in a narrow way.

Building the analogical mind

So, what would it take to develop AI systems capable of human-like analogical reasoning? Based on emerging research and the fundamental nature of analogical thinking, several critical conditions and techniques appear to be necessary.

Structurally rich and diverse training data

The first requirement is AI systems trained on data that goes beyond surface-level text patterns. The internet, with its vast repository of scientific papers, technical documentation, creative works, and explanatory content, is a good jumping-off point. But not just any internet data will do. What is required is structural diversity. In other words, to guide AI systems towards learning to recognize abstract patterns, developers should expose them to contrasts from day one of training. Their training data could feature architectural blueprints alongside musical scores, mathematical proofs together with poetry, or legal arguments next to cooking recipes. Since each domain embodies different types of relational structures, an AGI-to-be would benefit from this kind of exposure.

More importantly, this data needs to preserve and highlight structural relationships, not just statistical correlations. Knowledge graphs, causal diagrams, and explicitly mapped relationships between concepts could help AI systems learn to “see” structure rather than memorize associations mechanically. Think of it as teaching AI not just what things are, but how they relate to each other in principled ways.
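As a rough illustration, the sketch below shows what structurally annotated data might look like if relationships were stored as explicit triples. The domains, relation names, and helper function are hypothetical, not an existing dataset format.

```python
# A minimal sketch of structurally annotated training data: each domain is
# described by explicit relation triples rather than raw text alone, so a
# system can be trained or probed on relational structure directly.
# The domains, relations, and helper below are illustrative assumptions.

heart_domain = [
    ("heart", "pumps", "blood"),
    ("blood", "circulates_through", "body"),
    ("valves", "regulate", "blood"),
]

plumbing_domain = [
    ("pump", "pumps", "water"),
    ("water", "circulates_through", "pipes"),
    ("valves", "regulate", "water"),
]

def shared_relations(source, target):
    """Return the relation types appearing in both domains.

    Overlap in *relations* (not in the entities themselves) is what an
    analogical mapping would exploit.
    """
    return {rel for _, rel, _ in source} & {rel for _, rel, _ in target}

print(shared_relations(heart_domain, plumbing_domain))
# e.g. {'pumps', 'circulates_through', 'regulate'} (set order may vary)
```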

Testing beyond the training set

To make sure AI systems are learning to reason analogically, and not merely improving their mimicry skills, we need tools that deliberately probe their ability to map structure onto situations they have never encountered before. This entails constructing test problems that are intentionally dissimilar from anything likely to appear in training data – what researchers call “counterfactual” tasks.

For instance, instead of asking an AI to complete standard analogies like “puppy is to dog as kitten is to ____,” we might present it with problems using invented concepts or ask it to map relationships between domains it has never seen connected. Can it recognize that the relationship between ingredients and a recipe parallels the relationship between evidence and a legal argument, even if it has never encountered that specific comparison? Such tests would reveal whether the system grasps underlying structures or merely recalls similar examples.
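The toy generator below sketches one way such a counterfactual probe might be constructed, using invented terms so the answer cannot be retrieved from memorized word pairs. The nonsense vocabulary, prompt wording, and expected answer are illustrative assumptions.

```python
# Sketch of a "counterfactual" analogy probe built from invented tokens, so a
# model cannot answer by recalling familiar word pairs. The nonsense terms,
# prompt template, and expected answer are illustrative assumptions.

import random

NONSENSE = ["blorf", "zindle", "wum", "trazzle", "morpid", "flench"]

def make_part_whole_probe(rng):
    # Invent a whole/part pair, state the relation, and ask the model to map
    # the same part-whole structure onto a familiar target domain.
    whole, part = rng.sample(NONSENSE, 2)
    facts = f"In an invented world, every {whole} contains many {part}s."
    question = (f"Using only the facts above, complete the analogy: "
                f"{whole} is to {part} as orchestra is to ____?")
    return facts + "\n" + question, "musician"

rng = random.Random(0)
prompt, expected = make_part_whole_probe(rng)
print(prompt)
print("expected:", expected)
# A grader would check whether the completion preserves the part-whole
# relation rather than echoing a surface association.
```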

Measuring what matters

The good news for AI developers is that there is decades’ worth of cognitive science research dealing specifically with how humans process analogies. They can use this research to develop robust benchmarks for analogical reasoning. However, these benchmarks must go beyond simply counting correct answers on analogy tests. What is really needed are metrics that capture whether AI systems can identify which relationships are relevant to map, while ignoring superficial similarities and maintaining consistency across their mappings.

This might involve scoring systems that reward identifying higher-order relationships. For example, an AI will score higher if it can not only recognize that both atoms and solar systems involve orbiting, but also understand the causal relationships that govern those orbits. Another competence to evaluate could be whether AI can spontaneously generate appropriate analogies to explain novel concepts, not just complete pre-structured analogy problems.
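A minimal sketch of what such a scoring rule could look like appears below; the gold relational triples, the surface attributes, and the weighting are hypothetical choices, not an established benchmark.

```python
# Sketch of a scoring rule that rewards relational (structural) matches and
# gives no credit for surface-attribute matches. The gold structures and
# weights are illustrative assumptions.

GOLD_RELATIONS = {
    ("electron", "orbits", "nucleus"),
    ("nucleus", "attracts", "electron"),
}
SURFACE_ATTRIBUTES = {
    ("electron", "is", "small"),
    ("nucleus", "is", "round"),
}

def relational_score(model_mappings):
    """Score a set of (source, relation, target) triples produced by a model.

    +1 for each gold relational triple recovered, 0 for surface attributes,
    -1 for asserted relations that the gold structure does not support.
    """
    score = 0
    for triple in model_mappings:
        if triple in GOLD_RELATIONS:
            score += 1
        elif triple in SURFACE_ATTRIBUTES:
            score += 0   # surface similarity earns no credit
        else:
            score -= 1   # unsupported relational claims are penalized
    return score

# A model that maps the orbit/attraction structure scores 2;
# one that only lists surface features scores 0 or below.
print(relational_score({("electron", "orbits", "nucleus"),
                        ("nucleus", "attracts", "electron")}))   # 2
print(relational_score({("electron", "is", "small")}))           # 0
```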

Scaffolding through prompting

Recent research suggests that AI’s ability to think analogically depends to a large extent on how it is asked to do so. Analogical prompting – explicitly guiding models through the process of structural mapping – can elicit more sophisticated reasoning than simply presenting problems cold. This might involve first asking the system to identify relationships in a source domain, then explicitly requesting it to map those relationships onto a target domain.
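The sketch below illustrates one way this two-stage prompting could be wired up; `ask_model` is a placeholder for whatever LLM client is available, and the prompt wording is an assumption rather than a tested recipe.

```python
# Sketch of two-stage "analogical prompting": first elicit the relations in a
# source domain, then ask for an explicit mapping onto the target domain.
# `ask_model` is a placeholder for your LLM client; the prompt wording is an
# illustrative assumption.

def ask_model(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")

def analogical_prompt(source_domain: str, target_domain: str) -> str:
    # Stage 1: make the source structure explicit before any mapping happens.
    relations = ask_model(
        f"List the key relationships between the parts of {source_domain}, "
        f"one per line, in the form 'X <relation> Y'."
    )
    # Stage 2: request an explicit mapping of those relations onto the target.
    mapping = ask_model(
        f"Here are relationships found in {source_domain}:\n{relations}\n"
        f"For each one, name the corresponding relationship in {target_domain}, "
        f"or say 'no counterpart' if none exists."
    )
    return mapping

# Example call (requires a real ask_model implementation):
# print(analogical_prompt("the solar system", "an atom"))
```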

This technique could serve dual purposes: improving current AI systems’ analogical abilities while also generating training data for future models. Recording successful instances of guided analogical reasoning would create examples that can teach subsequent systems to engage in this process more naturally.

Hybrid architectures

Achieving human-like analogical reasoning might require moving beyond pure neural network approaches. Hybrid systems that combine pattern recognition with symbolic reasoning – explicitly representing and manipulating structural relationships – could provide the missing piece. While neural networks excel at learning implicit patterns, symbolic systems can enforce the structural consistency and logical mapping that analogical reasoning demands.
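As a rough sketch of the idea, the snippet below pairs a stand-in “neural” similarity score with a symbolic consistency check over relation triples; the toy similarity function, domains, and mapping are illustrative assumptions, not a proposed architecture.

```python
# Sketch of a hybrid check: a stand-in "neural" similarity score suggests
# candidate correspondences, while a symbolic constraint verifies that a
# proposed mapping is structurally consistent (one-to-one and
# relation-preserving). All details below are illustrative assumptions.

def toy_similarity(a: str, b: str) -> float:
    # Stand-in for an embedding-based similarity; here, crude character overlap.
    common = set(a) & set(b)
    return len(common) / max(len(set(a) | set(b)), 1)

def is_structurally_consistent(mapping, source_rels, target_rels):
    """Symbolic check: the mapping must be one-to-one, and every mapped
    source relation must also hold between the mapped entities in the target."""
    if len(set(mapping.values())) != len(mapping):
        return False
    for (x, rel, y) in source_rels:
        if x in mapping and y in mapping:
            if (mapping[x], rel, mapping[y]) not in target_rels:
                return False
    return True

source_rels = {("sun", "attracts", "planet"), ("planet", "orbits", "sun")}
target_rels = {("nucleus", "attracts", "electron"), ("electron", "orbits", "nucleus")}
mapping = {"sun": "nucleus", "planet": "electron"}

print(toy_similarity("planet", "electron"))                            # weak surface signal
print(is_structurally_consistent(mapping, source_rels, target_rels))   # True
```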

Hybrid architectures are still in their infancy, but researchers are actively exploring their potential. Some, for instance, argue that combining neural networks with symbolic reasoning could lead to enhanced analogical capabilities. Others promote hybrid models designed to counter AI models’ tendency to confabulate and to reason analogically only at a shallow level.

Where next?

Depending on whom you ask, analogical reasoning is either already emerging in today’s models or merely a more sophisticated form of mimicry. Whichever position is closer to the truth, it is clear that, if the dream of AGI is to be realized, it will take more than just larger models or more data. It will also require fundamental innovations in how we structure, train, and evaluate our AI systems.

As AI’s transformative capabilities unfold, analogical reasoning has come to represent both a critical benchmark for performance and a sobering reminder of the gap between AI’s current capabilities and genuine human cognition. When an AI system can see that democracy is to citizens what an orchestra is to musicians – recognizing not surface features but deep structural relationships about coordination, representation, and emergent harmony – it will have crossed a crucial threshold toward true intelligence.

For over 13 years, Gediminas Rickevicius has been a force of growth in market-leading IT, advertising, and logistics companies around the globe. He has been changing the traditional approach to business development and sales by integrating big data into strategic decision-making. As the Senior VP of Global Partnerships at Oxylabs, Gediminas continues his mission to empower businesses with state-of-the-art public web data gathering solutions.