Futurist Series

From Moore’s Law to “OpenAI’s Law”: The Exponential Trajectory of AI Development

Artificial intelligence is advancing at a speed that’s difficult to comprehend. To describe this phenomenon, insiders have begun referencing what some call “OpenAI’s Law”—a modern parallel to Moore’s Law, but far steeper. This term was brought to wider attention in the book Empire of AI, which chronicles the rise of OpenAI and the unfolding race toward artificial general intelligence (AGI). In the book, “OpenAI’s Law” is used to capture the breakneck pace at which compute requirements—and thus AI capabilities—have scaled over the past decade.

While not a formal scientific law, OpenAI’s Law refers to a real and measurable trend: the rapid doubling of computing power used in training frontier AI models, occurring at a pace far faster than Moore’s Law. In practical terms, AI compute has been doubling approximately every three to four months, compared to Moore’s 18–24 months. This exponential curve underpins the modern AI boom and sets the stage for a future that is arriving faster than most expect.

Moore’s Law: The Engine That Powered the Digital Age

Moore’s Law was the driving force behind the rise of personal computers, smartphones, and cloud computing. It predicted that the number of transistors on a chip would double approximately every two years, leading to exponential gains in computing power, energy efficiency, and cost reduction.

For decades, this simple pattern held true, with each generation of hardware delivering roughly double the performance of the last and compounding into exponential gains over time. But as physical and economic limits were reached in the 2010s, Moore’s Law began to slow down. Engineers responded with more cores, 3D chip stacking, and specialized processors to extend performance—but the easy gains were gone.

It was around this time that AI research, powered by deep learning breakthroughs, began diverging from the traditional trajectory of Moore’s Law.

The Birth of OpenAI’s Law: AI’s Explosive Compute Curve

In the early 2010s, researchers discovered that feeding more compute into large-scale neural networks led to increasingly powerful AI capabilities. Starting around 2012, the amount of compute used in the largest AI training runs began doubling roughly every 3 to 4 months.

This was an astonishing acceleration—much faster than Moore’s Law. Over six years, the compute used in state-of-the-art AI models increased by more than 300,000×. While Moore’s Law would have delivered only a 7× increase in that time, AI compute skyrocketed due to aggressive scaling.
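As a rough sanity check, the doubling time implied by those figures can be worked out directly. The short sketch below assumes the six-year window cited above; the published estimates use slightly different start and end dates, so treat the output as a ballpark rather than a precise figure.

```python
import math

# Sanity check: what doubling time is implied by the growth factors cited above?
# The six-year window is an assumption taken from the text, not a precise date range.

def implied_doubling_months(window_months: float, growth_factor: float) -> float:
    """Doubling time (in months) implied by a total growth factor over a window."""
    return window_months / math.log2(growth_factor)

window = 6 * 12  # ~six years, in months

ai_doubling = implied_doubling_months(window, 300_000)  # cited AI-compute growth
moore_doubling = implied_doubling_months(window, 7)     # cited Moore's-Law growth

print(f"Implied AI-compute doubling time:  ~{ai_doubling:.1f} months")    # ~4 months
print(f"Implied Moore's-Law doubling time: ~{moore_doubling:.1f} months") # ~26 months
```

The implied doubling times, roughly four months for frontier AI compute versus about two years for Moore’s Law, line up with the pace described above.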

This phenomenon became informally known as OpenAI’s Law—a self-imposed trajectory by organizations like OpenAI, which believed that scaling model size and compute was the fastest path to artificial general intelligence (AGI). The book Empire of AI describes this shift in detail, illustrating how OpenAI and its leadership committed to this strategy despite the rising costs, because they believed it was the most direct route to unlocking transformative capabilities.

Critically, OpenAI’s Law isn’t a physical inevitability—it’s a strategic decision. The belief that “more compute equals better AI” became a guiding principle, backed by massive investments, infrastructure buildouts, and partnerships with cloud providers.

The Scaling Hypothesis and the New Arms Race

Underpinning OpenAI’s Law is the scaling hypothesis: the idea that simply making models bigger and training them on more data with more compute leads to qualitatively better results. This hypothesis gained traction as each successive model—GPT-2, GPT-3, GPT-4—demonstrated leaps in fluency, reasoning, and multimodal understanding.

At the heart of this trend is an intense competition between tech companies to dominate the frontier of AI. The result has been a kind of arms race, where every new milestone requires exponentially more computational resources than the last.

Training large models now requires tens of thousands of high-end GPUs operating in parallel. Projections for future models involve compute budgets that could approach or exceed $100 billion, with massive power and infrastructure demands.

This trend has led to a new kind of exponential curve—one no longer defined by transistor counts, but by the willingness and ability to scale compute at all costs.

How It Compares: Huang’s Law and Kurzweil’s Law of Accelerating Returns

To fully grasp the significance of OpenAI’s Law, it helps to explore other foundational frameworks that have shaped our understanding of technological progress beyond Moore’s Law.

Huang’s Law, named after NVIDIA CEO Jensen Huang, describes the observation that GPU performance for AI workloads has been improving at a rate significantly faster than Moore’s Law. Over a five-year period, GPUs have seen performance gains exceeding 25×, far outpacing the roughly 10× improvements expected under traditional transistor scaling.

This acceleration isn’t due to chip density alone—it’s the result of system-level innovation. Improvements in GPU architecture, increased memory bandwidth, high-speed interconnects, and advancements in software ecosystems such as CUDA and deep learning libraries have all contributed to these gains. Engineering optimizations in scheduling, tensor operations, and parallelism have also played a vital role.

Performance improvements in single-GPU inference and training tasks have reached up to 1,000× over the past decade, driven by this compounding stack of hardware and software innovation. In effect, GPU capability for AI tasks has been doubling every 6 to 12 months—three to four times faster than Moore’s original curve. This relentless pace has made GPUs the indispensable engines of modern AI, enabling the massive parallelized training runs that underpin OpenAI’s Law.
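The same back-of-the-envelope arithmetic can be applied to the GPU figures above. This is a minimal sketch assuming the five-year and ten-year windows cited in the text; actual gains vary considerably by workload.

```python
import math

# Doubling times implied by the GPU performance figures cited above.
# Window lengths come from the text; the gains themselves vary by workload.

def implied_doubling_months(window_months: float, growth_factor: float) -> float:
    """Doubling time (in months) implied by a total growth factor over a window."""
    return window_months / math.log2(growth_factor)

print(f"25x over 5 years     -> doubling every ~{implied_doubling_months(60, 25):.1f} months")
print(f"1,000x over 10 years -> doubling every ~{implied_doubling_months(120, 1_000):.1f} months")
```

Both figures work out to a doubling time of roughly a year, at the slower end of the range described above.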

Kurzweil’s Law of Accelerating Returns takes the idea of exponential growth a step further—it proposes that the rate of exponential growth itself accelerates over time. According to this principle, each technological breakthrough doesn’t just stand alone; it creates the tools, platforms, and knowledge that make the next breakthrough happen faster and more efficiently. This leads to a compounding effect where technological change feeds on itself, accelerating in both scale and frequency.

Kurzweil has argued that this dynamic will compress what would have been centuries of progress into mere decades. If the rate of progress doubles every decade, the 21st century could experience an astonishing leap—equivalent to tens of thousands of years of advancement at historical rates.
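A toy calculation shows where the “tens of thousands of years” figure comes from. The sketch below assumes the first decade of the century delivers ten years of progress at the historical (year-2000) rate and that the rate doubles every decade thereafter; both are simplifying assumptions for illustration.

```python
# Toy model of Kurzweil-style accelerating returns: the rate of progress doubles
# every decade. All numbers here are illustrative assumptions, not measurements.

DECADES = 10          # the 21st century
BASELINE_YEARS = 10   # progress in the first decade, in "year-2000-rate" years

total = sum(BASELINE_YEARS * 2**d for d in range(DECADES))
print(f"Equivalent progress over the century: ~{total:,} year-2000-rate years")
# -> ~10,230 years; continuous-doubling variants land between roughly ten and
#    twenty thousand years, the same order of magnitude Kurzweil describes.
```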

This law is particularly relevant to AI. Modern AI is no longer just a subject of progress—it has become an accelerator of progress. AI systems are already assisting in designing new chips, optimizing neural networks, conducting scientific research, and even writing the very code used to build their successors. This creates a recursive improvement loop, where each generation of AI improves the next, shrinking development timelines and multiplying capabilities.

This feedback cycle begins to resemble what some call an intelligence explosion: a scenario in which AI systems become capable of rapidly improving themselves without human intervention. The result is a curve that doesn’t just rise steeply—it bends upward dramatically, as iteration cycles collapse and breakthroughs cascade. If this pattern continues, we may witness a phase of technological progress that feels almost instantaneous—where entire industries, scientific fields, and modes of thought evolve in months rather than decades.

OpenAI’s Law fits within this lineage as a demand-side expression of exponential growth. Unlike Moore’s or Huang’s Laws, which describe the pace of hardware improvements, OpenAI’s Law reflects how much compute researchers are actually choosing to consume in pursuit of better results. It shows that AI progress is no longer strictly bound by what chips can do, but rather by what researchers are willing—and able—to scale. Fueled by vast cloud infrastructure and billions in investment, OpenAI’s Law exemplifies a new era where capability grows not only through innovation, but through intentional, concentrated force.

Together, these laws sketch a multi-dimensional view of exponential growth. Moore and Huang define the supply of compute. Kurzweil maps the meta-trend of compounding progress. And OpenAI’s Law highlights a new kind of technological ambition—where pushing the limits is no longer optional, but the central strategy.

The Promise: Why Exponential AI Matters

The implications of OpenAI’s Law are profound.

On the optimistic side, exponential scaling has produced astonishing results. AI systems can now write essays, generate code, assist in scientific research, and engage in surprisingly fluid conversations. Each 10× increase in scale seems to unlock new emergent abilities, suggesting we may be inching closer to AGI.

AI could soon transform industries ranging from education and healthcare to finance and materials science. If OpenAI’s Law continues to hold, we might witness breakthroughs that compress decades of innovation into a few short years.

This is the essence of a new term we’ve coined: “AI escape velocity”—the moment when AI begins to improve itself, propelling progress into a self-reinforcing, exponential surge.

The Price: Environmental, Economic, and Ethical Costs

But exponential growth doesn’t come free.

Training frontier models now consumes enormous amounts of electricity and water. Powering thousands of GPUs for weeks on end creates serious environmental concerns, including carbon emissions and thermal waste. The supply chains for AI chips are also under pressure, raising geopolitical and sustainability issues.

Financially, only the largest tech companies or well-funded startups can afford to stay on the curve. This leads to concentration of power, where a small group of organizations control the frontier of intelligence.

Ethically, OpenAI’s Law encourages a race mindset—bigger, faster, sooner—which can lead to premature deployment, untested systems, and safety shortcuts. There is growing concern that some frontier models may be released before society fully understands their impacts.

To mitigate this, researchers have proposed governance frameworks that track AI development not by what models do, but by how much compute was used to train them. Since compute is one of the best predictors of model capability, it could become a proxy for risk assessment and regulation.
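To make the idea concrete, a compute-based rule could be as simple as estimating a training run’s total FLOPs and comparing the estimate against a reporting threshold. The sketch below uses the widely cited rule of thumb of roughly 6 FLOPs per parameter per training token for dense transformer training; the model sizes, token counts, and threshold are hypothetical values chosen for illustration rather than references to any real system or regulation.

```python
# Illustrative sketch of compute-based governance: estimate training compute with
# the common ~6 * parameters * tokens rule of thumb and compare it to a reporting
# threshold. The model sizes, token counts, and threshold below are hypothetical.

EXAMPLE_THRESHOLD_FLOP = 1e25  # illustrative reporting threshold

def estimated_training_flop(parameters: float, training_tokens: float) -> float:
    """Rough training-compute estimate using the ~6 * N * D rule of thumb."""
    return 6 * parameters * training_tokens

models = {
    "mid-size model": (70e9, 2e12),     # 70B params, 2T tokens (hypothetical)
    "frontier model": (1.5e12, 15e12),  # 1.5T params, 15T tokens (hypothetical)
}

for name, (params, tokens) in models.items():
    flop = estimated_training_flop(params, tokens)
    status = "above" if flop >= EXAMPLE_THRESHOLD_FLOP else "below"
    print(f"{name}: ~{flop:.1e} FLOP ({status} the example threshold)")
```

Because the estimate needs only a model’s parameter count and training-token count, a rule of this shape could be applied before a model is ever deployed, which is precisely why compute is attractive as a regulatory proxy.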

Limits of Scaling: What Happens When the Curve Bends?

Despite the impressive gains, there’s debate about how long the scaling trend can continue. Some believe we’re already seeing diminishing returns: larger models consume more compute but yield only marginal improvements.

Others argue that breakthroughs in efficiency, algorithm design, or model architecture could flatten the curve without slowing progress. Smaller, smarter models might become more attractive than brute-force behemoths.

Moreover, public pressure, regulation, and infrastructure limitations may force the industry to rethink the “scale at all costs” mindset. If power grids, budgets, or social consent can’t keep up, exponential AI might hit a ceiling—or at least a turning point.

The Road Ahead: Charting the Future of Exponential AI

For now, OpenAI’s Law remains one of the clearest lenses through which to view the future of artificial intelligence. It explains how we’ve moved from rudimentary chatbots to multimodal generalist systems in less than a decade—and why the next wave of progress may be even more dramatic.

Yet, the law also comes with trade-offs: access inequality, rising costs, environmental burdens, and safety challenges. As we accelerate into this new era, society will need to confront fundamental questions:

  • Who gets to shape the future of AI?
  • How do we balance progress with caution?
  • What systems are needed to manage exponential capability before it outruns human control?

OpenAI’s Law is not immutable. Like Moore’s Law before it, it may eventually slow, plateau, or be replaced by a new paradigm. But for now, it serves as both a warning and a roadmap—reminding us that the future of AI is not just advancing, it’s compounding.

We’re not just witnessing history—we’re engineering it at exponential speed. But with that power comes a responsibility: to ensure that humanity doesn’t suffer exponential harm alongside exponential progress.

Antoine is a visionary leader and founding partner of Unite.AI, driven by an unwavering passion for shaping and promoting the future of AI and robotics. A serial entrepreneur, he believes that AI will be as disruptive to society as electricity, and is often caught raving about the potential of disruptive technologies and AGI.

As a futurist, he is dedicated to exploring how these innovations will shape our world. In addition, he is the founder of Securities.io, a platform focused on investing in cutting-edge technologies that are redefining the future and reshaping entire sectors.