Thought Leaders
How AI Can Go Bust and Survive, Just Like the Internet Did

Ongoing dramatic market swings among AI-driven tech giants, with public companies losing more than $1 trillion in valuation in less than a month, clearly illustrate that valuations are disconnected from fundamentals. However, the real question to ask right now is not when the bubble will burst, but how the industry can responsibly deflate it and prepare for the AI of the future.
Over the last few years, AI has become synonymous with big valuations, unlimited scalability, and the sense that no one can compete with the biggest players. But the technical reality has shifted and points to a different kind of future for AI: The real money is not in enormously expensive AI models that will one day pay off in outsized returns. Increasingly, the value of AI will lie in how it is integrated and used to make money for businesses, keeping in mind that even limit-pushing frontier models should be getting cheaper, not more expensive. The singularity myth is over. Scale alone is no longer delivering step-function gains. Execution, distribution, and ecosystem now matter more than raw model size.
Adjusting expectations to this new reality will allow the growing AI bubble to slowly deflate, rather than burst and wreak havoc on the economy and financial markets like the dotcom bust did a quarter century ago.
In the 90s, the tech industry assumed the internet could and would do everything, and that anything built on the internet would, by nature, succeed. It was wrong, and the bubble did indeed burst, but the internet survived. The crash made clear that online success was not just about the underlying technology, the internet itself, but about the ability to develop smart and effective use cases, products and hardware. The internet did not win on protocols alone. It won when browsers, content delivery networks, and developer ecosystems made it usable.
Amazon survived, and still thrives. Pets.com failed because it never had a profitable way to ship its dog food, a problem obscured by the tantalizing idea that the internet would bring it customers all over the country.
That is exactly where Big AI is today: absorbed in dreams and expectations about the technology's future potential. There is no question that it is the most remarkable technology we have today. But AI models are just the underlying technology, not the answers themselves, and certainly not where the money and value will remain. In fact, the transformer and diffusion architectures that underlie most generative AI are public; optimization frameworks are open; compute power is increasingly accessible. The barrier is no longer theoretical know-how. It is the craft of building reliable systems and integrating them into existing creative and production pipelines that will determine who succeeds. These products and services also no longer require investors to front trillions of dollars. I know this from my own experience. Our team in Jerusalem built an open-source audio-video model that makes AI videos at roughly one tenth the cost of those from market leaders, and that generates longer continuous scenes, often at higher resolution and speed. This was achieved on about $100 million, not billions. Our story shows that modern AI progress is less about secret sauce and more about disciplined engineering.
As with the internet, those who survive will be those that harness AI for the best use cases, hardware applications, products, and services. It is true that exactly what those will be is hard to predict. After all, back in the early 90s, when people were using AOL or Prodigy, no one could have imagined Gmail.
Absent the power of clairvoyance, however, there are smart questions to ask along the way, questions that can guide the AI industry and its investors toward gradually deflating the bubble while building out the economy of the future.
Investors, including the VCs and pension funds pouring money into AI companies, need to ask what value, exactly, is being created. Billions of dollars were poured into research at the big tech companies to build AI that was, in the end, easily replicated elsewhere. Massive AI budgets no longer guarantee unique intellectual property, user lock-in or defensible economics. Investors now need to evaluate how companies build, optimize, and integrate models into customers' real workflows, creating actual products and services. When looking at AI applications, investors should ask for metrics like per-workload economics: what each unit of work costs to serve and what revenue it brings in.
These, not simply the talent behind a model or its proprietary nature, are the key elements of value. It is also important to understand the value of open-source models. These often out-iterate closed APIs because researchers and developers can adapt them locally. That adoption compounds into a moat around a company or product, helping to secure profits and long-term success.
Both investors and entrepreneurs concerned about the efficient use of capital need to step back and evaluate the real cost of AI and all its related components; these costs are often inflated and higher than they need to be. The guiding principle should be that hardware costs are volatile, so AI design should not depend on any specific device or vendor. What differentiates a company is its throughput per dollar, not vendor discounts that favor a certain type of hardware. The defensibility of AI spending now lies in infrastructure optimization, proprietary data, and integration depth. Entrepreneurs who carefully craft or choose models with that end performance in mind will win out over those who chase massive models in the hope of later scaling them to different uses. Another plus is offering open deployment options for studios and platforms that cannot depend on a remote API for real-time experiences.
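To make that metric concrete, here is a minimal sketch in Python of how a team might compare hardware options on throughput per dollar; the option names and figures are entirely hypothetical illustrations, not benchmarks of any real device or vendor.

from dataclasses import dataclass

@dataclass
class HardwareOption:
    name: str
    hourly_cost_usd: float   # effective hourly cost after any vendor discount
    outputs_per_hour: float  # e.g., seconds of video or tokens generated per hour

    def throughput_per_dollar(self) -> float:
        # The ratio that actually matters: work produced per dollar spent.
        return self.outputs_per_hour / self.hourly_cost_usd

# Hypothetical comparison: a discounted flagship accelerator vs. a cheaper commodity one.
options = [
    HardwareOption("flagship_gpu_discounted", hourly_cost_usd=12.0, outputs_per_hour=3000),
    HardwareOption("commodity_gpu", hourly_cost_usd=3.5, outputs_per_hour=1100),
]
for opt in options:
    print(f"{opt.name}: {opt.throughput_per_dollar():.0f} outputs per dollar")
# A deep vendor discount does not help if the cheaper option still wins on this ratio.

Framed this way, the comparison is vendor-neutral: plug in any device's real numbers and the decision follows the ratio, not the discount.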
Policymakers and the industry also need to think more logically about regulation. Progress has been slow and has focused heavily on frontier models running on large-scale hardware; this is no longer a practical approach. The momentum is overwhelmingly toward such models running on consumer devices, making regulation of the models themselves impossible. The open-source nature of many models presents another formidable challenge to the current approach. Once again, the right move is to focus on deployment via applications and products, and to develop regulatory frameworks around those for various industries, not overarching policies about models. The goal should be to regulate applications and sectors, with standards for provenance, safety guardrails in products, and disclosure for synthetic media. History from the 90s and early 2000s once again holds a wise lesson here: The case against the popular music file-sharing company Napster did not limit file-sharing per se (that technology only grew and became much faster, eventually giving way to streaming) but rather focused on a platform's responsible deployment of the technology. Even through bankruptcy, Napster managed to hang on as a brand by adjusting how it deployed its technology, and it was purchased for more than $200 million earlier this year.
The bottom line is that the market will consolidate around a few unified multimodal AI models that can be distilled for efficiency and adapted for different uses. All stakeholders need to pay much more attention to applications and the actual business value AI can bring, rather than losing themselves in the promise of the models themselves. The industry is inflating faster than it is creating value. Whether this ends in a dramatic correction, as the early internet bubble did, is open to debate. But clarity now means resilience later.