Thought Leaders

The AI Reckoning: Why Infrastructure Matters Most

[Image: A technician in a modern data center inspects a server rack with a tablet, contrasting older server cabinets with new, high-density AI infrastructure.]

AI is the most consequential technology of our lifetimes, and we’re approaching a major inflection point that will redraw the business landscape.

Adoption is surging, with 78% of enterprises deploying AI in 2025 and market projections of $1.81 trillion by 2030. Yet behind that growth lies a harder truth: many enterprises are struggling to translate AI into real, scalable and tangible outcomes. It’s becoming clear that many are adopting AI without the operational changes required to run it at scale and capture its full value.

At the same time, the infrastructure underpinning AI is not keeping pace with demand. Organizations and models remain constrained by available GPU compute, while data center vacancy sits at record lows worldwide. New AI capacity is held back by power availability, build timelines and labor shortages.

This is the AI reckoning – a divide between those building for and adopting AI at the pace required and those constrained by conservative legacy models. By 2035, this gap could plausibly claim half of today’s companies. The race is on: adapt or die.

Delivering on AI’s promise

After years of headline-grabbing, multi-gigawatt announcements, organizations will finally confront a mark-to-market truth test this year: who is truly delivering, and who is merely relying on headlines and press releases to stay part of the conversation?

The difference between narrative and execution will become clearer, especially now that AI ROI is a genuine boardroom focus. The winners will be the organizations that can bring the full stack together – GPU supply, power, capital and a resilient supply chain – and prove it in operations and revenue, not just in marketing. Those who deliver will accelerate rapidly and emerge as credible long-term leaders. Those anchored in creative announcements will fall behind, and the gap between the two will continue to widen.

The limiting factors

The rules of computing have fundamentally changed. Since 2019, the computing power behind AI models has doubled roughly every 10 months. The advent of generative AI has accelerated this growth, as hardware lifecycles have compressed and NVIDIA’s extreme co-design has set a pace that will only get faster. Yet most data centers remain architected for legacy workloads, not the power density, cooling demands and traffic patterns of modern GPU compute.
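To make the doubling rate concrete, here is a quick back-of-the-envelope sketch (in Python) of what roughly 10-month doubling implies over a typical three-year hardware refresh cycle. The 10-month figure is the one cited above; the refresh-cycle length is an illustrative assumption.

```python
# Back-of-the-envelope: if AI compute doubles roughly every 10 months,
# how much does it grow over a given horizon?

DOUBLING_PERIOD_MONTHS = 10  # doubling rate cited in the article

def compute_growth(months: float, doubling_period: float = DOUBLING_PERIOD_MONTHS) -> float:
    """Growth factor after `months`, assuming a fixed doubling period."""
    return 2 ** (months / doubling_period)

# Growth over one assumed 3-year (36-month) hardware refresh cycle:
print(f"{compute_growth(36):.1f}x")  # prints "12.1x"
```

In other words, a facility designed around today’s hardware can expect roughly an order-of-magnitude jump in compute demands within a single refresh cycle, which is why fixed, point-in-time designs age so quickly.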

Traditional approaches will not keep pace with AI-driven change. Trying to run AI workloads in legacy environments is like dropping a rapidly improving Formula 1 engine into a family car; the chassis simply isn’t built to handle the performance, let alone the pace of change. And by the time a traditionally built data center comes online, the hardware has already evolved beyond its design parameters.

With billions already invested in traditional infrastructure, the industry faces an uncomfortable reality: absorb the cost of rebuilding, hope that older chips remain valuable, or fall steadily behind those who designed for AI-driven change from the outset. Retrofitting, importantly, is difficult. Progress requires purpose-built infrastructure, including direct-to-chip liquid cooling, high-bandwidth networking and redesigned power systems.

Building for constant change

The solution to this problem requires an entirely new approach to infrastructure, one that is already gaining momentum. The industry is shifting towards flexible, standardized units that can be deployed, upgraded and replaced in sections as requirements evolve. Rather than building fixed facilities optimized for a point in time, operators are increasingly deploying capacity in phases, adding higher-density segments as chip architectures and power requirements change.

This more flexible approach can now deliver GPU-optimized capacity in months rather than years. Offsite manufacturing and standardized components enable systems to be built and tested in controlled environments, accelerating deployment while reducing on-site complexity and the skilled labor required. Crucially, upgrades can be performed while the rest of the site remains operational, and decommissioned sections can be refurbished and redeployed, extending lifespan, reducing waste and maximizing revenue.

Adaptability is vital in an environment where performance requirements evolve faster than traditional data center lifecycles. Flexibility, not the rigidity we have come to accept in legacy builds, is now the defining requirement.

The reckoning is already here

The AI reckoning is no longer a future scenario; it is unfolding in real time. The separation between those data centers engineered for continuous change and those constrained by legacy assumptions is already visible, and it will accelerate from here. This is not simply a technology cycle; it is a structural reset of how infrastructure is conceived, financed and delivered. The organizations that embrace adaptability, align the full stack and execute at pace will define the next decade. The rest will not just fall behind. They will become irrelevant.

Harqs Singh, Chief Technology Officer and Co-Founder of InfraPartners, leads the company's development of AI data centers built using advanced offsite manufacturing. Previously COO of Technology and Data & AI at BlackRock, Harqs has deep expertise in digital infrastructure, AI and sustainability across global platforms. His experience across diverse sectors informs his approach and drives him to promote innovative business models and industry transformation.

Harqs is recognized for driving innovation across the sector and has played an active role in shaping industry best practices and building standards like the Data Center Maturity Model.