Thought Leaders
The Three Generations of Data Center Cooling—And Why Most Operators Are Building Yesterday’s Infrastructure

Three years ago, the data center industry debated whether liquid cooling would ever be necessary. Two years ago, most operators believed single-phase water would be the solution. Today, leading facilities are moving to next-generation cooling architectures, while many new builds are locking in systems that will be outdated within a few years.
This divergence is being driven by physics and processor roadmaps that are already visible through 2027. Together, they’re creating a split between operators who understand cooling is entering a new architectural era and those who may soon discover they’ve invested hundreds of millions in infrastructure that can’t support the next wave of AI processors.
The Three Generations of Cooling
Data center cooling has progressed through three distinct architectural eras, each defined by a new set of obstacles to overcome and by the rack densities it can support economically.
- Generation 1: Air Cooling (2000–2023): Peaked at 10–15kW per rack. Economics began breaking down around 2020 as AI workloads exceeded 20kW. By 2023, air cooling was largely obsolete for new high-density deployments.
- Generation 2: Single-Phase Liquid (2020–2027): The initial liquid cooling approach. Uses water or PG25 at high flow rates to remove heat through temperature change. Viable from 20–120kW per rack but showing strain above 150kW. Expected to reach its practical limits by 2027 as processors surpass 2,000W.
- Generation 3: Two-Phase + Advanced Heat Rejection (2024–2035+): Employs refrigerants that absorb heat through phase change rather than temperature change. Scales from 150kW per rack to well beyond. Enables new heat-rejection strategies from chip to atmosphere. Already being deployed by leading operators and expected to dominate by 2027–2028.
Each transition marks a break point—when physics and economics hit their ceiling simultaneously.
Generation 2’s Physics Problem
First-wave Generation 2 deployments are beginning to reveal the limits of single-phase cooling.
Water-based systems require flow rates of roughly 1.5 liters per minute per kilowatt. A 120kW rack needs about 180 liters per minute; at 250kW, that jumps to 375 liters per minute through cold plates with orifices measured in millimeters.
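The scaling is linear, which is exactly the problem. A short sketch of the flow-rate arithmetic above, using the 1.5 L/min-per-kW rule of thumb the article cites:

```python
# Single-phase water cooling: required flow scales linearly with rack power.
# The 1.5 L/min per kW figure is the rule of thumb quoted in the text.
LPM_PER_KW = 1.5

def required_flow_lpm(rack_kw: float) -> float:
    """Flow rate (liters/min) needed to cool a rack of the given power."""
    return rack_kw * LPM_PER_KW

for kw in (120, 250):
    print(f"{kw} kW rack -> {required_flow_lpm(kw):.0f} L/min")
# 120 kW rack -> 180 L/min
# 250 kW rack -> 375 L/min
```

Because the relationship is linear, every step up the processor roadmap pushes flow, velocity, and erosion up in lockstep.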
At GTC this year, racks connected to lines the size of fire hoses made the challenge visible. High flow rates create cascading issues. Water mixed with glycol oxidizes microfinned structures, and corrosion is compounded by flow velocities that erode weakened fins. Maintenance demands have surprised many operators: monthly filter changes rather than quarterly or twice-yearly, constant chemistry monitoring, and glycol “IV bags” attached to racks.
Failure rates are just as concerning. Internal field data suggests roughly 4% of water-cooled GPUs fail over a three-year lifecycle due to leaks. With racks holding $3–5 million worth of equipment, that loss fundamentally breaks Generation 2’s economics.
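A back-of-envelope expected-loss calculation shows why that 4% figure matters. The model below is a deliberate simplification (uniform failure exposure, midpoint of the quoted $3–5M rack value), not field methodology:

```python
# Rough expected leak-related loss per rack over a 3-year lifecycle.
# Inputs are the figures quoted above; midpoint rack value is an assumption.
FAILURE_RATE_3YR = 0.04       # ~4% of water-cooled GPUs fail over 3 years
RACK_VALUE_USD = 4_000_000    # midpoint of the $3-5M equipment range

expected_loss_per_rack = FAILURE_RATE_3YR * RACK_VALUE_USD
print(f"Expected 3-year loss per rack: ${expected_loss_per_rack:,.0f}")
# Expected 3-year loss per rack: $160,000
```

At fleet scale, a six-figure expected loss per rack compounds quickly across hundreds of racks.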
A 10MW facility analysis by Jacobs Engineering highlights another inefficiency: single-phase systems require colder facility water than Generation 3 systems, which increases both chiller capacity requirements and energy consumption.
What Sets Generation 3 Apart
Generation 3 represents a true architectural shift. Two-phase refrigerants capture heat through phase change, reducing flow rates by a factor of four to nine. Reduced fluid velocity significantly reduces infrastructure stress, minimizes cold plate erosion, and eliminates much of the maintenance burden that plagues Generation 2.
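The four-to-nine-fold flow reduction falls directly out of the thermodynamics: water absorbs heat sensibly (specific heat times temperature rise), while a refrigerant absorbs its latent heat of vaporization per kilogram. The sketch below uses representative property values, not any vendor's specification:

```python
# Why phase change cuts flow: sensible vs. latent heat per kg of coolant.
# Property values are representative assumptions for illustration only.
CP_WATER = 4.18           # kJ/(kg*K), specific heat of water
DELTA_T = 10.0            # K, a typical single-phase coolant temperature rise
H_FG_REFRIGERANT = 190.0  # kJ/kg, latent heat of a low-pressure refrigerant

heat_per_kg_water = CP_WATER * DELTA_T            # ~42 kJ absorbed per kg
flow_ratio = H_FG_REFRIGERANT / heat_per_kg_water  # mass-flow reduction
print(f"Mass-flow reduction vs. water: ~{flow_ratio:.1f}x")
```

With different refrigerants and allowable temperature rises, the ratio shifts, which is why the article quotes a range of four to nine rather than a single number.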
Refrigerants also enable new heat-rejection designs, such as refrigerant-to-CO₂ and refrigerant-to-refrigerant systems, that optimize cooling from the chip to the atmosphere. These designs are already in production, demonstrating Generation 3’s scalability and economic efficiency.
When Jacobs Engineering—responsible for more than 80% of global data center MEP designs—created side-by-side 10MW reference models, they removed vendor bias from the comparison.
Findings:
- CapEx: $10.39M single-phase vs. $10.38M two-phase
- Annual OpEx: $1.04M vs. $679K (35% reduction)
- Five-Year TCO: $15.6M vs. $13.8M (12% savings)
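The TCO figures reconcile cleanly as CapEx plus five years of OpEx, which is worth verifying since the percentages come from rounded numbers:

```python
# Reconstructing the five-year TCO comparison from the quoted figures
# (all values in millions of USD, undiscounted for simplicity).
def five_year_tco(capex_m: float, annual_opex_m: float) -> float:
    return capex_m + 5 * annual_opex_m

single_phase = five_year_tco(10.39, 1.04)   # ~$15.6M
two_phase = five_year_tco(10.38, 0.679)     # ~$13.8M
savings = 1 - two_phase / single_phase      # ~12%
print(f"${single_phase:.1f}M vs ${two_phase:.1f}M ({savings:.0%} savings)")
# $15.6M vs $13.8M (12% savings)
```

Note the simple model ignores discounting; a discounted-cash-flow version would narrow the gap slightly but not change the conclusion.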
The CapEx parity surprised many who expected a premium for two-phase. Current two-phase systems require more CDUs, but single-phase designs need complex row manifolds, robust leak detection, and harmonic filtration—complexities avoided with current two-phase CDUs. Next-generation CDUs arriving in 2026 will further reduce costs, making Generation 3 even more economical to deploy.
The OpEx advantage stems from thermodynamics. Two-phase systems maintain identical chip temperatures while using warmer facility water, about 8°C higher on average. Each additional degree cuts annual energy use by roughly 4%, translating to the 35% OpEx reduction Jacobs documented across climates from Phoenix to Stockholm.
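As a sanity check on the per-degree rule, compounding 4% savings per degree over an 8°C delta lands in the high-20s percent; the additional gap to the documented 35% reflects climate, pump, and chiller-curve effects the simple rule ignores. The model below is indicative only:

```python
# Simplified compounding model: ~4% cooling-energy savings per degree C
# of warmer facility water. Real savings vary with climate and equipment,
# so treat this as directional, not as the Jacobs methodology.
SAVINGS_PER_DEGREE = 0.04
DELTA_C = 8  # two-phase runs ~8 degrees C warmer facility water

remaining = (1 - SAVINGS_PER_DEGREE) ** DELTA_C
print(f"Energy reduction from +{DELTA_C} deg C water: ~{1 - remaining:.0%}")
# Energy reduction from +8 deg C water: ~28%
```

Even this conservative toy model captures most of the documented 35% reduction, which is why warmer-water operation dominates the OpEx comparison.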
Forward-thinking operators are going a step further, converting that thermal margin into about 5% more compute capacity within the same power envelope. In a world where every GPU represents revenue and power is constrained, that advantage becomes a competitive differentiator.
The Silicon Roadmap Forces the Issue
The shift to Generation 3 isn’t being driven by cooling vendors—it’s dictated by processor design.
NVIDIA’s Rubin architectures are expected to exceed 2,000W per processor. AMD’s MI450 is on a similar trajectory. Every major chipmaker is packing more performance into smaller footprints, driving thermal density sharply upward.
The key challenge is heat flux—the concentration of heat measured in watts per square centimeter. As heat flux rises, Generation 2 solutions hit physical and economic limits. Flow rates become destructive, temperature deltas untenable, and system costs unsustainable.
Generation 3 was built for this reality. Leading operators are already specifying 250kW racks with clear paths to 1MW+. Waiting to “see what wins” may feel conservative, but it’s the riskiest approach. The silicon roadmap is fixed; physics won’t bend. The only decision left is when to act.
The Brownfield Dilemma
Billions are being invested right now in Generation 2 infrastructure that will be constrained within 36 months. Facilities designed today around single-phase water will struggle to support 2027-class processors. Retrofitting later costs far more than building with Generation 3 today.
For existing sites, refrigerant-to-air systems can serve as a bridge, but they aren’t a long-term solution. The industry’s direction is clear: Generation 3 architectures will anchor the next decade of new builds.
A Generational Choice
Every cooling transition has looked sufficient until the next generation made it obsolete. Operators who embraced liquid cooling early—adopting it in 2020–2021 instead of 2023—gained nearly two years of deployment advantage.
The same inflection is underway again. The physics are proven. The economics are validated by independent analysis. Processor roadmaps make the transition inevitable.
The question isn’t whether the change will happen—it’s whether you’ll lead it or be forced into it once Generation 2 reaches its limits.
Data centers designed today will operate well into the 2030s. Building with Generation 3 architectures keeps them viable for the AI era rather than letting them become constrained assets within their first years of operation.
The future of data center cooling is a generational transformation—and Generation 3 is already here.