Funding
Iceotope Secures $26M Series B as AI Infrastructure Pushes Cooling Systems to Their Limits

UK-based liquid cooling specialist Iceotope has raised $26 million in a Series B funding round as demand for AI infrastructure continues to strain traditional data center cooling methods.
The round was led by Barclays Climate Ventures and Two Seas Capital, with participation from existing investors including Edinv, ABC Impact, Northern Gritstone, and British Business Bank.
The company said the new capital will be used to expand engineering and product development, grow its patent portfolio, and deepen partnerships across the AI infrastructure ecosystem. The funding arrives at a pivotal moment for the industry, as increasingly power-hungry AI accelerators and GPU clusters push rack densities toward levels that conventional air cooling systems struggle to handle.
AI’s Growth Is Creating a Thermal Problem
The rapid expansion of generative AI has created an infrastructure challenge that extends far beyond computing power alone. Modern AI servers consume enormous amounts of electricity, and the heat generated by dense GPU deployments has become one of the most significant bottlenecks in scaling AI data centers.
Industry researchers at SemiAnalysis project that liquid-cooled AI accelerator capacity could grow from roughly 3 GW to 40 GW within two years as hyperscalers and colocation providers scale AI deployments.
Iceotope believes conventional cooling architectures are nearing their practical limits. While direct-to-chip liquid cooling has gained traction, the company argues that cooling only processors is no longer sufficient for next-generation AI systems, where memory, storage, networking, and power delivery components also generate substantial heat loads.
The challenge is even more pronounced outside hyperscale data centers. As AI workloads move into enterprise environments and edge deployments, organizations must run high-performance systems in locations that lack specialized cooling infrastructure.
A Different Approach to Liquid Cooling
Founded in 2005, Iceotope began as a research-focused “green computing” venture before evolving into a specialist in precision liquid cooling for AI infrastructure, HPC environments, and edge computing.
Rather than relying solely on cold plates attached to processors, Iceotope uses what it calls a “direct-to-everything” cooling approach. Its systems circulate non-conductive dielectric fluid through sealed chassis designs that cool all major heat-producing components inside the server.
The company says this design allows infrastructure to run more efficiently while reducing water consumption and lowering overall energy usage compared to traditional air cooling systems. Iceotope also emphasizes that its cooling systems are designed to operate in a wide range of environments, including enterprise deployments, industrial settings, and edge locations where thermal management is particularly difficult.
According to the company, its technology can reduce energy usage by up to 40% and water consumption by up to 96% compared to conventional cooling methods.
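To put those stated ceilings in perspective, a back-of-envelope sketch of what "up to 40% energy and up to 96% water" reductions would mean for a hypothetical facility. The baseline figures below (a 10 MW site with cooling at roughly 30% of load and 100,000 m³ of annual water use) are illustrative assumptions for the arithmetic, not Iceotope data:

```python
def cooling_savings(cooling_energy_mwh, water_m3,
                    energy_cut=0.40, water_cut=0.96):
    """Apply Iceotope's stated maximum reduction percentages to a
    baseline annual cooling footprint. Returns (MWh saved, m^3 saved)."""
    return cooling_energy_mwh * energy_cut, water_m3 * water_cut

# Hypothetical 10 MW data center: 10 MW x 8,760 h x ~30% of load
# spent on cooling ≈ 26,280 MWh/year, plus 100,000 m^3 of water.
energy_saved, water_saved = cooling_savings(26_280, 100_000)
print(f"Energy saved: {energy_saved:,.0f} MWh/year")  # 10,512
print(f"Water saved:  {water_saved:,.0f} m^3/year")   # 96,000
```

Even as a rough upper bound, the scale of the water figure illustrates why operators facing local water-use scrutiny are watching sealed dielectric systems closely.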
Patents and Ecosystem Partnerships
A major part of Iceotope’s strategy revolves around intellectual property and ecosystem integration. The company recently announced that it surpassed 200 granted and pending patents tied to liquid cooling technologies, including chassis architecture, dielectric fluid systems, and rack-scale thermal management.
Iceotope has also been building partnerships with hardware manufacturers, hyperscalers, and infrastructure providers. Its technology has been showcased alongside systems from companies such as Intel, HPE, and Giga Computing in recent years.
The broader AI infrastructure market is increasingly focused on sustainability as well as performance. Cooling already accounts for a significant share of data center energy consumption, and operators are under pressure to reduce both power usage and water requirements as AI deployments scale globally.
Cooling Becomes Foundational to the Future of AI Infrastructure
As AI systems continue to scale, thermal management is becoming one of the defining engineering constraints of modern computing. Future AI clusters are expected to consume dramatically more power than traditional enterprise infrastructure, forcing the industry to reconsider how servers, networking equipment, and accelerators are physically designed and deployed.
This shift could have implications far beyond hyperscale data centers. Advanced cooling technologies may eventually influence where AI systems can operate, enabling high-density compute in environments that were previously impractical due to heat, noise, or power limitations. That includes industrial sites, hospitals, telecom infrastructure, defense environments, and edge deployments where conventional cooling systems are difficult to maintain.
The transition may also reshape the economics of AI infrastructure itself. As energy consumption rises alongside AI adoption, efficiency improvements in cooling could become increasingly important for controlling operational costs, reducing water usage, and meeting environmental targets. Over time, thermal management may evolve from a backend engineering problem into a major competitive factor influencing how and where AI services are delivered.