Liquid Cooling Comes to a Boil: Tracking Data Center Investment, Innovation, and Infrastructure at the 2025 Midpoint
The rise of AI workloads is accelerating a major transition in data center design: the move from air to liquid cooling. At the halfway mark of 2025, that shift is no longer experimental. It is operational, strategic, fully capitalized, and embedded in the infrastructure roadmaps of the industry’s most ambitious players. Driving the transformation are twin forces: GPU-powered AI accelerators and the physical limits of traditional air-cooled systems.
Recent developments from Aligned Data Centers, Colovore, Iceotope and Microsoft underscore the maturation of liquid cooling into a pillar of modern data center design. With massive investments, pioneering partnerships, and cutting-edge scientific breakthroughs, these organizations are not only scaling cooling capacity—they are rethinking the very chemistry, physics, and engineering behind sustainable AI infrastructure.
Colovore and the Capital Surge Behind the Liquid Edge
In many ways, Colovore’s story encapsulates the moment. Backed by King Street Capital Management, Colovore recently secured a $925 million debt facility through Blackstone funds to accelerate its expansion across U.S. metro markets. The company, headquartered in Silicon Valley, has spent over a decade honing its liquid-cooled, ultra-dense data center model—quietly preparing for the AI moment that has now arrived.
Colovore’s platform is purpose-built for high-performance computing (HPC) and AI inference workloads, offering per-rack cooling densities of up to 200kW. As generative AI transitions from training in hyperscale clouds to inference at the enterprise edge, Colovore’s ability to deploy scalable, liquid-cooled environments in proximity to population centers makes it uniquely positioned for growth.
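To see why 200kW racks push past what air can handle, the basic heat-balance relation Q = ṁ·cp·ΔT can be sketched in a few lines. The rack power comes from the density cited above; the 10K coolant temperature rise and the fluid properties are illustrative assumptions, not Colovore specifications.

```python
# Back-of-envelope coolant flow for a high-density rack, via Q = m_dot * c_p * dT.
# All figures are illustrative assumptions, not vendor specifications.

def mass_flow_kg_s(heat_w: float, cp_j_per_kg_k: float, delta_t_k: float) -> float:
    """Mass flow rate needed to carry heat_w watts at a delta_t_k temperature rise."""
    return heat_w / (cp_j_per_kg_k * delta_t_k)

RACK_HEAT_W = 200_000   # 200 kW rack, per the densities cited above
DELTA_T_K = 10.0        # assumed coolant temperature rise

# Water: c_p ~ 4186 J/(kg*K), density ~ 1 kg/L
water_kg_s = mass_flow_kg_s(RACK_HEAT_W, 4186, DELTA_T_K)

# Air: c_p ~ 1005 J/(kg*K), density ~ 1.2 kg/m^3
air_m3_s = mass_flow_kg_s(RACK_HEAT_W, 1005, DELTA_T_K) / 1.2

print(f"Water: ~{water_kg_s:.1f} kg/s (~{water_kg_s * 60:.0f} L/min)")
print(f"Air:   ~{air_m3_s:.1f} m^3/s of airflow for the same rack")
```

Under these assumptions, roughly 4.8 liters of water per second does the work of nearly 17 cubic meters of air per second, which is the physics behind the shift this article describes.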
President and Co-Founder Sean Holzknecht puts it plainly: “We remain focused on building the national backbone for this next phase—scalable, liquid-cooled data center platforms purpose-built for edge and core inference.”
Backed by King Street’s $28 billion capital base and a strategic roadmap targeting Reno, Chicago, and Austin, Colovore represents the convergence of financial heft and engineering specialization. Their deep expertise in immersion and direct-to-chip liquid cooling is no longer a niche differentiator—it’s becoming a required capability for servicing NVIDIA’s highest-density accelerators and the enterprise AI ecosystems they power.
Aligned and Lambda: Pairing Liquid-Cooled Infrastructure with GPU Cloud Demand
While Colovore’s strategy begins with hardware density and edge reach, Aligned Data Centers comes at the liquid cooling transition through a hyperscale-adaptive design philosophy. The company’s latest partnership with Lambda, the AI Developer Cloud, will see Lambda occupy the entirety of Aligned’s forthcoming DFW-04 facility in Plano, Texas, a purpose-built, liquid-cooled campus designed to support the most demanding GPU workloads.
Lambda, known for its close ties to NVIDIA and elite developer clientele, chose Aligned because of its proven track record in adaptive infrastructure and rapid delivery. The DFW-04 facility will support deployments accelerated by NVIDIA Blackwell and Blackwell Ultra architectures—hardware with power and heat characteristics that cannot be effectively cooled with traditional methods.
Aligned’s own innovation stack includes its DeltaFlow™ liquid cooling system, a patent-pending design aimed at delivering performance, sustainability, and interoperability across high-density workloads. This complements its modular AMI (Adaptive Modular Infrastructure) platform, enabling customers to transition from air to hybrid or fully liquid cooling without facility-level redesigns.
“Deploying AI at scale is no easy feat,” said Ken Patchett, Lambda’s VP of Data Center Infrastructure. “Aligned’s ability to rapidly deliver AI-ready infrastructure, along with its passion for supporting customers, is instrumental to meet the aggressive scale, quality and speed standards Lambda sets for its deployments.”
In the broader context, Aligned’s partnership with Lambda is notable for aligning infrastructure, cloud platform, and chip ecosystem into a cohesive growth strategy. It is emblematic of how liquid cooling is moving upstream—not just in technical execution, but in deal structuring, customer targeting, and long-term capital investment.
Microsoft: Engineering for the AI Era with Zero-Waste Cooling and AI-Powered R&D
If Colovore and Aligned highlight the infrastructure and investment frontiers of liquid cooling, Microsoft exemplifies the engineering and scientific ambition now driving the category. At its Build 2025 conference, Microsoft announced that all new data center designs going forward will utilize zero-waste water cooling systems—an ambitious commitment underpinned by real deployments.
As part of its transition to NVIDIA GB200-powered systems, Microsoft showcased closed-loop recirculation cooling infrastructure augmented by massive outdoor cooling banks—each equipped with 20-foot fans to thermally manage the circulating liquid without requiring ongoing water input. This marks a significant evolution from legacy evaporative systems that consumed large volumes of water, often in arid regions.
“Each of these 80 units includes a 20-foot fan that blows cool air on the liquid as it circulates,” said Scott Guthrie, Microsoft EVP for Cloud and AI. “All new Microsoft data center designs going forward will use a zero-waste water cooling method.”
But perhaps the most compelling glimpse into the future came not from mechanical engineering, but from Microsoft’s use of agentic AI in coolant discovery. In a blog post timed with the Build conference, Microsoft introduced Discovery, a new AI-driven R&D platform that enables collaborative scientific innovation through specialized agents and graph-based reasoning. Using this system, Microsoft’s researchers identified a promising PFAS-free coolant alternative in just 200 hours—a process that typically would take months or years.
This is not merely incremental efficiency. It represents a paradigm shift in how materials for next-generation cooling systems are discovered, evaluated, and deployed—putting AI at the center of sustainable infrastructure design.
Immersion Alliance: Engineered Fluids, Iceotope and Juniper Networks Join Forces for Scalable AI Cooling
As liquid cooling moves from niche to necessity in the face of surging AI workloads, a new triad of industry innovators is advancing the state of the art. Engineered Fluids, Iceotope, and Juniper Networks have formed a strategic partnership aimed at delivering scalable, sustainable, and performance-optimized infrastructure for high-density AI and HPC environments.
The collaboration merges Engineered Fluids’ single-phase dielectric immersion coolants with Iceotope’s chassis-level liquid cooling platforms and Juniper’s AI-native QFX Series network switches—creating a tightly integrated, thermally resilient data center solution designed for the most compute-intensive workloads. Together, the three companies are tackling the pressing demands of AI infrastructure: compute density, energy efficiency, network reliability, and environmental impact.
“Cooling is no longer a supporting role—it’s a strategic advantage,” said Lars Heeg, VP of Europe at Engineered Fluids. “By integrating precision immersion systems with performance networking, we’re helping operators future-proof against rising thermal and sustainability demands.”
Iceotope’s sealed chassis delivers liquid coolant directly to the hottest components—CPUs, GPUs, and memory—allowing for uniform, efficient heat transfer while protecting gear from dust, humidity, and thermal shock. Meanwhile, Juniper’s high-throughput QFX switches provide the low-latency, high-reliability networking required to keep accelerated compute pipelines running smoothly in dense configurations.
The solution stands out for its modularity and ease of deployment. Iceotope’s form factor-agnostic chassis integrates into standard racks and edge enclosures, while the immersion-ready architecture allows operators to lower cooling-related energy use by as much as 50%, reclaim heat for secondary uses, and reduce or eliminate water consumption—key goals for ESG-conscious operators.
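The cited energy savings can be put in context with a rough Power Usage Effectiveness (PUE) calculation. The load figures below are assumptions chosen for the example, not Iceotope measurements; the point is simply how halving cooling overhead moves the facility-level metric.

```python
# Illustrative Power Usage Effectiveness (PUE) impact of halving cooling energy.
# Load figures are assumptions for the example, not measured Iceotope data.

def pue(it_kw: float, cooling_kw: float, other_overhead_kw: float) -> float:
    """PUE = total facility power / IT power (lower is better; 1.0 is ideal)."""
    return (it_kw + cooling_kw + other_overhead_kw) / it_kw

IT_KW = 1000.0      # IT load
OTHER_KW = 80.0     # assumed distribution losses, lighting, etc.
COOLING_KW = 360.0  # assumed air-cooling overhead

baseline = pue(IT_KW, COOLING_KW, OTHER_KW)        # ~1.44
improved = pue(IT_KW, COOLING_KW * 0.5, OTHER_KW)  # ~1.26 after a 50% cut

print(f"Baseline PUE: {baseline:.2f} -> with 50% less cooling energy: {improved:.2f}")
```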
“AI is changing the thermal dynamic of every data center,” said Ian Ferguson, Director of Sales EMEA at Iceotope. “This collaboration enables rack-level efficiency without compromising scale or serviceability.”
For Juniper, the partnership reflects a broader strategy to embed AI-native intelligence and sustainability into core networking infrastructure. “Scalable AI workloads demand infrastructure that’s both fast and efficient,” said Manfred Felsberg, Senior Director at Juniper Networks. “With this alliance, we’re pushing performance and sustainability hand-in-hand.”
The announcement lands as data center operators continue to experiment with new cooling modalities to keep pace with the power density of modern AI training clusters. It also positions immersion cooling not just as a thermal solution, but as a system-wide optimization strategy—touching servers, switches, and sustainability metrics alike.
With deployments expected to ramp across both hyperscale and edge environments, the Engineered Fluids–Iceotope–Juniper partnership adds momentum to the case for immersion cooling as a mainstream pathway in the AI era.
A Turning Point for Data Center Cooling
Taken together, these announcements suggest that 2025 is the year when liquid cooling tipped from bleeding-edge to baseline. No longer limited to boutique deployments or experimental designs, liquid cooling is now a critical enabler for the AI era, supported by:
- Capital Scale: Colovore’s $925 million debt raise and Aligned’s ongoing hyperscale buildouts demonstrate that investors now view liquid-cooled infrastructure as core to the digital economy.
- Strategic Partnerships: Collaborations like Lambda + Aligned and NVIDIA + Colovore reveal a new layer of vertical integration between hardware platforms, cloud delivery models, and facility design.
- Sustainability Leadership: Microsoft’s closed-loop, zero-waste systems—and its AI-discovered PFAS alternative—push the boundaries of environmental stewardship in the data center space.
- Technology Convergence: The partnership between Iceotope, Engineered Fluids, and Juniper Networks exemplifies how immersion cooling is merging with AI-native networking to deliver end-to-end efficiency gains. By combining sealed-chassis liquid cooling, dielectric fluids, and high-performance switching fabric, the collaboration highlights how precision-engineered thermal and network solutions are co-evolving to support dense, sustainable AI workloads at scale.
As compute density continues to rise with the deployment of systems like NVIDIA Blackwell, GB200, and emerging custom AI silicon, air cooling becomes increasingly untenable. Liquid cooling offers a path forward, not just to cool hotter chips, but to rethink how data centers are built, operated, and even imagined.
And with AI driving demand for scale, speed, and sustainability, the liquid cooling revolution is only just beginning.
Data Center Liquid Cooling in 2025: More Major Developments and Industry Initiatives
As artificial intelligence workloads continue to escalate, the data center industry is witnessing a tectonic shift from traditional air cooling to advanced liquid cooling solutions. The transition is being driven by the need for higher efficiency, sustainability, and the ability to handle increased thermal loads. Below is a roundup of notable liquid cooling advancements and initiatives demonstrating this trend:
Hyperscale Innovations
Microsoft, Google, and Meta: These tech giants are collaborating on the Mt. Diablo initiative, aiming to standardize 400VDC power distribution in data centers. This move supports the deployment of 1MW liquid-cooled racks, increasing compute density and improving power efficiency by approximately 3%.
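The appeal of 400VDC at megawatt rack scale follows from basic circuit math: for a fixed power draw, current falls with voltage, and conductor loss falls with the square of current. The busbar resistance in the sketch below is an assumed value for illustration only, not a Mt. Diablo specification.

```python
# For a fixed power draw, current scales as I = P / V and conductor loss as
# I^2 * R, which is the electrical case for high-voltage DC distribution.
# The busbar resistance below is an assumed value for illustration only.

def current_a(power_w: float, voltage_v: float) -> float:
    return power_w / voltage_v

def conductor_loss_w(power_w: float, voltage_v: float, resistance_ohm: float) -> float:
    i = current_a(power_w, voltage_v)
    return i * i * resistance_ohm

RACK_W = 1_000_000   # 1 MW rack, per the Mt. Diablo target
BUSBAR_OHM = 0.0001  # assumed end-to-end distribution resistance

for v in (48, 400):
    i = current_a(RACK_W, v)
    loss_kw = conductor_loss_w(RACK_W, v, BUSBAR_OHM) / 1000
    print(f"{v:>3} VDC: {i:>8.0f} A, ~{loss_kw:.1f} kW lost in distribution")
```

With these assumed numbers, moving from 48VDC to 400VDC cuts the current from over 20,000A to 2,500A, shrinking distribution losses by nearly 70x; the real-world gain is smaller because converter stages dominate, but the direction is the same.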
Google: The company has deployed liquid-cooled Tensor Processing Unit (TPU) pods internally, and its latest generation represents a significant leap in infrastructure design for AI training workloads. First deployed in 2023 and scaled through 2024–2025, these pods use direct-to-chip liquid cooling to manage the thermal demands of TPUs running large language models and generative AI applications. According to Google, the adoption of liquid cooling has enabled it to quadruple compute density within existing data center footprints, while maintaining system reliability and minimizing energy overhead.
These TPU v5p pods power a variety of internal services and are also available to cloud customers via Google Cloud’s AI Hypercomputer, a tightly integrated stack of performance-optimized infrastructure that also includes custom interconnects and advanced networking. In a recent blog post, Google Cloud described the architecture as "purpose-built for large-scale AI training and inference workloads," crediting liquid cooling as a key enabler of performance and energy efficiency gains.
Strategic Partnerships and Investments
Supermicro and Fujitsu: The companies have announced a strategic collaboration to develop liquid-cooled systems optimized for high-performance computing (HPC) and AI workloads. The partnership focuses on energy-efficient designs and aims to minimize the environmental impact of data centers.
The long-term collaboration emphasizes green AI infrastructure, liquid cooling, and workload-optimized compute. It will center on developing next-generation data center platforms built on Fujitsu’s forthcoming Arm-based “FUJITSU-MONAKA” processor, targeting high performance and power efficiency for generative AI and HPC workloads. Supermicro, already a frontrunner in deploying liquid cooling at scale, will contribute its modular Building Block Solutions® approach to server design, enabling rapid deployment of rack-scale, liquid-cooled systems. A shared focus on “plug-and-play” liquid cooling and low-power architectures reflects both companies’ commitment to reducing environmental impact amid surging demand for dense AI training infrastructure.
The collaboration also aims to scale globally, pairing Fujitsu’s processor innovation and platform services with Supermicro’s agile GPU server manufacturing and integration capabilities. Supermicro CEO Charles Liang described the joint systems as “power efficient and cost-optimized,” designed for deployment across cloud, on-prem, and edge environments. Fujitsu’s NEDO-backed 2nm MONAKA chip—slated for release in 2027—underscores Japan’s ambitions in green computing leadership, and will be supported by Fujitsu’s subsidiary Fsas Technologies to offer generative AI solution stacks worldwide. With ambitions that span silicon to system, the Fujitsu-Supermicro alliance could emerge as a key player in shaping the architecture of liquid-cooled, AI-first data centers.
ZutaCore and Carrier: Carrier Global has made a strategic investment in ZutaCore, a specialist in waterless, direct-to-chip liquid cooling, underscoring the accelerating shift toward advanced thermal management for AI and high-density computing. Through its venture arm, Carrier Ventures, the company is deepening its play in the data center sector by aligning with ZutaCore’s HyperCool technology—a two-phase, waterless cooling system capable of enabling 100% heat reuse and significant energy savings.
This partnership reinforces Carrier’s commitment to building a full-stack portfolio for AI-driven infrastructure, following its recent launch of the Carrier QuantumLeap thermal platform. With global data center cooling demand expected to surge past $20 billion by 2029—and liquid cooling forecasted to grow at 39% annually—Carrier is positioning itself as a key integrator of next-gen, energy-efficient climate solutions.
For ZutaCore, the deal represents a validation of its vision to scale sustainable liquid cooling technologies for the world’s most demanding compute environments. With Carrier’s global reach and technical resources, the two companies aim to deliver integrated, low-emission cooling infrastructure built for the AI era. ZutaCore’s HyperCool platform enables direct-to-chip cooling without the use of water, a key differentiator as the industry pushes for both energy efficiency and water conservation.
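A rough comparison shows why two-phase systems like HyperCool are attractive: a boiling coolant absorbs its latent heat of vaporization per kilogram, versus only sensible heat for a single-phase loop at a given temperature rise. The fluid properties below are generic assumptions for illustration, not ZutaCore figures.

```python
# Heat carried per kilogram of coolant: single-phase (sensible heat) vs
# two-phase (latent heat of vaporization). Properties are generic assumptions,
# not figures for any specific commercial fluid.

def sensible_j_per_kg(cp_j_per_kg_k: float, delta_t_k: float) -> float:
    """Heat absorbed per kg of coolant through temperature rise alone."""
    return cp_j_per_kg_k * delta_t_k

LATENT_J_PER_KG = 100_000.0  # assumed ~100 kJ/kg for a generic dielectric fluid

# Single-phase dielectric: c_p ~1.1 kJ/(kg*K), assumed 10 K temperature rise
single = sensible_j_per_kg(1100.0, 10.0)

print(f"Single-phase: ~{single / 1000:.0f} kJ/kg; "
      f"two-phase: ~{LATENT_J_PER_KG / 1000:.0f} kJ/kg per unit of coolant moved")
```

Under these assumptions a boiling coolant moves roughly an order of magnitude more heat per kilogram, which translates to smaller pumps and lower circulation energy.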
Submer: The company has introduced autonomous robots for immersion cooling tanks, simplifying maintenance and server handling.
At Data Center Frontier, we talk the industry talk and walk the industry walk. In that spirit, DCF Staff members may occasionally use AI tools to assist with content. Elements of this article were created with help from OpenAI's GPT-4.

Matt Vincent
A B2B technology journalist and editor with more than two decades of experience, Matt Vincent is Editor in Chief of Data Center Frontier.