3 Critical Considerations for Future Liquid Cooling Deployments

Ben Sutton of CoolIT Systems explains how to engineer server, rack and facility level solutions to stay ahead of rising cooling requirements.
Nov. 20, 2025
4 min read

The debate is over. Liquid cooling isn’t optional; it’s inevitable. AI workloads have shattered the limits of air cooling. Now the question is not if but how to deploy liquid cooling most effectively.

AI-driven rack densities have rewritten the rules of computer systems and, with them, data center design, pushing traditional cooling to its limits. To keep pace, operators must rethink their entire approach to thermal management.

This article examines the issue across three levels: server, rack, and facility, focusing on how to engineer solutions that stay ahead of rising cooling requirements.

  1. Server Level: Heat Capture

AI accelerators, including GPUs and specialized ASICs, are driving the transition to liquid cooling. Managing the heat flux and thermal design power (TDP) of these accelerators is a key part of server design. In 2023, AI accelerators’ heat flux values exceeded 50 W/cm² and TDPs exceeded 500 W, levels that already demanded liquid cooling. Today, TDP is well above 1 kW per chip and, while heat density is not rising quite as fast, it continues to climb.

Peripheral components are also becoming a thermal challenge. As rack densities rise, the typical hybrid configuration, with coldplates on the processors and air cooling for the peripheral components, is becoming less feasible.

The graph below shows that if a 500 kW rack has 90% of its heat captured by coldplates, 50 kW still remains to be dissipated from other components. That is a stretch for even the best air-cooling setups. Coldplates are therefore evolving to capture heat from all components, not just the processors.
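The arithmetic behind that example is simple but worth making explicit, since the residual air load scales linearly with rack power. A minimal sketch, using the article's 500 kW / 90% figures (the function name is illustrative, not from any vendor tool):

```python
def residual_air_load_kw(rack_kw: float, capture_ratio: float) -> float:
    """Heat (kW) that must still be removed by air when coldplates
    capture `capture_ratio` of the rack's total load."""
    return rack_kw * (1.0 - capture_ratio)

# The article's example: 500 kW rack, 90% liquid heat capture.
print(round(residual_air_load_kw(500, 0.90), 1))  # -> 50.0 kW left for air
# At the >95% target the residual load halves:
print(round(residual_air_load_kw(500, 0.95), 1))  # -> 25.0 kW
```

Even at 95% capture, a 500 kW rack leaves a residual air load comparable to an entire legacy rack, which is why full-heat-capture loops are gaining ground.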

To keep pace, the industry is now targeting >95% heat capture through liquid, and more designs are expected to emerge resembling the full-heat-capture coldplate loop illustrated below. CoolIT has long produced near-100% heat capture direct liquid cooling coldplate designs across several generations of supercomputer systems.

  2. Rack & Row Level: Flow Rates

Liquid cooling infrastructure must scale with AI. That means planning for larger pipe diameters and centralized CDUs. Flow rates of 1.0–1.5 LPM/kW are becoming the industry standard, which will drive the need for larger, lower-pressure-drop quick disconnects. Initially this means a move from 1” to 2” connectors, and soon enough it will push beyond that. Rack-level connectors must be sized to handle future flow demands while minimizing pressure drop.
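The 1.0–1.5 LPM/kW rule of thumb can be sanity-checked against basic thermodynamics: a given flow allocation implies a coolant temperature rise via Q = ṁ·cp·ΔT. A sketch, assuming pure-water properties (real facility coolants such as PG25 will differ somewhat):

```python
def rack_flow_lpm(rack_kw: float, lpm_per_kw: float = 1.0) -> float:
    """Total rack coolant flow from the LPM-per-kW allocation cited above."""
    return rack_kw * lpm_per_kw

def implied_delta_t_c(lpm_per_kw: float, cp_j_per_kg_k: float = 4186.0,
                      rho_kg_per_m3: float = 1000.0) -> float:
    """Coolant temperature rise (degrees C) implied by a flow allocation,
    assuming pure water; glycol mixes raise this figure slightly."""
    kg_per_s_per_kw = lpm_per_kw / 60.0 * rho_kg_per_m3 / 1000.0
    return 1000.0 / (cp_j_per_kg_k * kg_per_s_per_kw)  # 1 kW / (cp * mdot)

# A hypothetical 500 kW rack at both ends of the range:
print(rack_flow_lpm(500, 1.0))             # -> 500.0 LPM
print(rack_flow_lpm(500, 1.5))             # -> 750.0 LPM
print(round(implied_delta_t_c(1.0), 1))    # ~14.3 C rise at 1.0 LPM/kW
print(round(implied_delta_t_c(1.5), 1))    # ~9.6 C rise at 1.5 LPM/kW
```

The implied 10–15 °C coolant rise is consistent with typical single-phase direct liquid cooling operating envelopes, which is part of why this flow range is converging as a standard.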

As the industry moves towards these larger skidded CDUs, pipe diameters will also rise. This is a key consideration for data center build-outs, as larger-diameter piping will help future-proof the facility. The image below shows how this pipe sizing could evolve.
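Pipe sizing follows directly from the flow numbers above once a maximum coolant velocity is chosen. A minimal sketch, assuming incompressible single-phase flow and a 2 m/s velocity cap (a common rule of thumb for limiting erosion and noise, not a CoolIT specification):

```python
import math

def min_pipe_diameter_mm(flow_lpm: float, max_velocity_ms: float = 2.0) -> float:
    """Smallest inner pipe diameter (mm) that keeps coolant velocity
    below `max_velocity_ms` for a given volumetric flow.
    From continuity: A = Q / v, then D = sqrt(4A / pi)."""
    q_m3_per_s = flow_lpm / 1000.0 / 60.0
    area_m2 = q_m3_per_s / max_velocity_ms
    return math.sqrt(4.0 * area_m2 / math.pi) * 1000.0

# A hypothetical row loop serving 2 MW at 1.0 LPM/kW -> 2000 LPM:
print(round(min_pipe_diameter_mm(2000), 1))  # ~145.7 mm, roughly 6-inch pipe
```

Because diameter scales with the square root of flow, doubling row capacity requires only ~41% more diameter, which is why modestly oversizing piping at build-out is a cheap form of future-proofing.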

  3. Facility Level: Multi-MW CDUs

CDUs are evolving rapidly. What began as rack-based units (<100 kW) has scaled to row-based, megawatt-class units such as CoolIT’s CHx2000. These are more than sufficient for today’s deployments, but future units are likely to become larger facility-level systems, moved outside the white space and deployed much like power infrastructure.

System design must prioritize redundancy to maintain uptime, and this will likely be managed at the system or facility level. CDUs must maintain performance through transient events and respond quickly to changing AI workloads. Both flow and pressure from the CDU should be maximized to future-proof the infrastructure.

3 Things Operators Must Plan For

To implement liquid cooling successfully at scale, operators should:

  1. Choose pipe diameters wisely. Plan for future rack densities. Undersized pipes will become bottlenecks.
  2. Consider future CDU form factors. Move beyond rack-based CDUs. Facility-level CDUs offer scalability and easier maintenance.
  3. Plan for 100% heat capture. Partial air cooling will soon not be viable. Full liquid heat capture will become the new standard.

Looking Ahead: A Liquid-First Future

AI is reshaping the data center landscape. Liquid cooling is no longer a niche; it is the foundation of scalable, high-performance computing infrastructure. Single-phase liquid cooling has become the de facto standard for high-density AI racks and will prove its value as deployments scale, just as air cooling did before it.

About the Author

Ben Sutton

Ben Sutton is a Product Marketing Manager at CoolIT Systems, supporting product positioning for liquid cooling solutions across high-performance computing (HPC), AI data centers and enterprise environments worldwide.

CoolIT Systems specializes in scalable liquid cooling solutions for the world’s most demanding computing environments, partnering with global processor and server design leaders to develop the most efficient and reliable liquid cooling solutions for their leading-edge products. 
