For AI and HPC, Data Center Liquid Cooling Is Now

April 5, 2024
No longer on the horizon, liquid cooling technology moves front and center thanks to NVIDIA and massive data center power demands.

Here at Data Center Frontier, we’ve been talking about the need for, and the issues with, liquid cooling and support for high rack densities for quite a while.

While no one has really argued that either was unnecessary, much of the discussion has revolved around whether both were strictly niche needs, and still a ways down the pike.

Both of those are reasonable points. Much of what goes into large-scale data centers is relatively stable: what worked last year will continue to work next year, and major changes aren’t going to happen outside of normal hardware and facility replacement cycles.

That being said, the rapidly growing niche market surrounding AI is making people rethink their development plans.

In many cases the ability to support AI and HPC operations in the data center is no longer an add-on, but an integral part of data center planning. And being able to support liquid cooling solutions means getting a clear understanding of what’s happening right now.

The View from NVIDIA GTC

At NVIDIA’s GTC conference in March 2024, one of the hardware announcements was the company’s latest DGX AI supercomputer, a two-rack cluster based on the NVIDIA GB200 NVL72 liquid-cooled system, with each rack containing 18 NVIDIA Grace CPUs and 36 NVIDIA Blackwell GPUs, connected by fourth-generation NVIDIA NVLink switches.

While NVIDIA didn’t announce power consumption figures, industry estimates place the requirement at approximately 50 kW per rack, and it is entirely possible those estimates are conservative.
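
To see why numbers in that range are plausible, here is a rough back-of-envelope sketch in Python. The superchip and rack counts come from the configuration described above; the per-superchip wattage and overhead fraction are illustrative assumptions, not NVIDIA figures.

```python
# Back-of-envelope estimate for the two-rack GB200 NVL72 cluster described above.
# ASSUMPTIONS: ~2.7 kW per GB200 superchip and 10% rack-level overhead for
# switches, NICs, and fans -- illustrative numbers, not official NVIDIA figures.

RACKS = 2
SUPERCHIPS_PER_RACK = 18      # each GB200 superchip: 1 Grace CPU + 2 Blackwell GPUs
GPUS_PER_SUPERCHIP = 2

KW_PER_SUPERCHIP = 2.7        # assumed nominal board power
OVERHEAD_FRACTION = 0.10      # assumed rack-level overhead

total_gpus = RACKS * SUPERCHIPS_PER_RACK * GPUS_PER_SUPERCHIP
kw_per_rack = SUPERCHIPS_PER_RACK * KW_PER_SUPERCHIP * (1 + OVERHEAD_FRACTION)

print(f"GPUs across both racks: {total_gpus}")           # 72 -- hence "NVL72"
print(f"Estimated load per rack: {kw_per_rack:.1f} kW")  # ~53 kW, in line with estimates
```

Under those assumptions, a single rack lands in the low 50s of kilowatts, several times what a typical air-cooled enterprise rack draws, which is why direct liquid cooling stops being optional at this density.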

If you are a hyperscaler or a large colocation provider, you’re already providing high-density racks and liquid cooling for some portion of your data center. But what about everybody else?

Wholesale changes to your physical infrastructure are expensive and unlikely in the short term. Many vendors are aware of that, and are offering ways to support your own AI infrastructure, from a couple of "U" in your existing racks to full support at scale. Many of these vendors announced products to fit just these needs at GTC.

Is It All About the Racks?

In its blog, NVIDIA has identified many of the vendors planning to introduce hardware to fit these needs. And quite a few of those system vendors were demonstrating their complete rack hardware on the show floor.

Prior to the event, NVIDIA had announced that it would be showcasing more than 500 servers from its partners, featuring the NVIDIA GH200 Grace Hopper Superchip, in 18 racks in the MGX pavilion.

 

Supermicro demonstrating its rack solution at NVIDIA GTC.

So Many Options Will Give Customers Their Favorite Thing: Choice

But there is an even broader selection of companies making their mark specifically in the liquid cooling market, showcasing their ability to take on the highest-density, most power-intensive applications. Some examples of the technologies showcased included:

  • At the chip level, ZutaCore made a significant impression at GTC with its direct-to-chip, waterless, two-phase liquid cooling system designed for AI and HPC workloads. By partnering with a broad selection of vendors, from Dell to Intel to Rittal, to bring its cooling technology to those companies’ HPC and AI solutions, ZutaCore could be a standard bearer for how direct-to-chip cooling solutions will impact the industry.
  • Quanta Cloud Technology (QCT) was there with the latest iteration of its QCT CoolRack, a rack-level direct-to-chip cooling solution. The company announced that one of its intelligent liquid cooling rack systems could support 16 of its liquid-cooled server systems, each with two GH200 Superchips.
  • Wiwynn, which also announced rack-level AI solutions supporting the latest Superchips and high-density computing, drew attention to its purpose-built liquid-cooling management system, the UMS 100 (Universal Management System): a modular, open design that works with liquid cooling environments ranging from racks to immersion systems, focused on real-time monitoring and cooling energy optimization (a hypothetical sketch of that kind of control loop follows this list).
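
For a sense of what "real-time monitoring and cooling energy optimization" means in practice, here is a minimal, hypothetical sketch of the kind of control loop a rack-level liquid-cooling manager runs. Every name, setpoint, and stub below is invented for illustration; this is not Wiwynn’s UMS 100 API.

```python
import random
import time

# Hypothetical rack-level cooling control loop: read the coolant return
# temperature, then trim pump speed to keep that temperature in band while
# spending as little pump energy as possible. All names, thresholds, and
# the sensor/actuator stubs are invented; this is NOT Wiwynn's UMS 100 API.

RETURN_LIMIT_C = 45.0           # assumed maximum allowed coolant return temperature
PUMP_MIN_PCT, PUMP_MAX_PCT = 20, 100

def read_return_temp_c() -> float:
    """Stub standing in for a real sensor poll (e.g., over Redfish or SNMP)."""
    return random.uniform(35.0, 50.0)

def set_pump_speed(percent: int) -> None:
    """Stub standing in for a real actuator command to the CDU pumps."""
    print(f"pump speed -> {percent}%")

def control_step(pump_pct: int) -> int:
    temp = read_return_temp_c()
    if temp > RETURN_LIMIT_C:                        # too hot: move more coolant
        pump_pct = min(PUMP_MAX_PCT, pump_pct + 10)
    elif temp < RETURN_LIMIT_C - 5.0:                # comfortable headroom: save energy
        pump_pct = max(PUMP_MIN_PCT, pump_pct - 5)
    set_pump_speed(pump_pct)
    return pump_pct

if __name__ == "__main__":
    speed = 50                                       # starting pump speed, percent
    for _ in range(5):                               # a few iterations for demonstration
        speed = control_step(speed)
        time.sleep(1)
```

Real systems layer far more onto this, including per-loop flow and pressure telemetry, leak detection, and coordination with facility water, but the monitor-decide-actuate cycle is the core of cooling energy optimization.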

As these three examples indicate, research and development into advanced liquid cooling systems is ongoing across many technology areas.

The use cases range from designing liquid cooling into new data centers, to retrofitting existing facilities, to deploying localized AI server implementations almost anywhere.

Bottom line: The effort to make liquid-cooled solutions practical across the entire market is moving quickly, and cooling is no longer a stumbling block to putting an AI solution where your business needs it.

 


About the Author

David Chernicoff

David Chernicoff is an experienced technologist and editorial content creator with the ability to see the connections between technology and business, figure out how to get the most from both, and explain the needs of business to IT and of IT to business.
