Supermicro Rises to Meet Massive Growth in Data Center Liquid Cooling Demand

July 25, 2024
Analysts are forecasting massive growth in the liquid cooling market, and Supermicro is showing how the server hardware will be built to meet it.

When Omdia released its data center cooling report last month, it was no surprise to the industry that the data center cooling market grew beyond expectations.

After all, with the industry's huge demands for power, and with more data centers focusing on high-density computing and AI, both of which carry significant cooling demands, the market's recognition of the need to cool this proliferation of data centers is certainly a bellwether for the industry.

But what is really defining the thermal management industry isn't so much its record reported growth to $7.67 billion as the fact that the same report points out that the market's growth was significantly constrained by production capacity.

With supply chain constraints reportedly easing, the thermal management industry is likely to continue to report record growth. Most notable within that growth is the liquid-cooling business, which is expected to hit or exceed $1 billion, or roughly 17 percent of the market, according to the Omdia report.

The Liquid Cooling Inflection Point

The thermal management market is projected to grow by almost 80 percent over the next four years, and the liquid cooling component will almost double its share of the market, with projections showing liquid cooling taking a third of the entire market.

In new research released this month, industry analyst Dell’Oro Group proclaims that the liquid cooling market is "set to go mainstream" and could surpass $15 billion over the next five years.

“After tracking the data center liquid cooling market for five years, it’s finally transitioning from a niche technology deployed in specific segments of the market to mainstream applicability,” observed Lucas Beran, Research Director at Dell’Oro Group.

Beran added, “Historically, liquid cooling vendors touted increased efficiency and sustainability as factors behind the technology’s adoption. While those benefits remain true, it’s proved to be the increased thermal management performance capabilities, meeting the particularly demanding thermal requirements of high-end processors and accelerated servers, that is the current driving force behind its adoption."

Dell'Oro forthrightly states that the data center liquid cooling market has hit an inflection point, and expects mainstream adoption of liquid cooling technology starting in the second half of 2024. A press release adds that this forecast is expected to materialize over the next five years (2024-2028) in a market opportunity totaling more than $15 billion.

DLC for HPC

According to Dell'Oro's Beran, as this technology adoption occurs, it’s single-phase direct-to-chip liquid cooling (DLC) deployments that are scaling first. Beran said this dynamic is the result of long-standing adoption in the high-performance computing (HPC) industry that has helped establish a more mature vendor ecosystem and end-user know-how to deploy and service the technology. 

Beran also emphasized that NVIDIA has specified single-phase DLC as the cooling technology to support its upcoming GB200 compute nodes. Yet he was quick to add that other forms of liquid cooling are emerging in the rapidly growing liquid cooling market.

Dell'Oro's assessment is that both single-phase immersion and two-phase DLC are undergoing testing, validation, and proof of concept work, which is materializing in growing pipelines for those vendors. Two-phase immersion, on the other hand, is facing an uphill battle toward adoption, as it remains particularly challenged by the regulatory environment surrounding PFAS fluid use, said Beran.

Report Chronicles Top 3 Data Center Liquid Cooling Vendors, Leading LC Tech

Dell'Oro's Data Center Liquid Cooling Advanced Research Report found that CoolIT Systems, Boyd, and Motivair were the top three vendors in data center liquid cooling revenues for 2023.

The research found that single-phase DLC was the year's leading data center liquid cooling technology, a trend expected to continue throughout the report's forecast period. However, two-phase DLC and single-phase immersion revenues are also forecast to grow materially during this time.

Meanwhile, the report finds that the enterprise customer segment, including HPC, was the leading customer segment for data center liquid cooling in 2023.

However, the service provider customer segment, encompassing the analyst's Top 10 Cloud, Rest-of-Cloud, Colocation, and Telco designations, is forecast to significantly outpace the growth of enterprises during the forecast period.

Air-assisted liquid cooling and liquid-to-liquid heat exchange types are both forecast by Dell'Oro to grow at significant double-digit rates during the forecast period. By 2028, these technologies are forecast to account for more than a third of the overall data center thermal management market.

If only because of the explosive growth of AI hardware and the demand for generative AI solutions, liquid cooling finds itself ideally positioned to ride the cutting edge of data center and technology advances.

Nowhere was this more obvious than at NVIDIA's AI announcements in March of this year, where the company and its partners said they would be supporting and shipping liquid-cooled DGX AI supercomputers and clusters almost immediately.

So Who’s Leading the Hardware Charge?

One vendor that demonstrated support for the NVIDIA Blackwell platform at the March 2024 announcement is Supermicro. The company has been developing rack-scale liquid cooling systems for a few years at this point, and announced at the beginning of June that it would have rack-scale, plug-and-play liquid-cooled AI SuperClusters supporting the NVIDIA Blackwell platform as well as H100/H200 GPUs.

According to Charles Liang, president and CEO of Supermicro: "Data centers with liquid-cooling can be virtually free and provide a bonus value for customers, with the ongoing reduction in electricity usage."

Liang continued, "Our solutions are optimized with NVIDIA AI Enterprise software for customers across industries, and we deliver global manufacturing capacity with world-class efficiency. The result is that we can reduce the time to delivery of our liquid-cooled or air-cooled turnkey clusters with NVIDIA HGX H100 and H200, as well as the upcoming B100, B200, and GB200 solutions. From cold plates to CDUs to cooling towers, our rack-scale total liquid cooling solutions can reduce ongoing data center power usage by up to 40%."

3 New Silicon Valley Manufacturing Plants 

And they weren't kidding about delivering these systems: two weeks after the hardware announcement, Supermicro said it was adding three new manufacturing facilities in Silicon Valley, specifically to support the growth of its AI and enterprise rack-scale liquid-cooled solutions.

Liang expects that “liquid-cooled data centers will grow from historically less than 1% to an expected 15% and up to 30% of all data center installations in the next two years. This expansion positions us to capture the majority share of that growth.”

These numbers align with Omdia's previously mentioned research. Supermicro plans to maintain inventory of these liquid-cooled systems, reducing customer lead times for rack cluster deployments from years to only weeks.

Their investment in manufacturing facilities is a good indication of the direction they believe the industry is going.

What Was Once Years in Development Is Rapidly Becoming an Off-the-Rack Solution

The Supermicro CDU (cooling distribution unit) at the heart of the company's liquid-cooled racks is capable of supporting up to 100 kW.
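For a rough sense of what a 100 kW rating implies in plumbing terms, the flow a CDU must deliver follows from the basic heat-balance relation Q = ṁ × c_p × ΔT. The sketch below is an illustrative back-of-envelope calculation, not a Supermicro specification: it assumes water-like coolant properties and a 10°C supply-to-return temperature rise.

```python
# Back-of-envelope coolant flow for a 100 kW heat load (illustrative only).
# Assumptions: water-like coolant (c_p ~= 4.18 kJ/kg.K, ~1 kg per liter)
# and a 10 degC supply-to-return temperature rise; not vendor figures.

heat_load_kw = 100.0        # heat load handled by the CDU
cp_kj_per_kg_k = 4.18       # specific heat of water
delta_t_k = 10.0            # assumed coolant temperature rise

flow_kg_per_s = heat_load_kw / (cp_kj_per_kg_k * delta_t_k)
flow_l_per_min = flow_kg_per_s * 60.0   # ~1 L per kg for water

print(f"Required coolant flow: {flow_kg_per_s:.2f} kg/s (~{flow_l_per_min:.0f} L/min)")
# -> roughly 2.4 kg/s, or about 144 L/min
```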

The company's current generation of fully validated and tested liquid cooling rack cluster systems is expected to deliver up to an 89% reduction in the electricity costs of the server cooling infrastructure, and up to a 40% reduction in electricity costs for the entire data center. And like most liquid-cooled solutions, they are significantly quieter to operate, with Supermicro claiming up to a 55% reduction in noise level compared to traditionally cooled servers.
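Those two figures are consistent if cooling accounts for a large share of total facility power, as it typically does in a traditionally air-cooled data center. The following back-of-envelope sketch shows the relationship; the 45 percent cooling share is an illustrative assumption, not a figure from Supermicro or the analyst reports.

```python
# How an ~89% cut in cooling electricity can become ~40% at the facility
# level (illustrative arithmetic; the cooling share is an assumption).

cooling_share_of_facility = 0.45   # assumed: cooling ~45% of facility power
cooling_reduction = 0.89           # claimed cut in cooling electricity

facility_reduction = cooling_share_of_facility * cooling_reduction
print(f"Facility-level reduction: {facility_reduction:.0%}")   # -> 40%
```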

The currently available generative AI SuperCluster offers what Supermicro claims is a doubling of compute density through the use of its custom liquid-cooling solution. Each scalable SuperCluster consists of:

• 256 NVIDIA H100/H200 GPUs in one scalable unit
• 20TB of HBM3 with H100 or 36TB of HBM3e with H200 in one scalable unit (see the arithmetic check after this list)
• 1:1 networking to each GPU to enable NVIDIA GPUDirect RDMA and Storage for training large language models with up to trillions of parameters
• Customizable AI data pipeline storage fabric with parallel file system options
• Support for the NVIDIA Quantum-2 InfiniBand and Spectrum™-X Ethernet platforms
• Certification for the NVIDIA AI Enterprise Platform, including NVIDIA NIM microservices
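The per-unit HBM totals in the list above follow directly from the per-GPU memory capacities of the two parts (80 GB of HBM3 on the H100 SXM, 141 GB of HBM3e on the H200); a quick arithmetic check:

```python
# Sanity-check the per-scalable-unit HBM totals quoted above.
# Per-GPU capacities: 80 GB HBM3 (H100 SXM), 141 GB HBM3e (H200).

gpus_per_unit = 256

h100_total_tb = gpus_per_unit * 80 / 1000    # ~20.5 TB of HBM3
h200_total_tb = gpus_per_unit * 141 / 1000   # ~36.1 TB of HBM3e

print(f"H100-based unit: ~{h100_total_tb:.1f} TB HBM3")
print(f"H200-based unit: ~{h200_total_tb:.1f} TB HBM3e")
```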

Supermicro isn't only supporting the top-of-the-line NVIDIA products at this point, having also announced its new X14 AI, rackmount, multi-node, and edge server families based on Intel® Xeon® 6 CPUs with E-cores, along with soon-to-be-announced liquid-cooled systems based on the P-core variants.

With technologies targeted at enterprise and edge server deployments, Supermicro is now pushing the efficiencies of its liquid-cooling support beyond the AI market into the less-talked-about, but just as essential, growth areas for deploying IT compute services.

 


About the Authors

David Chernicoff

David Chernicoff is an experienced technologist and editorial content creator with the ability to see the connections between technology and business, get the most from both, and explain the needs of business to IT and IT to business.

Matt Vincent

A B2B technology journalist and editor with more than two decades of experience, Matt Vincent is Editor in Chief of Data Center Frontier.
