GRC Pushes Density Limits with Support for 200 kW Immersion Racks

May 26, 2021
GRC (Green Revolution Cooling) has launched a new immersion cooling design for extreme-density computing that can support up to 200 kilowatts (kW) of server capacity.

At DCF we continue to closely track trends in IT rack density and its impact on data center cooling systems. That includes the high-performance computing (HPC) sector, which has been the primary venue for extreme density installations of 30 kW per rack and beyond.

GRC's new design targets truly extreme density. The ICEraQ Series 10 immersion cooling module includes a coolant distribution unit (CDU) that can support up to 200 kW of capacity using a warm water supply, and up to 368 kW with chilled water. The design also allows ICEraQ modules to be positioned end-to-end, fitting snugly against one another so they use less floor space.

GRC was one of the early players in immersion cooling, unveiling its first commercial offering in 2010. The Austin company is drawing on a decade of experience to refine its immersion deployments, and the ICEraQ Series 10 design boosts density both inside and outside the module.

“As the next generation of data center immersion cooling solutions, the Series 10 builds on our successful deployments and customer input to improve usability and functionality, with an easy-to-use rack design and a clean aesthetic,” said Peter Poulin, CEO of GRC. “It’s exciting to bring a new form into the market and we look forward to offering this immersion cooling solution to customers struggling with data center cooling challenges.”

Next month, the Series 10 will be deployed at the Texas Advanced Computing Center (TACC), which has been working closely with Austin-based GRC since its launch. That includes cooling for the GPU-intensive subsystem of the Frontera supercomputer, the ninth-fastest supercomputer in the world.

The GRC ICEraQ Series 10 immersion cooling module for data centers. (Image: GRC)

AI, Denser Clouds Boost Immersion Cooling

GRC has been at the forefront of the effort to increase the use of liquid cooling in the data center industry. Rather than relying on cold air, GRC submerges servers in a tank filled with liquid coolant. Servers are inserted vertically into slots in an enclosure filled with a dielectric fluid similar to mineral oil, which transfers heat almost as well as water but doesn't conduct an electric charge.

This approach offers potential economic benefits by allowing data centers to operate servers without a raised floor, computer room air conditioning (CRAC) units or chillers. Last year GRC raised $7 million to accelerate the development of its immersion cooling technology.

The vast majority of data centers continue to cool IT equipment using air, while liquid cooling has been used primarily in HPC. With the emergence of cloud computing and “big data,” more companies are facing data-crunching challenges that resemble those seen by the HPC sector, which could make liquid cooling relevant for a larger pool of data center operators. Microsoft recently began using immersion-cooled servers in production as it seeks to manage rising power densities and heat in its Azure Cloud data centers.

“With companies such as Microsoft adopting liquid immersion cooling for high-density computing applications, our vision of re-imagined data center cooling is further validated,” said Poulin.

Microsoft is using two-phase immersion cooling, in which servers are immersed in a coolant fluid that boils off as the chips generate heat, removing the heat as it changes from liquid to vapor. The vapor then condenses into liquid for reuse, all without a pump. GRC is the leading player in single-phase immersion, in which the coolant fluid removes the heat using a CDU and a water-cooling loop.
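The article doesn't give flow figures, but the single-phase approach can be sanity-checked with a back-of-the-envelope heat balance. This sketch (my own illustration, not GRC's numbers) estimates the coolant mass flow needed to carry 200 kW out of a rack, assuming a specific heat of about 1.9 kJ/(kg·K), typical for mineral-oil-like dielectric fluids, and a 10 K temperature rise across the tank:

```python
# Illustrative estimate only: coolant flow required for single-phase
# immersion cooling, from the heat balance Q = m_dot * cp * dT.
# Assumed values (not from the article): cp ~ 1.9 kJ/(kg*K) for a
# mineral-oil-like dielectric fluid, dT ~ 10 K across the rack.

def required_flow_kg_s(heat_kw: float, cp_kj_per_kg_k: float, delta_t_k: float) -> float:
    """Mass flow in kg/s needed to remove heat_kw at the given cp and dT."""
    return heat_kw / (cp_kj_per_kg_k * delta_t_k)

flow = required_flow_kg_s(200.0, 1.9, 10.0)
print(f"{flow:.1f} kg/s")  # roughly 10.5 kg/s of coolant
```

The CDU and water loop on the other side of the heat exchanger must move a comparable amount of heat, which is why warm-water versus chilled-water supply changes the module's rated capacity.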

The Series 10’s racks have 42U of space for servers and can accommodate up to four PDUs mounted at the rear of the rack. Networking and power connections are accessible by opening the top lid of the tank.

About the Author

Rich Miller

I write about the places where the Internet lives, telling the story of data centers and the people who build them. I founded Data Center Knowledge, the data center industry's leading news site. Now I'm exploring the future of cloud computing at Data Center Frontier.
