Best Practices for Deploying Liquid Cooling in Existing Data Centers

July 10, 2023
Data center operators will likely deploy liquid cooling solutions gradually. Learn how to mix and match solutions to fit your specific needs.

Last week we launched our article series on liquid cooling in data centers. This week we'll explore some common liquid cooling systems, as well as the logical progression for deploying liquid cooling solutions. 

The most common approaches fall into a few broad categories:

  • Liquid-to-air cooling circulates a coolant through or across the IT equipment, where it absorbs heat, then passes that warmed liquid through a heat exchanger that rejects the heat to the surrounding air. The liquid is then circulated back to the IT devices. This technology is most commonly represented by cold plates which sit directly on CPUs, GPUs, and memory, but also applies to technologies like rear door heat exchangers, which move heat from an entire rack to the external radiator system.
  • Liquid-to-liquid cooling operates in a similar fashion to liquid-to-air cooling but takes advantage of the much higher heat-carrying capacity of liquids compared to air (the sketch after this list puts numbers on that difference). In this arrangement there is often a closed-loop system that performs the initial cooling of the IT equipment; that system is in turn cooled by a secondary liquid system. The cooled liquid is pumped back to the IT devices while the secondary system uses a radiator or cooling tower to remove the heat from its liquid so it can be cycled back to cool the primary loop. This adds a layer of complexity compared to liquid-to-air cooling, but it can handle significantly higher power densities, making it a more efficient solution for those environments.
  • Direct immersion cooling involves hardening the electronic components so the entire server can be submerged in a dielectric liquid. The coolant, which can be as simple as mineral oil or as complex as specialized fluids designed for this application, is circulated around the servers and then through external heat exchangers. This method is extremely effective at cooling high-performance systems and can be more effective than other forms of liquid cooling. Immersion cooling has its own complexities, but it reduces the need for traditional cooling equipment such as air conditioning and chillers. And because the fluid is non-conductive, any potential electronics damage from a leak is minimized as well.
  • None of these solutions is exclusive; they can be mixed and matched to meet specific cooling needs.
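
To put numbers on the air-versus-liquid comparison above, here is a minimal Python sketch (our illustration, not from the article) that applies the basic sensible-heat relation Q = ṁ · c_p · ΔT with textbook fluid properties to estimate how much coolant volume each fluid must move to carry the same heat load:

```python
# Illustrative comparison: coolant flow needed to remove a given heat load.
# Based on the sensible-heat equation Q = m_dot * c_p * dT; fluid properties
# are textbook values near room temperature.

FLUIDS = {
    #          c_p (J/kg*K)  density (kg/m^3)
    "air":    (1005.0,       1.2),
    "water":  (4186.0,       998.0),
}

def volumetric_flow(q_watts: float, delta_t_k: float, fluid: str) -> float:
    """Volumetric flow (m^3/s) needed to carry q_watts of heat
    with a coolant temperature rise of delta_t_k kelvin."""
    c_p, rho = FLUIDS[fluid]
    m_dot = q_watts / (c_p * delta_t_k)  # mass flow, kg/s
    return m_dot / rho                   # volume flow, m^3/s

if __name__ == "__main__":
    Q, DT = 10_000.0, 10.0  # a 10 kW rack, 10 K coolant temperature rise
    for fluid in FLUIDS:
        v = volumetric_flow(Q, DT, fluid)
        print(f"{fluid:>5}: {v:.4f} m^3/s ({v * 60_000:,.0f} L/min)")
```

For a 10 kW load and a 10 K temperature rise, water needs roughly 14 L/min while air needs nearly 50,000 L/min, a difference of about 3,500x, which is why liquid loops can serve rack densities that airflow alone cannot.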

IS THERE A LOGICAL PROGRESSION FOR DEPLOYING LIQUID COOLING SOLUTIONS?

Most enterprises will look at adding liquid cooling to their data centers as the need for additional cooling capacity becomes clear. So where can we start, and what are the goals?

Most data centers find they need to support a specific task that requires high densities and increased cooling capability. Right now, that is most likely a cluster deployment for high-performance computing or to support artificial intelligence and machine learning. While you can apply many of the standard data center techniques (hot aisle containment, etc.), dropping a significant new source of heat into an existing facility can bring a new set of problems. That is what makes this the perfect opportunity to begin introducing liquid cooling to your data center. While in-row cooling, rear door heat exchangers, direct-to-chip cooling, and immersion cooling are all available options, starting with the simplest solution, such as a passive rear door heat exchanger, can minimize the impact on your data center while allowing optimal performance of your high-density computing solution.

This variety of options makes it possible to deploy liquid cooling gradually rather than as a rip-and-replace project. It also allows for interim choices. You might use a rear door heat exchanger for a single rack while you build out a more complete liquid-to-liquid cooling system that will be ready as the high-density deployment grows. Or you can save the complex deployment for your next-generation data center and progressively increase the cooling options available in your existing space, adding RDHx systems, enclosure cooling solutions, and liquid-cooled cabinets so that you have a range of solutions that can be matched to the demands of your IT workloads.

There is significant flexibility available in cooling choices that do not require reengineering your data halls or your entire data center. Mixing and matching those solutions to the specific demands of the IT hardware can increase efficiency in the data center while keeping more options open to meet those specific needs. An average server in an existing data center generates about 1.5 kW of heat; according to Nvidia, a latest-generation AI server using its GPUs can generate five or six times that much. Configuring your entire data center to support that level of cooling demand is unlikely to be efficient, so finding the right solution for that point problem is the better short-term answer.
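
As a hypothetical illustration of that matching exercise, the short Python sketch below picks the simplest cooling option that covers a rack's estimated heat load. The kW thresholds and per-server figures are illustrative assumptions for this example, not vendor specifications; substitute the limits quoted for your actual equipment:

```python
# Hypothetical "mix and match" helper: choose a cooling option per rack
# based on its heat load. The kW limits below are illustrative assumptions,
# not vendor specifications.

COOLING_OPTIONS = [  # (assumed max rack load in kW, solution)
    (15,  "traditional air cooling with containment"),
    (40,  "rear door heat exchanger (RDHx)"),
    (100, "direct-to-chip liquid cooling"),
    (200, "immersion cooling"),
]

def select_cooling(rack_kw: float) -> str:
    """Return the simplest option assumed to handle rack_kw of heat."""
    for limit_kw, solution in COOLING_OPTIONS:
        if rack_kw <= limit_kw:
            return solution
    return "split the load across racks or rework the facility design"

# A mixed estate: average servers (~1.5 kW each) next to an AI rack
# whose servers draw roughly six times as much, per the figures above.
standard_rack = 1.5 * 20       # twenty average servers: ~30 kW
ai_rack = 1.5 * 6 * 8          # eight AI servers: ~72 kW
for name, kw in (("standard", standard_rack), ("AI", ai_rack)):
    print(f"{name} rack at {kw:.0f} kW -> {select_cooling(kw)}")
```

Run as written, the standard rack lands on a rear door heat exchanger while the AI rack calls for direct-to-chip cooling, which is exactly the kind of per-rack matching described above.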

It is also important to note that not everything in the data center needs to be, or even should be, liquid cooled at this point in time. Devices like switches, routers, and network interface cards typically aren't liquid cooled, as heat generation is rarely an issue for them. Storage devices are just beginning to see purpose-built tools for keeping them at lower temperatures, since heat can reduce mean time to failure (MTTF); but they don't generate anywhere near the heat of a rack of AI GPUs, so dedicated storage cooling is a point solution that few will require. Other common data center equipment, such as power distribution units, backup batteries, and the various other electronics found in the data center, rarely requires additional cooling, though a liquid-cooled enclosure can cool any rack-mounted equipment you choose to install in it. These devices are rarely the point of failure when heat rises in the data center.

Download the full report, Liquid Cooling is in Your Future, featuring nVent, to learn more. In our next article, we'll share tips for evaluating your environment for the move to liquid cooling.

About the Author

David Chernicoff

David Chernicoff is an experienced technologist and editorial content creator who sees the connections between technology and business, works out how to get the most from both, and explains the needs of business to IT and of IT to business.
