The Evolution of Data Center Cooling: Liquid Cooling

Aug. 19, 2021
The challenge for many in the data center industry is how to leverage density and space more effectively while still being able to scale critical resources. A new Data Center Frontier special report, courtesy of TMGcore, looks at how liquid cooling is driving the evolution of the data center industry.

The type of infrastructure that data centers support today is already different from a few years ago. Greater density levels, new types of workloads like AI and cognitive systems, and a vast reliance on data are driving a redesign mentality among data center leaders. The challenge for many in this industry is how to leverage density and space more effectively while still being able to scale critical resources. Air cooling has led the industry in deployments for some time, and liquid cooling is now making significant strides to redefine density and next-generation computing capabilities. This launches our special report series on “The State of Data Center Cooling: A Key Point in Industry Evolution and Liquid Cooling.”

Download the full report.

Introduction

As we look out into the data center world, it’s pretty clear that a lot has changed. If 2020 taught us anything, it’s that our reliance on digital infrastructure is greater than ever before. There are new demands around cloud computing, big data, and infrastructure power and cooling efficiency. This change in the data center is driven by more users, more data, and a lot more reliance on the data center itself. With cloud technologies and the rapid growth in data leading the way within many technological categories, working with the right data center optimization technologies has become more critical than ever.

This is where liquid cooling comes into play. We have seen more applications for the technology, and real-world use-cases are impacting the way we deploy servers and various systems. This involves everything from liquid cooling GPUs to deploying a complete, all-in-one compute, storage, and network architecture with liquid cooling built in. Here’s the other significant point: liquid cooling isn’t going anywhere, and the industry is growing fast.

The fascinating part is that we’re seeing even more conversations around, and use-cases for, liquid cooling solutions. Consider this: Gartner estimates that ongoing power costs increase at least 10% per year due to cost per kilowatt-hour (kWh) increases and underlying demand, especially for high-power-density servers.
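
To put that compounding in perspective, here is a minimal sketch in Python. The 10% annual growth figure comes from the Gartner estimate above; the starting bill and five-year horizon are illustrative assumptions.

    # Compound effect of ~10%/yr power-cost growth (Gartner estimate above).
    annual_cost = 1_000_000  # hypothetical $/yr power bill (assumption)
    for year in range(1, 6):
        annual_cost *= 1.10
        print(f"Year {year}: ${annual_cost:,.0f}")
    # After five years the bill is ~1.61x the original (1.10**5 ≈ 1.61).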

Overall, ensuring optimal cooling for your data center needs to be on your radar. According to a recent study, servers and cooling systems account, on average, for the most significant shares of direct electricity use in data centers, followed by storage drives and network devices (Figure 1).

Some of the world’s largest data centers can each contain many tens of thousands of IT devices and require more than 100 megawatts (MW) of power capacity—enough to power around 80,000 U.S. households (U.S. DOE 2020).
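
As a back-of-envelope check of that household comparison (a sketch assuming an average U.S. household draw of about 1.25 kW, roughly 10,950 kWh per year):

    # 100 MW of facility capacity expressed in average U.S. households.
    facility_kw = 100 * 1000            # 100 MW in kW
    household_kw = 1.25                 # assumed average household draw
    print(facility_kw / household_kw)   # 80000.0 households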

This means that the way we optimize and cool our data centers must change and evolve as well.

Liquid Can Cool Entire Digital Infrastructure Solutions

First of all, liquid cooling is a very real technology that’s been around for quite some time and is being leveraged by numerous data centers for various use-cases.

Why is this happening? Rising investments in high-density technology, high-performance computing, and even smart city initiatives are driving technology leaders to develop the most reliable and efficient methods to cool their data centers. Furthermore, the increasing volumes of data being generated create demand for data centers, and these facilities consume a considerable amount of energy. In 2016, data centers consumed 416.2 terawatt-hours of energy, accounting for 3% of global energy consumption and nearly 40% more than the entire United Kingdom. This consumption is expected to double every four years.
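
Doubling every four years implies an annual growth rate of nearly 19%, as this quick check shows (the 416.2 TWh baseline is the figure cited above; the rest is arithmetic, not a forecast):

    # Implied annual growth when consumption doubles every four years.
    growth = 2 ** (1 / 4) - 1                # ≈ 0.189, i.e. ~18.9% per year
    twh_2016 = 416.2                         # baseline from the article
    twh_2020 = twh_2016 * (1 + growth) ** 4  # one doubling period later
    print(f"{growth:.1%}, {twh_2020:.0f} TWh")  # 18.9%, 832 TWh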

A recent report from Technavio indicates that adoption of liquid-based cooling is high because it is considered more efficient than air-based cooling. Water-based cooling, a sub-segment of liquid-based cooling, is the most widely accepted cooling system. The global data center cooling market by the liquid-based cooling technique is expected to increase through 2020, posting a CAGR of almost 16% during the forecast period.

Furthermore, according to Stratistics MRC, the Global Data Center Liquid Cooling market was estimated at $0.64 billion in 2015 and is expected to reach $3.56 billion by 2022, growing at a CAGR of 27.7% from 2015 to 2022. The rising need for eco-friendly solutions and increasing server rack density are the key drivers fueling market growth.
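
Those market figures are internally consistent; the standard CAGR formula, (end/start)^(1/years) − 1, recovers the reported rate:

    # Sanity check of the Stratistics MRC figures (2015 -> 2022 is 7 years).
    start, end, years = 0.64, 3.56, 7
    cagr = (end / start) ** (1 / years) - 1
    print(f"{cagr:.1%}")  # ≈ 27.8%, in line with the reported 27.7%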

What does this mean for you? Where should you be looking when it comes to liquid cooling solutions?

Let’s dive into the state of data center cooling and better understand how liquid cooling changes data center design. To get started, you’ll see how liquid cooling solutions today are much different than ever before, making adoption far more feasible.

Liquid Cooling: Key Updates, Demands, and Trends

An interesting misconception in the digital infrastructure industry is that liquid cooling concepts are relatively new. Many believe that working with particular kinds of liquids designed for servers is a development of just the past decade. However, this technology has been around for quite some time, and it’s battle-tested.

Between 1970 and 1995, liquid cooling was used within mainframe systems. Then, in the 90s, we began to see more gaming and custom-built PCs adopt liquid cooling for high-end performance requirements. Between 2005 and 2010, liquid cooling found its way into the data center with chilled doors. From there, looking into 2010 and beyond, liquid cooling was used in high-performance computing (HPC) environments and designs featuring direct contact and total immersion liquid cooling solutions.

Understanding that liquid cooling is quickly becoming a new means to support advanced systems, it’s essential to look at key market trends impacting overall cooling solutions. The focus on data center technologies has been front and center over the past couple of years. Hyperscale providers, alongside colocation partners, know that the design of their data centers is evolving. They’re being tasked with supporting more systems. There is more diversity in the types of workloads deployed, and digital infrastructure is now working with computing systems much different from those of just a few years ago.

A Changing Landscape for Data Center Cooling and Power

Within the data center, specific shifts in modernization efforts and new requirements to support evolving digital solutions shape how we build tomorrow’s digital infrastructure. Consider these key trends:

Cooling

New solutions around convergence, edge computing, supercomputers, and even high-performance computing are all placing new cooling capacity burdens on data center architectures. As the latest MarketsandMarkets report indicates, the airflow management market was valued at USD 419.8 million in 2016 and is expected to reach USD 807.3 million by 2023, at a CAGR of 9.24% between 2017 and 2023. The market is driven by the growing demand for reduced OPEX, increased cooling capacity, improved IT equipment reliability, and greener data centers, along with the increasing number of data centers worldwide and improving cooling efficiency and thermal management within them.

Outside of airflow, new compute requirements are also shaping the liquid cooling market. As mentioned earlier, according to Stratistics MRC, the Global Data Center Liquid Cooling market was estimated at $0.64 billion in 2015 and is expected to reach $3.56 billion by 2022, growing at a CAGR of 27.7% from 2015 to 2022. The rising need for eco-friendly solutions and increasing server rack density are the key drivers fueling market growth.

Power

Leaders in the technology and business space are being asked to provide more technology solutions (cloud, AI, IoT, HPC, etc.) while still retaining optimal efficiency levels. The challenge with that request is that, globally, data center power consumption continues to grow. A recent US Department of Energy report indicates that US data centers are projected to consume increasing amounts of energy based on current trend estimates, continuing a trend that has been rising steadily since 2000.

There is a growing trend of organizations looking to leverage data center colocation and even the cloud to support their growing demands. In doing so, energy efficiency and data center management are essential planning and design considerations. First, you want your solutions to be cost-effective and able to support growth. Second, you’re also trying to reduce management overhead while still improving infrastructure efficiency.

Numerous trends indicate that more organizations are going to be placing their environments into some form of data center, whether enterprise, colocation, or even cloud. From there, energy efficiency and data center management will continue to be critical considerations for a few reasons. Not only are data center administrators working hard to cut costs, but a significant business objective is also to minimize management challenges and improve infrastructure efficiency. To support these new initiatives, liquid cooling has become a technology to explore and implement alongside infrastructure efficiency efforts.

Cloud

In the 2021 AFCOM State of the Data Center Report, we found that more than half of respondents (58%) reported noticing a trend of organizations moving away from the public cloud and looking to colocation or private data centers. It’s important to note that the cloud isn’t going anywhere. However, there are still real concerns regarding how enterprises want to use cloud computing. So much so that an entirely new position has been created to deal with cloud costs and ‘sticker shock.’

According to a recent blog, the tremendous savings expected from the switch from up-front CapEx investments in information technology to a subscription model soon get muddied as rising monthly bills come in for services that no one can say where or when they were used. And so a new technology and operational discipline was born: FinOps. In this profession, practitioners leverage tools and new methodologies to monitor, measure, and mitigate cloud costs while tracking the value the cloud delivers. FinOps practitioners’ perspectives (yes, they are out there) provide a good understanding of what lies ahead in the cloud.

“The dirty little secret of cloud spend is that the bill never really goes down,” says J.R. Storment, executive director of the FinOps Foundation.

This year, according to the survey, the number of respondents seeing repatriation of workloads from the cloud back to on-premises data centers or colocation facilities was 58%. This indicates that most are still working to figure out what should live in colocation and what should reside in the cloud. The good news is that these exercises are great for everyone. Workloads that belong in the cloud will be more adequately provisioned, while dedicated resources that are expensive in the cloud are moved on-premises. This translates to more organizations leveraging their data centers or data center partners to host applications, data sets, and even new kinds of workloads that may have at one point resided in the cloud.

Supporting New Types of Workloads

Although we’ll cover new use-cases in an upcoming article, it’s important to note that data centers are tasked with supporting new and emerging use-cases. Outside of your traditional converged, hyperconverged solutions, cloud, and even virtualization systems, colocation and hyperscale leaders now deliver support for some advanced solutions. For example, high-performance computing (HPC) is being leveraged for research and data analysis. These systems are being deployed within traditional data center walls.

However, cooling for HPC systems requires a different kind of approach. For example, in 2018, as part of a partnership with Sandia National Laboratories, data center leaders installed a fixed cold plate, liquid-cooled rack solution for high-performance computing (HPC) clustering at the National Renewable Energy Laboratory’s (NREL’s) Energy Systems Integration Facility (ESIF).

Technology leaders are looking for platforms that allow organizations to rethink how they deploy these critical enterprise resources to provide the maximum return on their investment and the highest end-user experience levels.

This new fixed cold plate, warm-water cooling technology, and manifold design provides easy access to service nodes and eliminates the need for auxiliary server fans. Sandia National Laboratories chose NREL’s HPC Data Center for the initial installation and evaluation. The data center is configured for liquid cooling and has the required instrumentation to measure flow and temperature differences to facilitate testing. To support the initiative, the deployment focused on three critical aspects of data center sustainability:

  1. Efficiently cool the information technology equipment using direct, component-level liquid cooling with a power usage effectiveness (PUE) design target of 1.06 or better (see the worked checks after this list);
  2. Capture and reuse the waste heat produced; and
  3. Minimize the water used as part of the cooling process. There is no compressor-based cooling system for NREL’s HPC data center. Cooling liquid is supplied indirectly from cooling towers.
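
Two quick worked checks put those goals in context. First, PUE is total facility energy divided by IT equipment energy, so a 1.06 target leaves only 6% overhead for cooling and power distribution. Second, the flow and temperature instrumentation mentioned above is what lets operators quantify captured waste heat, since the heat absorbed by a liquid loop is Q = ṁ · c_p · ΔT. The loads and loop values below are illustrative assumptions, not NREL data.

    # 1) PUE = total facility energy / IT equipment energy.
    it_kw = 1000                          # hypothetical IT load
    overhead_kw = 60                      # cooling, power distribution, etc.
    print((it_kw + overhead_kw) / it_kw)  # 1.06

    # 2) Heat removed by a liquid loop: Q = m_dot * c_p * delta_T.
    m_dot = 1.0      # kg/s of water (~1 L/s), assumed flow
    c_p = 4186       # J/(kg*K), specific heat of water
    delta_t = 10.0   # K rise from supply to return, assumed
    print(m_dot * c_p * delta_t / 1000)   # ≈ 41.9 kW captured for reuse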

Innovation around liquid-cooled data centers and specific use-cases doesn’t stop at HPC. Emerging requirements around machine learning, financial services, healthcare, CAD modeling and rendering, and even gaming are driving exploration of new liquid-cooled solutions that maintain sustainability, reliability, and the greatest levels of density.

Download the full report, “The State of Data Center Cooling: A Key Point in Industry Evolution and Liquid Cooling” courtesy of TMGcore to learn how new data center and business requirements are shaping digital infrastructure. In our next article, we’ll take a closer look at the evolution of the adoption of liquid cooling.

About the Author

Bill Kleyman

Bill Kleyman is a veteran, enthusiastic technologist with experience in data center design, management and deployment. Bill is currently a freelance analyst, speaker, and author for some of our industry's leading publications.
