Progress on Aisle Containment for Data Center Cooling

Jan. 7, 2016
In today’s discussion, our panel of three data center executives – Jakob Carnemark of Aligned Data Centers, Robert McClary of FORTRUST, and James Leach of RagingWire Data Centers – will examine progress in data center cooling strategies using aisle containment.

Today we continue our Data Center Executive Roundtable, a quarterly feature showcasing the insights of thought leaders on the state of the data center industry and where it is headed. The conversation is moderated by Rich Miller, the founder and editor of Data Center Frontier.

Aisle Containment As A Data Center Cooling Strategy

Data Center Frontier: Airflow containment is an established strategy for optimizing data center cooling. How would you assess the adoption of containment? Does it differ between the hyperscale, enterprise and multi-tenant markets?

Robert McClary, FORTRUST

Robert McClary: Here’s the key: any time hot and cold air mix in a data center anywhere other than across the heat sinks of the IT equipment, it’s inefficient, because you’re doing twice the work. Air containment and airflow dynamics are absolutely strategies for optimizing data center cooling. You want the cooling going to the hardware and across the hardware’s internal cooling elements in the most effective and efficient way. Cooling that goes anywhere else just breeds inefficiency.
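McClary’s point about mixing can be made concrete with a simple energy balance. The sketch below is illustrative only — the supply temperature, server delta-T, load, and bypass fraction are assumed numbers, not figures from the interview. Cold supply air that bypasses the IT heat sinks dilutes the return air stream, shrinking the usable temperature rise and inflating the airflow the cooling plant must move for the same IT load.

```python
# Rough energy-balance sketch of bypass-air inefficiency.
# All numbers are assumptions for illustration, not from the article.

CP_AIR = 1.005   # kJ/(kg*K), specific heat of air
RHO_AIR = 1.2    # kg/m^3, approximate air density at room conditions

def mixed_return_c(supply_c, server_delta_t, bypass_fraction):
    """Return-air temperature when a fraction of the cold supply
    bypasses the IT heat sinks and mixes straight into the return."""
    hot_c = supply_c + server_delta_t
    return bypass_fraction * supply_c + (1 - bypass_fraction) * hot_c

def required_airflow_m3s(it_load_kw, supply_c, return_c):
    """Volume flow needed to absorb it_load_kw across the delta-T."""
    delta_t = return_c - supply_c
    mass_flow = it_load_kw / (CP_AIR * delta_t)  # kg/s
    return mass_flow / RHO_AIR                   # m^3/s

supply = 20.0      # deg C supply air (assumed)
server_dt = 12.0   # deg C rise across the servers (assumed)
load = 100.0       # kW of IT load (assumed)

for bypass in (0.0, 0.3):
    ret = mixed_return_c(supply, server_dt, bypass)
    flow = required_airflow_m3s(load, supply, ret)
    print(f"bypass {bypass:.0%}: return {ret:.1f} C, airflow {flow:.1f} m^3/s")
```

With these assumed numbers, 30 percent bypass drops the return temperature from 32 °C to 28.4 °C and pushes the required airflow up by more than 40 percent — the “twice the work” McClary describes.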

We should start designing and building data centers that are capital, energy and resource efficient and are designed for long-term operations. We do that by thinking about the IT hardware that the customer uses, all the way up through to the utility source. Everything must be optimized and able to adapt to what is occurring at the end-user hardware. Too often we design data centers without the end user or the IT equipment stack in mind, and we have to start doing that.

James Leach: We are seeing a shift in the data center cooling conversation from economization to containment.

Over the last five years, the cooling conversation was between data center providers and data center technology suppliers, focused on improving data center performance by designing and implementing sophisticated economization systems that take advantage of local weather conditions. The most common example is using free air cooling when the outside temperature and humidity allow.

James Leach, RagingWire Data Centers

Today, the cooling conversation is between data center providers and data center buyers to design and implement targeted containment systems that meet the unique requirements of the customer. This discussion is typically based on computational fluid dynamics (CFD) analysis to understand airflows within the data center facility and the customer’s computing environment. Currently, the most common approaches are cold-aisle containment for newer data centers with high ceilings and good airflow, and hot-aisle containment (chimneys) for older data centers with lower ceilings that need to force the warm air out of the building.

The difference between hyperscale, enterprise, and multi-tenant containment adoption is largely driven by the design of the data center and the nature of the applications.

Hyperscale data centers tend to be optimized for well-defined, consistent systems configurations where the targeted cooling systems are built-in. In these environments, we are seeing the emergence of liquid cooling, in-chassis cooling, in-rack cooling, and rear-door heat exchangers.

Enterprise data centers typically must support legacy systems such as mini-computers and mainframes that have specialized cooling requirements. These facilities are often older (greater than 10 years) and may not have been maintained or upgraded over time, so they are challenged to support higher-density deployments. We tend to see hot-aisle containment systems in these environments as a “bolt-on” to legacy systems.

Multi-tenant colocation data centers are typically newer (less than 10 years old) and have been upgraded over time to support higher density deployments. Colo providers should work closely with their customers to support containment systems tailored to their unique environments.

Jakob Carnemark, Aligned Data Centers

Jakob Carnemark: For many data center operators the cost to renovate an existing site to take advantage of new containment technology is prohibitive. When it comes to reducing energy consumption, each watt of waste translates into lower service margins.

Hyper-scale operators and some progressive enterprise users are acutely aware of this and have spent the capital to ensure their data centers are optimized for delivering the greatest output with the least waste.

Multi-tenant operators tend to suffer the most as they have less control over the equipment and applications running inside their data centers. For them, containment can prove a challenge.

We recognized this hurdle early on and engineered our data centers with a patented and proven heat removal technology that consumes up to 85 percent less water with a nominal power draw. As a result, we are able to guarantee our clients a 1.15 PUE. For some of our large clients, the cost savings will be substantial.
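For context on what a 1.15 PUE guarantee implies: PUE (power usage effectiveness) is total facility power divided by IT equipment power, so a 1.15 PUE means 0.15 watts of overhead (cooling, power distribution, lighting) per watt of IT load. The sketch below uses an assumed 1 MW IT deployment and an assumed 1.5 PUE legacy baseline — neither figure comes from the article — to show how the savings scale.

```python
# PUE = total facility power / IT equipment power.
# Illustrative comparison only; the load and baseline PUE are assumptions.

HOURS_PER_YEAR = 8760

def annual_overhead_kwh(it_load_kw, pue):
    """Energy spent on everything except the IT load over one year."""
    return it_load_kw * (pue - 1.0) * HOURS_PER_YEAR

it_load_kw = 1000.0   # hypothetical 1 MW IT deployment
legacy_pue = 1.5      # assumed legacy-facility baseline
aligned_pue = 1.15    # the guaranteed figure quoted above

saved = (annual_overhead_kwh(it_load_kw, legacy_pue)
         - annual_overhead_kwh(it_load_kw, aligned_pue))
print(f"Overhead energy avoided: {saved:,.0f} kWh/year")
```

At these assumed numbers the difference is roughly three million kilowatt-hours of overhead energy per year, which is why a guaranteed PUE matters to large clients.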

About the Author

Rich Miller

I write about the places where the Internet lives, telling the story of data centers and the people who build them. I founded Data Center Knowledge, the data center industry's leading news site. Now I'm exploring the future of cloud computing at Data Center Frontier.
