Cooling for High[er] Density Spaces Should be a Design Feature, Not an Afterthought

June 2, 2021
Higher density demands a specialized cooling strategy, yet many data center operators cool the entire room rather than the equipment inside.

In this edition of Voices of the Industry, Doug Ausdemore, Senior Product Development Manager at Data Aire, explains how higher density data centers benefit from a specialized cooling strategy.

Few data centers live in a world of ‘high’ density, a threshold that is a moving target, but many are moving toward high[er] density environments. Owners of higher density data centers often aren’t aware of how many variables factor into cooling their equipment. The result is that they spend too much on shotgun solutions that waste capacity when they would be better served by a rifle-shot approach: understanding the heat dispersion characteristics of each piece of equipment and optimizing floor plans and the placement of cooling solutions for maximum efficiency.

So, how do you invest for today and plan for tomorrow? By engaging early in the data center design process with a cooling provider that has a broad line of cooling solutions, owners can maximize server space, minimize low-pressure areas, reduce costs, save on floor space and boost overall efficiency. And by choosing a provider that can scale with their data center, they can ensure that their needs will be met long into the future.

Density is Growing: Low to Medium to High[er] and Highest

Data centers are growing increasingly dense, creating unprecedented cooling challenges, and that trend will undoubtedly continue. The Uptime Institute’s 2020 Data Center Survey found that average rack density has more than tripled, from 2.4 kW to 8.4 kW, over the last nine years. While still within the safe zone of most conventional cooling equipment, the trajectory is clearly toward equipment running hotter, accelerated by the growing use of GPUs and multi-core processors. Some higher-density racks now draw as much as 16 kW, and the highest-performance computing typically demands 40-50 kW per rack.

High[er] Density Requires Dedicated Cooling Strategies

For the sake of discussion, let’s focus on the data centers that are, or may soon be, in the 8.4-16 kW range. This higher density demands a specialized cooling strategy, yet many data center operators waste money by provisioning equipment to cool the entire room rather than the equipment inside. In fact, “overprovisioning of power/cooling is probably [a] more common issue than underprovisioning due to rising rack densities,” the Uptime survey asserted.

No two data centers are alike and there is no one-size-fits-all cooling solution. Thermal controls should be customized to the server configuration and installed in concert with the rest of the facility, or at least six months before the go-live date.

Equipment in the higher density range of 8-16 kW can present unique challenges to precision cooling configurations. The performance of the servers themselves can vary from rack to rack, within a rack, and even with the time of day or year, causing hotspots to emerge.
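
To see why this matters for sizing, consider the standard sensible-heat relationship often used for rough airflow estimates: heat removed (BTU/hr) ≈ 1.085 × airflow (CFM) × temperature rise (°F) for standard air. The short sketch below is purely illustrative and not specific to any vendor’s equipment; it shows how quickly the airflow requirement climbs across this density range.

```python
# Rough per-rack airflow estimate from rack power and allowed air temperature rise.
# Assumes standard air density (the 1.085 factor) and a purely sensible heat load.
# Illustrative sketch only -- not a substitute for a proper thermal/CFD study.

WATTS_TO_BTU_HR = 3.412      # 1 W = 3.412 BTU/hr
SENSIBLE_AIR_FACTOR = 1.085  # BTU/hr per CFM per degree F, standard air

def required_cfm(rack_kw: float, delta_t_f: float = 20.0) -> float:
    """Airflow (CFM) needed to remove rack_kw of heat at a delta_t_f rise."""
    btu_per_hr = rack_kw * 1000 * WATTS_TO_BTU_HR
    return btu_per_hr / (SENSIBLE_AIR_FACTOR * delta_t_f)

for kw in (2.4, 8.4, 16.0):
    print(f"{kw:5.1f} kW rack -> ~{required_cfm(kw):,.0f} CFM at a 20 F rise")
```

At a 20°F rise, a 16 kW rack needs roughly 2,500 CFM, nearly seven times the airflow of the 2.4 kW average of a decade ago, which is why room-level cooling alone struggles at these densities.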

Higher-density equipment creates variable hot and cool spots that need to be managed differently. A rack that is outfitted with multiple graphic processing units for machine learning tasks generates considerably more heat than one that processes database transactions. Excessive cabling can restrict the flow of exhaust air. Unsealed floor openings can cause leakages that prevent conditioned air from reaching the top of the rack. Unused vertical space can cause hot exhaust to feed back into the equipment’s intake ducts, causing heat to build up and threatening equipment integrity.

For all these reasons, higher-density equipment is not well-served by a standard computer room air conditioning (CRAC) unit. By contrast, variable-speed direct expansion CRAC equipment scales up and down gracefully to meet demand. This not only saves money but minimizes power surges that can cause downtime. Continuous monitoring should be put in place using sensors to detect heat buildup in one spot that may threaten nearby equipment. Alarms should be set to flag critical events without triggering unnecessary firefighting, as sketched below. Cooling should also be integrated into the building-wide environmental monitoring systems.
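
One common way to flag critical events without constant firefighting is hysteresis: raise an alarm when a sensor crosses a high threshold, and clear it only after the reading falls back below a lower one, so a sensor hovering near the limit does not generate a stream of alerts. The sketch below is a generic illustration under assumed thresholds; the sensor names and temperature limits are hypothetical, not drawn from any particular monitoring product.

```python
# Minimal hysteresis-based alarm sketch for rack inlet temperature sensors.
# Generic illustration: thresholds and sensor names are hypothetical.
# ASHRAE's recommended inlet range tops out near 80.6 F, which motivates
# the example limits chosen here.

ALARM_ON_F = 85.0   # raise an alarm above this inlet temperature
ALARM_OFF_F = 78.0  # clear it only once the sensor falls back below this

class InletAlarm:
    def __init__(self) -> None:
        self.active: dict[str, bool] = {}

    def update(self, sensor: str, temp_f: float) -> None:
        was_active = self.active.get(sensor, False)
        if not was_active and temp_f >= ALARM_ON_F:
            self.active[sensor] = True
            print(f"ALARM  {sensor}: inlet {temp_f:.1f} F")
        elif was_active and temp_f <= ALARM_OFF_F:
            self.active[sensor] = False
            print(f"CLEAR  {sensor}: inlet {temp_f:.1f} F")
        # Readings between the two thresholds change nothing, so a sensor
        # hovering near the limit does not flap on and off.

alarm = InletAlarm()
for reading in (82.0, 86.1, 84.0, 86.5, 77.2):  # simulated readings
    alarm.update("rack-07/inlet-top", reading)
```

The gap between the two thresholds is the design choice that separates a useful alert from alarm fatigue; in practice it would be tuned per zone and fed into the building-wide monitoring system mentioned above.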

A better approach to developing or upgrading a data center is to build cooling plans into its design from the beginning, with a holistic approach that minimizes hot spots. Alternating “hot” and “cold” aisles should be created, with vented floor tiles in the cold aisles and servers arranged to exhaust all hot air into an unvented hot aisle. The choice of front-discharge, upflow or downflow ventilation can prevent heat from being inadvertently circulated back into the rack. Power distribution also needs to be planned carefully, and backup power provisioned to avoid loss of cooling.

Thinking through cooling needs early in the design stage for higher density data centers avoids costly and disruptive retrofits down the road. The trajectory of power density is clear, so cooling design should consider not only today’s needs but those five and 10 years from now; a rough extrapolation follows below. Modular, variable-capacity systems can scale and grow as needed.
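
To make “five and 10 years from now” concrete, one can extrapolate the Uptime Institute figures cited above (2.4 kW to 8.4 kW over nine years, roughly 15% compound annual growth). This back-of-the-envelope sketch is purely illustrative and assumes historical growth simply continues, which is not guaranteed.

```python
# Back-of-the-envelope rack density projection from the Uptime Institute
# figures cited above (2.4 kW to 8.4 kW average over nine years).
# Illustrative only: assumes the historical compound growth rate continues.

start_kw, end_kw, years = 2.4, 8.4, 9
cagr = (end_kw / start_kw) ** (1 / years) - 1  # roughly 0.149, i.e. ~15%/year

for horizon in (5, 10):
    projected_kw = end_kw * (1 + cagr) ** horizon
    print(f"In {horizon:2d} years: ~{projected_kw:.1f} kW average per rack")
```

Under that assumption, today’s 8.4 kW average lands near 17 kW in five years and past 30 kW in ten, squarely in the territory that rewards the design-stage planning described here.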

The earlier data center owners involve their cooling providers in their design decisions, the more they’ll save from engineered-to-order solutions and the lower their risk of unpleasant surprises down the road.

Doug Ausdemore is Senior Product Development Manager at Data Aire. Contact the company to learn more about designing a cooling system for your higher density environment.

About the Author

Voices of the Industry

Our Voices of the Industry feature showcases guest articles on thought leadership from sponsors of Data Center Frontier. For more information, see our Voices of the Industry description and guidelines.
