The Opportunity in Data Center Cooling

Sept. 17, 2015
The Data Center Executive Roundtable panel discusses the innovation landscape in data center cooling, a key area where facility operators can improve efficiency and manage costs.

Today we conclude our inaugural Data Center Executive Roundtable, a quarterly feature showcasing the insights of thought leaders on the state of the data center industry, and where it is headed. In today’s discussion, our panel of four experienced data center executives – Chris Crosby of Compass Datacenters, Rob McClary of FORTRUST, Douglas Adams of RagingWire Data Centers, and Harold Simmons of United Metal Products – will look at the innovation landscape in data center cooling, one of the primary areas where facility operators can improve efficiency and manage costs.

The Innovation Landscape in Data Center Cooling

Data Center Frontier: Cooling has been a key focus of energy efficiency efforts in the data center. Is there still opportunity for innovation in cooling? If so, what might that mean for how data centers are designed and where they are located?

Harold Simmons: There are definitely still opportunities. Even though ASHRAE has raised the recommended server inlet temperatures, many organizations have been slow to adopt the newly recommended design criteria. For many, there is still very much a legacy mindset, at a psychological level, when it comes to data centers.

Harold Simmons, United Metal Products

That being said, each data center owner has unique design criteria, functional requirements, and SLAs (whether internal or external) that they are required to meet. As a result, it is imperative that manufacturers continue to innovate cooling solutions that meet specific design requirements.

In addition, one of the most important areas when it comes to cooling is water usage. Oftentimes, when data center owners and operators mention water use, they focus at a localized level, because in most parts of the country water usage is managed at the municipal level. That being said, when water usage is examined, it is imperative that the total hydro footprint of an operation be taken into account – not just the water used on site to provide cooling, but also the water used at the power plant to produce electricity. This means that cooling systems that use no localized water but high amounts of electricity can actually have a higher hydro footprint.
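To put the "total hydro footprint" idea in concrete terms, here is a minimal sketch of the arithmetic: on-site cooling water plus the water consumed generating the electricity the facility draws. All of the figures (IT load, PUE, site water use, grid water intensity) are illustrative assumptions rather than measured values.

```python
# Illustrative estimate of a data center's total "hydro footprint":
# on-site cooling water plus the water consumed at the power plant
# to generate the electricity the facility draws.
# All input values below are assumptions made for the sake of example.

IT_LOAD_KW = 1_000            # average IT load in kW (assumed)
PUE = 1.4                     # power usage effectiveness (assumed)
SITE_WUE = 1.8                # on-site water use, liters per kWh of IT energy (assumed)
GRID_WATER_INTENSITY = 1.9    # liters of water per kWh generated (assumed regional average)

HOURS_PER_YEAR = 8_760

it_energy_kwh = IT_LOAD_KW * HOURS_PER_YEAR
facility_energy_kwh = it_energy_kwh * PUE

site_water_l = it_energy_kwh * SITE_WUE
source_water_l = facility_energy_kwh * GRID_WATER_INTENSITY
total_water_l = site_water_l + source_water_l

print(f"On-site cooling water:          {site_water_l / 1e6:.1f} million liters/year")
print(f"Water embedded in electricity:  {source_water_l / 1e6:.1f} million liters/year")
print(f"Total hydro footprint:          {total_water_l / 1e6:.1f} million liters/year")
```

Plugging in a second scenario with SITE_WUE set to zero but a higher PUE shows how a water-free cooling design can still carry the larger total footprint, which is the trade-off Simmons describes.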

Chris Crosby: I think cooling is always going to be a consideration for data centers. Whether racks get denser or average per-rack usage never gets above 6-8 kW, end users are always going to be looking for a better way – both technically and economically – to provide an optimal data floor environment.

One important point to note in this discussion is that when people hear or see the term “innovation” they automatically think of some physical technological enhancement. This overlooks the improvements that come from redefining standards. For example, more and more of the end users I speak with are operating their facilities at the higher recommended and allowable thresholds defined by the ASHRAE TC9.9 committee. (Quick aside: TC9.9 is a great example of how the industry can come together to improve things dramatically without huge cost or regulation. Unlike its bigger brother, 90.1, TC9.9 cannot produce a standard, only recommendations to the industry.)

Chris Crosby, CEO, Compass Datacenters

Not too many years ago, the idea of running your environment at any temperature above “meat locker” would have been considered heretical, but the combination of empirical evidence and corporate budget pressures continues to lead large numbers of data center operators to embrace a new “religious” perspective. The bottom line here is that I don’t think we can overlook the impact of non-technological innovation in changing the nature of data center cooling.

Of course, we should also expect to see more innovation on the physical side of the cooling issue. Right now, for example, the impact of drought in a number of geographic areas is forcing end users to take a hard look at their water-based systems. This is not a trivial consideration when you consider that the volume of water required to operate these facilities can run into the millions of gallons. This turn of events has people looking more closely at direct expansion functionality, or, at the least, some type of hybrid offering that enables air-side capability to augment water-based installations.

I think this is a positive development, as water has been treated as a cheap, almost free resource within the data center space for too long. Combined with the drumbeat of regulatory restrictions that has started to rear its head, I do believe you will see increased emphasis on both standards and technological innovation in the coming years.

Douglas Adams: From a technology perspective, the first generation of data centers was largely driven by telecommunications. The second generation was power driven. The next generation of data centers will tackle the cooling challenge.

Douglas Adams, RagingWire Data Centers

The key to this issue is to avoid putting the burden on the end user. In today’s sophisticated IT market, data center providers need to add flexibility and reduce cost, not require a client to adopt the provider’s specific requirements. For example, some providers require clients to use their specific racks, which may not handle non-conforming IT equipment (AS/400s, mainframes, side-vent routers, etc.). This approach spells disaster for end users who are increasingly dependent upon prescribed pod architectures that cannot easily conform to the data center provider’s requirements.

We don’t see a “silver bullet” technology as the answer in cooling. Rather, it will be an integrated and automated system that extends from the data center facility to the individual rack and server. At the data center level you will see industrial components that operate efficiently at all levels of utilization – zero waste. At the rack and server level you will see increasingly targeted cooling capabilities that can be deployed without server downtime. The bridge will be software that monitors, manages, and adjusts the system based on external environmental factors (temperature, humidity, etc.) and computing usage (processor, storage, network devices, etc.).
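The software bridge Adams describes is, at its core, a feedback loop between facility telemetry and cooling output. The sketch below shows one minimal form of it; the sensor and actuator functions are hypothetical placeholders rather than a real DCIM or building-management API, and the setpoint and gain are assumed values.

```python
# Minimal sketch of a facility-to-rack cooling control loop: read environmental
# and compute telemetry, then modulate cooling output to track a target inlet
# temperature. The sensor/actuator functions are hypothetical stubs.

TARGET_INLET_C = 25.0   # setpoint within the ASHRAE recommended range (assumed)
GAIN = 0.05             # proportional gain, tuned per facility (assumed)

def read_inlet_temp_c() -> float:
    """Placeholder for a rack inlet temperature sensor reading."""
    return 26.2  # stubbed value for illustration

def read_it_load_fraction() -> float:
    """Placeholder for current IT load as a fraction of design capacity."""
    return 0.55  # stubbed value for illustration

def set_cooling_output(fraction: float) -> None:
    """Placeholder for commanding fans, chillers, or economizers."""
    print(f"Cooling output set to {fraction:.0%}")

def control_step(current_output: float) -> float:
    """One pass of a simple proportional controller."""
    error = read_inlet_temp_c() - TARGET_INLET_C
    # Never drop below what the present IT load requires, never exceed 100%.
    floor = read_it_load_fraction()
    new_output = min(1.0, max(floor, current_output + GAIN * error))
    set_cooling_output(new_output)
    return new_output

if __name__ == "__main__":
    output = 0.6
    for _ in range(3):   # in practice this loop runs continuously
        output = control_step(output)
```

The point of the sketch is the structure, not the controller: cooling output follows both the environment (the temperature error) and the computing load (the floor), which is the integration Adams is describing.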

For example, by using simple and cost-effective hot aisle and cold aisle containment systems, progressive data center providers are able to get very close to, or even exceed, the PUEs of providers that require special racks and containment systems. Given the cost of the specialized racks and the small gains in efficiency, the inability to handle non-conforming equipment and the lack of ROI make them not worth the effort.

Of course, that is today. Technologies will change, and there may be more flexible solutions in the future that have an acceptable ROI and still give the flexibility necessary for multi-tenant data center providers.

Rob McClary: There are many methods that can be used to cool data centers. These methods can be dictated by where the data center is located and what sources of cooling and energy are available to it.

Robert McClary, FORTRUST

There will always be opportunity for innovation in cooling, but we need to spend just as much time considering the actual consumption at the IT equipment stack. We should be looking at alternative energy sources along with more efficient hardware. Data center infrastructure that modulates to real-time IT hardware demand is more efficient and needs to become the norm. We should not be using infrastructure distribution and cooling models set up for a worst-case scenario or a perceived load that doesn’t exist, which is wasteful.

PUE will be an ongoing discussion in our industry, but people need to step back and realize that purpose-designed, purpose-built data centers are better than a server closet in an office building. I have yet to see a PUE measured on a server closet, but I’m sure just about any purpose-built data center is better.
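For reference, PUE is simply total facility energy divided by IT equipment energy. The quick sketch below contrasts a purpose-built facility with a hypothetical office server closet using assumed figures, just to illustrate McClary’s point.

```python
# PUE = total facility energy / IT equipment energy.
# The numbers below are assumptions chosen only to illustrate the contrast
# between a purpose-built data center and an office server closet.

def pue(total_facility_kwh: float, it_kwh: float) -> float:
    """Power usage effectiveness for a given period."""
    return total_facility_kwh / it_kwh

purpose_built = pue(total_facility_kwh=14_000_000, it_kwh=10_000_000)  # ~1.4 (assumed)
server_closet = pue(total_facility_kwh=250_000, it_kwh=100_000)        # ~2.5 (assumed)

print(f"Purpose-built data center PUE: {purpose_built:.2f}")
print(f"Office server closet PUE:      {server_closet:.2f}")
```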

About the Author

Rich Miller

I write about the places where the Internet lives, telling the story of data centers and the people who build them. I founded Data Center Knowledge, the data center industry's leading news site. Now I'm exploring the future of cloud computing at Data Center Frontier.
