Why Colocation Data Center Density Matters

Aug. 27, 2021
When it comes to picking the right data center, the importance of density—and specifically power density—is often overlooked. Jerry Blair of DataBank takes a closer look at three critical density factors and highlights a specific density use case to show why colocation density is an important factor to consider.

In this edition of Voices of the Industry, Jerry Blair explores the topic of colocation data center density and how it can affect cost, performance, and availability.

Jerry Blair, Co-Founder and Senior Vice President, DataBank (Source: DataBank)

Balancing Space, Power, and Cooling
When executives are asked what they want to know about their company’s colocation data centers, they focus on whether applications are available, whether they perform as advertised, and what they cost. However, they often don’t ask about another important factor, density, which has a direct impact on all three: cost, performance, and availability.

Density is the amount of power a data center can deliver within a cabinet, matched against the power the application (servers and networking gear) requires per cabinet. The greater the density, the more compute resources are available in each cabinet, and the better your applications can perform.

In this blog, we explain why colocation density should matter to executives and provide an overview of three key factors IT teams can use to present the concept. This can help open the door to discussions about how much density the company requires and which colocation data center to invest in.

As we demonstrate, costs can come down to a delicate balance between space, power, and cooling. While lower density makes sense in some cases, it’s always a good idea to be able to acquire more power capacity to handle a hardware refresh or a spike in compute requirements.

Density Factor #1 – Less Space Lowers Colocation Costs
The first density factor to consider is the space required to hold all the racks your business needs for servers and networking gear. Colocation providers factor how much space your environment consumes into your monthly rate, whether directly, as the square footage of cage space you occupy, or indirectly, folded into the kW rate you pay for power (which includes floor space). Either way, it’s usually to your benefit to use the least amount of space possible by running a higher power density per cabinet, which raises your watts per square foot.

One way to achieve this is to use taller racks. The traditional rack height for years was 42U, but many data center providers now support 48U or even 52U. A 52U rack provides roughly 24% more rack units than a 42U rack, allowing you to dramatically shrink your footprint.

Just think: four 52U racks (208U) hold nearly the same capacity as five 42U racks (210U). In a 10,000 sq. ft. data hall that supports 400 racks, the 10 extra rack units per cabinet add up to room for an additional 4,000 1U servers. Taller 52U cabinets can also reduce the cost of networking gear for the project, since fewer cabinets means fewer top-of-rack switches to purchase.
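For readers who want to check the math, here is a minimal Python sketch using the illustrative figures above (400 racks, 42U vs. 52U cabinets). It is a back-of-the-envelope check, not a capacity-planning tool:

# Back-of-the-envelope check of the 42U vs. 52U rack math above.
# All figures are the illustrative numbers from this article.
old_u, new_u = 42, 52
racks = 400  # racks in the example 10,000 sq. ft. data hall

extra_u_per_rack = new_u - old_u           # 10 extra rack units per cabinet
print(racks * extra_u_per_rack)            # 4000 -> room for 4,000 more 1U servers

# Four 52U racks hold nearly the same capacity as five 42U racks:
print(4 * new_u, "U vs.", 5 * old_u, "U")  # 208 U vs. 210 U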

Density Factor #2 – Space Density and Big Data Drive Higher Power
The drive for better performance, combined with the drop in price for multi-processor blades, has led many IT departments to move to denser server systems. Whether these are 1U servers, blade servers, or even hyperconverged systems, it’s now normal to see rack densities of 10kW (up from the historical 3-5kW per rack).

On top of this, data volumes have grown dramatically, and with them the demand for Big Data analysis. Just think of all the modern services performed on the vast amounts of data that users currently store: contextual searches of their photos, scrubbing of real-time videos for offensive content, analysis of shopping patterns, and targeted marketing, to name a few.

These services and many others all run extremely compute-intensive artificial intelligence, which demands power-hungry hardware. This has driven the adoption of specialized HPC (High Performance Computing) hardware that can push power consumption at the rack level to 20kW, 50kW, or even 100kW and above.

Density Factor #3 – Cooling Required to Offset Density Heat
Packing a lot of power into less space leads to the third density factor—cooling. While just about all data centers can provide the power for high densities, the issue is cooling the generated heat. It doesn’t do any good to provide 20kW of power to a rack if you can cool only 10kW. The same is true if there’s not enough airflow to properly cool the equipment in your rack.

The best way to address these issues is to have a conversation with your provider about what power densities they can cool and whether they have tested to verify those limits. It also helps if they can point to existing customers as examples of successfully cooled high-density environments.

In general, a rack in a standard raised-floor data center with standard server hardware (including blade systems) should support 8kW of active load with no extra cooling measures (a standard hot aisle/cold aisle arrangement). Beyond that, you will often need some form of containment, whether chimney cabinets, hot-aisle containment, or cold-aisle containment.

These containment methods will often support 15-25kW of active load, depending on the data center and the specific server hardware; some combinations can potentially support even higher loads. Going beyond 25kW per rack will almost always require a specialized cooling strategy, such as chilled-water rear-door heat exchangers, liquid immersion, or water/coolant piped directly to the server chips.
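As a rough rule of thumb, those tiers can be captured in a few lines of Python. The thresholds below are the approximate figures from this article, not hard limits; every facility is different, so verify them with your provider:

def cooling_strategy(rack_kw):
    # Thresholds are the approximate figures from this article; actual
    # limits vary by facility, so always confirm them with your provider.
    if rack_kw <= 8:
        return "standard hot aisle / cold aisle, no extra measures"
    if rack_kw <= 25:
        return "containment (chimney cabinets, hot- or cold-aisle containment)"
    return "specialized cooling (rear-door heat exchanger, immersion, direct-to-chip)"

for kw in (5, 17, 34, 100):
    print(kw, "kW/rack ->", cooling_strategy(kw))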

Again, the key is to know your cooling requirements and to have a conversation with your provider about how best to meet them.

Use Case: More Density Reduces Networking Gear Costs by 50%
As an example of balancing density requirements, one of our clients had an HPC environment that required approximately 1,500U of rack space. They considered a footprint of 60 racks at 17kW per rack (effectively filling only half of each rack) as well as one of 30 racks at 34kW per rack.

The 17kW design was their default, as that is the maximum many data centers can support. However, it would have roughly doubled the cost of their expensive networking gear, which ran approximately $30K per rack. By supporting the higher-density 34kW design, we cut the number of required racks in half, saving the client roughly $900K (30 fewer racks at about $30K each) in network hardware costs.
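Here is a minimal sketch of that arithmetic in Python. The rack counts and the roughly $30K-per-rack networking cost are the figures quoted above; treat them as illustrative, not as pricing guidance:

# Use-case economics from the example above; figures are illustrative.
network_cost_per_rack = 30_000  # approximate networking gear cost per rack

designs = {"17 kW/rack (half-filled)": 60, "34 kW/rack (fully filled)": 30}
for name, racks in designs.items():
    print(f"{name}: {racks} racks, ~${racks * network_cost_per_rack:,} in networking gear")

savings = (60 - 30) * network_cost_per_rack
print(f"Savings from the denser design: ~${savings:,}")  # ~$900,000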

Help to Assess Complex Options
As you assess the ideal colocation data center density for your enterprise applications, your options can be complex. If you work with one of the leading colocation providers, they can help you study the cost options and find the optimal balance among space, power, and cooling.

To determine how well a colocation provider solves this challenge, ask about the power densities they can support per rack and the methods they use to do so. Additionally, check whether they have high-density customers today, and request a description of what those customers are doing, how the provider met their density requests, and what it has learned from supporting these high-density environments.

The colocation density your company ultimately needs will likely be driven by your application requirements, along with the upfront installation costs weighed against the ongoing power and cooling costs. Either way, the ROI will be more favorable if you form a long-term partnership with your colocation provider. You may not need high-density cabinets today, but it is critically important to know that they are available and can scale quickly if business requirements change tomorrow.

About the Author

As co-founder, Jerry Blair was instrumental in DataBank’s inception in 2005. In his role, Blair is charged with executing the company’s sales strategy. With a successful track record spanning more than 20 years in senior sales management, Blair’s experience and proven ability to implement results-driven direct and channel-focused sales programs are a continued asset to the company. Prior to DataBank, Blair was Vice President of Sales for Switch and Data and LayerOne. He has also served as General Manager of Sales for Lucent Technologies and has held sales management positions with various industry leaders including ICG Communications, Nortel Communications, and Wellfleet Communications.

