Follow the customer’s journey. For most operators, especially over a multi-year transition, you must be able to accommodate a wide range of densities within the same facility. It’s about balance and getting a return out of your portfolio: striving for efficiency with technology that will benefit the company over time. So, the next question: what kind of cooling is needed to meet your customer’s journey?
While new facilities may get a lot of airtime in the news, not everyone is trying to build massive data centers. Many are trying to fill the spaces they already have. Now is the time for the data center community to ask, honestly and reflectively, what this transition looks like for them. Are they trying to improve operations or manage efficiency, and how can the transition go more smoothly?
It’s understandable to want to design for 12–15 kW per rack so you are prepared for the foreseeable future, but the reality for many operators is still in the 6–12 kW range. So the concern becomes one of reconciling immediate needs with those of the future.
Scalability and flexibility go hand in hand. It’s important to achieve elasticity to support the next generation of customer types. As an example, the question being asked in the market today is: how do you support hot spots efficiently without burning square footage? Since you are planning on a five- or even ten-year horizon in some cases, space design needs to remain flexible. Do you keep the design adaptable to accommodate air-side distribution or a flooded room, or the possible need to return to chilled water for chip- or cabinet-level cooling to support higher densities?
When we’re discussing cooling infrastructure and the need to scale over time, it’s important to understand that we’re talking about designing for three to six times the density we’ve designed for up until this point. Because computer rooms and data centers consume large amounts of power, computer room air conditioner (CRAC) manufacturers, like Data Aire, have dedicated their engineering teams to research, creating scalable, flexible and energy-efficient cooling solutions that meet operators’ density needs.
It boils down to this: to meet your density outlook and stay flexible, what kind of precision cooling system can support your need to maximize server space, minimize low-pressure areas, and reduce costs and infrastructure requirements? You should be encouraged knowing that this is achievable with traditional approaches, with no need to reinvent the wheel, or in this case, your environmental control system. There are a variety of solutions to employ, whether DX or chilled water, in many different form factors, from one ton to many tons.
So, whether you’re considering chilled water for some facilities or DX (refrigerant-based) solutions for others, both can achieve scale within traditional perimeter cooling methodologies, without completely rethinking the way you manage your data center and the load coming from the servers. Chilled water may be an option because those systems are getting much larger at the cooling-unit level, satisfying the density increase simply through higher CFM per ton. Multi-fan arrays are very scalable: you can modulate delivery from 25 to 100 percent, whether you are scaling up over the life of the buildout or scaling back with the seasonality of the IT consumer’s business.
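The 25-to-100-percent modulation described above can be sketched as a simple clamp on the demand signal. The design airflow and turndown floor here are hypothetical placeholders, not vendor specifications:

```python
# Sketch of multi-fan-array turndown: delivered airflow tracks the demand
# signal, clamped between a 25% floor and 100% of design CFM.
# DESIGN_CFM and MIN_TURNDOWN are illustrative assumptions.
DESIGN_CFM = 20_000   # hypothetical design airflow for one unit
MIN_TURNDOWN = 0.25   # modulation floor of ~25% of capacity

def delivered_cfm(demand_fraction: float) -> float:
    """Clamp the demand signal to the unit's 25-100% modulation range."""
    fraction = min(1.0, max(MIN_TURNDOWN, demand_fraction))
    return DESIGN_CFM * fraction

print(delivered_cfm(0.10))  # below the floor, clamped: 5000.0
print(delivered_cfm(0.60))  # within range: 12000.0
print(delivered_cfm(1.50))  # capped at design: 20000.0
```

This is why the same unit can ride down with seasonal load and back up over the life of a buildout: the hardware stays put and only the commanded fraction moves.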
DX solutions span a good-better-best spectrum. Good did the job back in the two-to-four-kilowatt-per-rack days. Today, however, variable-speed technologies are well established, and they can scale from 25 to 100 percent just like chilled water.
At Data Aire, our engineers are seeing more dual cooling systems designed at the facility level. Dual cooling affords infrastructure redundancy, which is of course important in the data center world, and it also introduces the opportunity for economization.
Density, efficiency and economy of scale: the entire concept of doing more with less, filling the buckets while still needing the environment and ecosystem to scale, plays an important role in the transition operators are facing. Greater airflow delivery per ton of cooling is achievable without dramatically altering the way you operate your data center, which is essential because every operator is in transition mode: transitioning their IT architecture, their power infrastructure, and their cooling infrastructure. An efficient environment adapts to IT loads. The design horizon should keep scalable, efficient cooling infrastructure in mind to help future-proof for both known and unplanned density increases.