While big facilities keep getting bigger, other architectures are trying to shrink their footprints, and as a result densities are increasing. Part of the conversation is shifting to density efficiency: supporting an economy of scale, but doing so in a much more sustainable manner.
Follow the customer’s journey. For most, especially over a multi-year transition, you must be able to accommodate a wide range of densities within the same facility. It’s about balance and getting a return out of your portfolio, striving for efficiency with technology that will benefit the company over time. The next question becomes: what kind of cooling is needed to support your customer’s journey?
While new facilities may get a lot of airtime in the news, not everyone is trying to build massive data centers. Many are trying to fill the spaces they already have. Now is the time for the data center community to ask, reflectively, what this current transition looks like for them. Are they trying to improve operations or manage efficiency, and how can this transition go more smoothly?
It’s understandable to want to design for 12 to 15 kW per rack so you are prepared for the foreseeable future, but the reality for many operators is still in that 6 to 12 kW range. So the concern becomes one of reconciling immediate needs with those of the future.
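As a rough illustration of that gap, the sketch below compares room heat load and cooling capacity at today's typical densities versus the upper end of the planning range. The rack count, density values, and kW-to-tons conversion are illustrative assumptions, not sizing guidance.

```python
# Rough sizing sketch: heat load and cooling capacity at different rack densities.
# All figures are illustrative assumptions, not design guidance.

KW_PER_TON = 3.517  # 1 ton of refrigeration removes roughly 3.517 kW of heat

def room_load(racks: int, kw_per_rack: float) -> tuple[float, float]:
    """Return (total IT load in kW, approximate cooling required in tons)."""
    load_kw = racks * kw_per_rack
    return load_kw, load_kw / KW_PER_TON

if __name__ == "__main__":
    racks = 200  # hypothetical room size
    for density in (6, 8, 10, 12, 15):  # kW per rack: today's range vs. planning target
        kw, tons = room_load(racks, density)
        print(f"{density:>2} kW/rack -> {kw:>5.0f} kW IT load, ~{tons:>4.0f} tons of cooling")
```

Under these assumptions, moving from 8 kW to 15 kW per rack nearly doubles the cooling plant the same floor has to support, which is exactly the reconciliation problem operators are weighing.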
Scalability and flexibility go hand in hand, and it’s important to achieve the elasticity to support the next generation of customer types. The question being asked in the market today, for example, is how to support individual hot spots efficiently without burning square footage. Because you are planning on a five-year, or in some cases even a ten-year, horizon, the space design needs to remain flexible. Do you keep the design adaptable to accommodate air-side distribution or a flooded room, or the possibility of going back to chilled water at the chip or cabinet level to support a higher density?
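One way to frame the hot-spot question is as a budgeting exercise: under a fixed cooling capacity, every high-density cabinet displaces several standard ones. The sketch below uses hypothetical numbers (room cooling budget, standard and hot-spot rack densities are all assumptions) to show that trade-off.

```python
# Hot-spot budgeting sketch: how many standard racks a fixed cooling budget
# still supports as high-density cabinets are added. Numbers are assumptions.

ROOM_COOLING_KW = 1600.0   # hypothetical cooling budget for the room
STANDARD_KW = 8.0          # typical rack density today
HOT_SPOT_KW = 30.0         # hypothetical high-density (e.g., liquid-cooled) cabinet

def remaining_standard_racks(hot_spot_racks: int) -> int:
    """Standard racks the room can still cool after reserving capacity for hot spots."""
    remaining_kw = ROOM_COOLING_KW - hot_spot_racks * HOT_SPOT_KW
    return max(0, int(remaining_kw // STANDARD_KW))

for hot in (0, 5, 10, 20):
    print(f"{hot:>2} hot-spot racks -> room still supports "
          f"{remaining_standard_racks(hot)} standard racks")
```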
When we’re discussing cooling infrastructure and the need to scale over time, it’s important to understand that we’re talking about designing for three to six times the density we’ve been designing for up to this point. Since computer rooms and data centers consume large amounts of power, computer room air conditioner (CRAC) manufacturers, like Data Aire, have dedicated their engineering teams to researching and creating the most scalable, flexible, and energy-efficient cooling solutions to meet those density needs.