In this edition of Voices of the Industry, Kris Holla, Group Vice President of Channel Sales at Nortek Data Center Cooling, discusses how cooling manufacturers are supporting the pre-fabricated modular data center trend with their own modular skid platform designs.
The pre-fabricated module (PFM) data center is a growing trend, but many designers defeat the modular concept’s benefits of lower total cost of ownership (TCO) and quick installation time by specifying piecemeal cooling plants and electrical infrastructure.
The growing PFM data center industry is expected to become a $4.6 billion market by 2027. PFM benefits include roughly 30 percent faster construction, roughly 30 percent lower cost, a compact footprint, easy rigging, a customizable and flexible design, and lower PUE, among other factors.
One reason for the growth is uncertainty about the IT capacity data centers will need next year, let alone 10 or 20 years in the future. Daily data creation is already measured in quintillions of bytes and is expected to keep growing exponentially to scales far beyond human comprehension. The superior scalability of modular design over stick-built data centers therefore offers a future-proofing benefit.
Unpredictable, exponential growth and the heat loads that accompany it are a major conundrum for data center operators. Modular data centers, however, offer cost-effective, expedited scalability: when more IT capacity is needed, another modular building can simply be added.
Cooling manufacturers are accommodating this trend with their own modular skid platform designs. Instead of cooling systems stick-built onsite from components by various manufacturers, cooling plants are now available as modules that are factory-assembled, tested and plug-and-play ready on skids before delivery to a PFM data center for quick and easy integration. They fit together in a mix-and-match fashion akin to Legos. Like the PFM data center structure, a modular cooling plant can save upwards of 50 percent in completion time and installation labor. It can also eliminate the job site incompatibility, system unreliability and integration issues of piecemeal components.
Modular cooling plants come in various capacities, starting at 6-10 MW for a single modular pod and scaling to 10, 15 or 20 times that power, so they can accommodate virtually any heat load.
These modules include all liquid cooling necessities, such as the indirect evaporative heat exchanger, coils, pumps, controls and other components. They're plug-and-play units that include everything except the piping loop infrastructure, which delivers chilled water either to computer room air handlers (CRAHs) or coil wall terminal units that supply cool air to the data hall (See Illustration 1), or to IT rack-based terminal units such as cold plate chip coolers and server rack rear door heat exchangers.
When another modular building is added to an expanding site, any number of additional cooling modules can be added to operate in tandem with existing modules or autonomously.
This concept is adaptable to two of the three most common modular data center concepts:
- Modular buildings that are manufactured, factory-assembled and then delivered as complete self-contained structures to the site;
- Modular components that are designed to be manufactured on skids and shipped as multiple sections that are quickly assembled on site.
The third type of modular data center, the ISO container, is not suitable for modular cooling plants because of its small 20 x 8-foot or 40 x 8-foot footprint and roughly 500 kW cooling capacity.
Manufacturers are now accommodating the first and second types of PFM data center by eliminating the cabinet (See Illustration 2) and separating the unit into three skids that can be rigged into any position inside PFM data centers, multi-level facilities of two to 18 stories, or even penthouse locations. Indoor installations are favored by facilities located in hurricane or tornado zones. These individually skid-mounted modules are designed for easy onsite rigging and gang-connecting. They can also be located remotely from each other and still operate as one large homogeneous cooling plant.
The primary skid is a liquid-to-air indirect evaporative module. It includes the StatePoint Membrane Exchanger (SPEX) (Illustration 3), along with all the necessary factory-installed and tested piping, pumps, dampers and other ancillary equipment. The SPEX is one reason this technology is recording a partial PUE (pPUE) of 1.06 (1.025 if fan coil wall energy is excluded) and a WUE of 0.09, depending on the local climate. This indirect evaporative cooling methodology is the only type suitable for skid packaging in PFM data center penthouses, and it is flexible enough that the only limit on expedited construction is the designer's imagination.
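To make the efficiency figures above concrete, the sketch below shows how pPUE and WUE are typically computed. The power and water figures in the example are hypothetical, chosen only to illustrate how the cited 1.06 and 0.09 values would arise; they are not vendor data.

```python
# Illustrative calculation of the two cooling efficiency metrics
# mentioned above. All input figures are hypothetical examples.

def ppue(it_power_kw: float, cooling_power_kw: float) -> float:
    """Partial PUE attributable to cooling:
    (IT power + cooling power) / IT power."""
    return (it_power_kw + cooling_power_kw) / it_power_kw

def wue(water_liters: float, it_energy_kwh: float) -> float:
    """Water Usage Effectiveness: liters of water consumed
    per kWh of IT energy."""
    return water_liters / it_energy_kwh

# A hypothetical 6 MW pod whose cooling plant draws 360 kW:
print(round(ppue(6000, 360), 3))   # 1.06
# 540 liters consumed over one hour at a 6 MW IT load:
print(round(wue(540, 6000), 3))    # 0.09
```

A lower pPUE means less overhead energy per unit of IT load, which is why excluding the fan coil wall's energy drops the figure from 1.06 to 1.025.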
The second module is the recovery coil/scavenger fan on a skid, which works in harmony with the SPEX Skid (See Illustration 4).
The third module includes a coil and fan array combined into a coil wall that supplies air to the data hall. The coil wall provides airside cooling; however, it can be used alongside, or substituted with, direct-to-rack liquid cooling such as rear door heat exchangers and cold plate chip coolers.
All the modules are virtually plug-and-play because they feature engineer-specified, factory-installed, foolproof utility connections and are easily ganged together by the jobsite installation contractor. Some manufacturers offer a complete cooling system comprising the cooling plant and the terminal distribution units, providing system compatibility and single-source responsibility.
PFM facilities are a rapidly growing segment of the data center industry, the backbone of today's digital economy. The sector's TCO can be greatly reduced if operators also employ modular cooling concepts, such as factory-integrated modular units on skids for cooling data halls, just as they have already adopted modular electrical plants and other modular infrastructure components.