Last week we launched our article series on future-proofing high-density AI data centers. This week we’ll explore considerations for engineering the next-generation data center.
Structural innovations for AI/ML workloads
Supporting AI/ML workloads requires a fundamental rethinking of traditional facility structures to accommodate new technical demands. Reinforced building shells are designed to support the increasing complexity of infrastructure, and higher ceilings allow the installation of additional mechanical, electrical, and liquid cooling apparatus. This additional vertical clearance is essential for dense piping for liquid cooling systems, direct-to-chip delivery, and extensive network cabling.
The structural base of these facilities is also specifically engineered to bear significant weight. Heavy-duty concrete slabs replace conventional flooring to support the increased load from liquid cooling equipment, including immersion cooling tanks, which are substantially heavier than typical air-cooled hardware. These enhancements future-proof the facility to accommodate emerging cooling technologies that are anticipated to become more prevalent as AI workloads demand ever-higher densities and cooling efficiencies.
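To make the load question concrete, here is a rough, back-of-the-envelope comparison of static floor loading for an air-cooled rack versus a filled immersion tank; the masses and footprints below are illustrative assumptions rather than vendor specifications.

```python
# Rough floor-loading comparison: air-cooled rack vs. immersion tank.
# All masses and footprints are illustrative assumptions only.

def floor_load_kpa(mass_kg: float, footprint_m2: float) -> float:
    """Static load in kilopascals for a given mass spread over a footprint."""
    g = 9.81  # gravitational acceleration, m/s^2
    return mass_kg * g / footprint_m2 / 1000.0

air_cooled_rack = floor_load_kpa(mass_kg=1_400, footprint_m2=0.6 * 1.2)  # populated rack
immersion_tank = floor_load_kpa(mass_kg=3_600, footprint_m2=0.8 * 2.0)   # tank + coolant + IT

print(f"Air-cooled rack: {air_cooled_rack:.1f} kPa")
print(f"Immersion tank:  {immersion_tank:.1f} kPa")
```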
Power distribution implications
The electrical infrastructure of next-generation data centers must adopt a modular, scalable architecture to efficiently meet varying power and thermal management needs. Modular generators, uninterruptible power supply (UPS) systems, power distribution units (PDUs), and battery energy storage systems (BESS) are designed to scale in line with growing rack densities and fluctuating load profiles. This scalability allows data centers to respond dynamically to increases in power demand without compromising redundancy or reliability.
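As a simple illustration of how modular power blocks can track rising rack density, the sketch below estimates how many hypothetical UPS modules a 200-rack hall would need at different average densities; the module rating, rack count, and density figures are assumptions chosen for illustration.

```python
import math

# Illustrative sizing of modular power blocks against rising rack density.
# Module capacity and rack count are assumed values, not product figures.

UPS_MODULE_KW = 1_250   # capacity of one hypothetical UPS module
RACKS_PER_HALL = 200

def ups_modules_needed(avg_rack_kw: float, redundancy: str = "N+1") -> int:
    """Number of UPS modules covering the hall's critical load, plus redundancy."""
    critical_load_kw = avg_rack_kw * RACKS_PER_HALL
    base = math.ceil(critical_load_kw / UPS_MODULE_KW)
    return base + 1 if redundancy == "N+1" else base

for density in (10, 40, 80, 130):  # kW per rack, from air-cooled to dense AI training
    print(f"{density:>3} kW/rack -> {ups_modules_needed(density)} UPS modules (N+1)")
```

The same incremental logic applies to generators, PDUs, and BESS capacity: modules are added in phases as load profiles grow, rather than overbuilding on day one.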
Balancing cooling technologies: Air, liquid, immersion
Cooling technology presents one of the most significant challenges as densities increase. Modern facilities often implement a hybrid cooling approach, balancing air- and liquid-based methods to optimize performance and adaptability. Typical near-term configurations split roughly 70% liquid cooling to 30% air cooling, with evolving designs targeting up to 95% liquid cooling to accommodate growing heat densities.
Chilled water plants are engineered with significant overcapacity and modular components to support diverse cooling requirements. Oversizing the chilled water piping and cooling distribution units (CDUs) anticipates future increases in heat loads. These plants are designed for flexible operation, accommodating hybrid strategies that mix air and liquid cooling and enabling a seamless transition toward a predominantly liquid-cooled environment as densities increase.
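To show why piping and CDUs are deliberately oversized, the sketch below applies the basic relationship Q = ṁ·cp·ΔT to estimate the chilled-water flow the liquid loop needs at a 70% versus 95% liquid-cooled share; the hall load and temperature rise are assumed values used purely for illustration.

```python
# Illustrative chilled-water flow sizing for the liquid cooling loop: Q = m_dot * cp * dT.
# Hall load, liquid/air split, and temperature rise are assumptions for illustration.

CP_WATER = 4.186   # specific heat of water, kJ/(kg*K)
DELTA_T = 10.0     # assumed supply-to-return temperature rise, K

def flow_lps(heat_kw: float) -> float:
    """Approximate chilled-water flow in litres per second for a given heat load."""
    kg_per_s = heat_kw / (CP_WATER * DELTA_T)
    return kg_per_s  # roughly 1 litre per kilogram of water

hall_it_load_kw = 10_000           # assumed data hall IT load
for liquid_share in (0.70, 0.95):  # near-term split vs. future target
    liquid_kw = hall_it_load_kw * liquid_share
    print(f"{liquid_share:.0%} liquid-cooled -> {liquid_kw:,.0f} kW "
          f"-> ~{flow_lps(liquid_kw):.0f} L/s chilled water")
```

Moving from a 70% to a 95% liquid share pushes substantially more flow through the same loop, which is why headroom is designed into the piping and CDUs up front.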
Favored liquid cooling methodologies include single-phase and two-phase direct-to-chip cooling, where chilled liquid is delivered directly to server processors, and rear-door heat exchangers that remove exhaust heat from racks. Both approaches significantly improve cooling efficiency compared to traditional air systems and repay the capital investment by enabling higher-density rack configurations.
Other cooling technologies, such as immersion cooling, remain under industry scrutiny and evaluation. While immersion offers theoretical benefits for thermal management in the densest deployments, market adoption is still nascent. Regardless of the technology, data center infrastructure needs to be designed to remain compatible with, and adaptable to, whichever cooling technologies prevail, allowing for relatively straightforward integration without wholesale facility restructuring.
Critical to operational integrity are comprehensive leak detection systems, designed to monitor liquid cooling circuits and prevent damage from potential leaks. Thermal energy storage solutions are also integrated to optimize cooling efficiency, store excess thermal capacity, and manage peak loads. The entire infrastructure supports scalable IT deployments, allowing for incremental increases in compute capacity and cooling demand with minimal facility disruptions.
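Below is a minimal sketch of the kind of check a leak detection system might apply, assuming hypothetical pressure, flow, and moisture readings from a coolant loop; the sensor names and thresholds are invented for illustration, and a real deployment would integrate with the facility's monitoring and management systems rather than run as a standalone script.

```python
from dataclasses import dataclass

# Minimal leak-detection sketch for a liquid cooling loop.
# Sensor fields and thresholds are hypothetical, for illustration only.

@dataclass
class LoopReading:
    loop_id: str
    pressure_kpa: float
    flow_lpm: float
    moisture_alarm: bool  # spot or rope sensor under piping or in the rack

def leak_suspected(baseline: LoopReading, current: LoopReading,
                   pressure_drop_pct: float = 5.0,
                   flow_drop_pct: float = 5.0) -> bool:
    """Flag a loop when moisture is sensed or pressure/flow sag below baseline."""
    pressure_sag = (baseline.pressure_kpa - current.pressure_kpa) / baseline.pressure_kpa * 100
    flow_sag = (baseline.flow_lpm - current.flow_lpm) / baseline.flow_lpm * 100
    return current.moisture_alarm or pressure_sag > pressure_drop_pct or flow_sag > flow_drop_pct

baseline = LoopReading("CDU-01", pressure_kpa=300.0, flow_lpm=450.0, moisture_alarm=False)
current = LoopReading("CDU-01", pressure_kpa=278.0, flow_lpm=445.0, moisture_alarm=False)
print(leak_suspected(baseline, current))  # True: pressure has sagged ~7% below baseline
```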
Whitespace and fit-out considerations
Beyond structural and mechanical systems, careful attention to whitespace and fit-out is vital for supporting the complex interconnectivity required by AI workloads. Meet-me rooms are enlarged and optimized to handle increased fiber volume and high-speed interconnects, which are essential for distributed AI processing.
Network cabling strategies have evolved to ensure low-latency, high-bandwidth connections among GPUs, CPUs, storage appliances, and overlay networking equipment. Facility logistics, including shipping and receiving areas, are designed to accommodate the large, heavy, and sensitive equipment characteristic of advanced AI deployments.
Together, these structural, power, cooling, and networking innovations deliver a robust foundation to support the scale, density, and performance that next-generation AI and HPC workloads demand.
Reliability and scalability implications
Scalability is embedded across all levels of design (power, cooling, and space configuration) to allow incremental expansion without disruption. Modular subsystems can be upgraded or added in phases, matching evolving workload requirements and providing a pathway to sustainable growth. This holistic technical differentiation supports long-term operational resilience essential for AI’s complex and demanding environment.
Aligning facility designs with customer and hyperscale requirements
Data center infrastructure must be highly adaptive to align with diverse customer demands, ranging from enterprise-scale AI deployments to massive hyperscale training environments. This requires flexible frameworks capable of supporting both mixed and dedicated workloads within the same data hall while accommodating a variety of server densities and cooling regimes.
By developing standardized yet configurable design templates grounded in real customer use cases, facilities can streamline deployment cycles and improve operational predictability. These templates reflect the evolving baseline of requirements set by the hyperscale community, which often pioneers novel infrastructure challenges and solutions, driving broader industry best practices.
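As a loose illustration of what a standardized yet configurable template can capture, the sketch below models one as a small data structure with a handful of tunable parameters; the field names and default values are hypothetical and do not represent an actual reference design.

```python
from dataclasses import dataclass

# Hypothetical representation of a configurable data hall design template.
# Field names and defaults are illustrative, not an actual reference design.

@dataclass
class HallDesignTemplate:
    name: str
    design_rack_kw: float        # target rack density
    liquid_cooling_share: float  # fraction of heat rejected to liquid
    power_redundancy: str        # e.g. "N+1", "2N"
    rows: int
    racks_per_row: int

    def hall_it_capacity_mw(self) -> float:
        return self.design_rack_kw * self.rows * self.racks_per_row / 1000.0

# Two variants of the same baseline, tuned for different customer profiles.
enterprise_ai = HallDesignTemplate("ENT-AI", 40, 0.70, "N+1", rows=10, racks_per_row=20)
hyperscale_training = HallDesignTemplate("HS-TRAIN", 120, 0.95, "2N", rows=10, racks_per_row=20)

print(enterprise_ai.hall_it_capacity_mw(), "MW")        # 8.0 MW
print(hyperscale_training.hall_it_capacity_mw(), "MW")  # 24.0 MW
```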
Through this ecosystem-centric and forward-looking approach, data centers can overcome the technical and operational hurdles intrinsic to high-density AI deployments for hyperscale and enterprise customers.
Download the full report, The Challenge: Future Proofing High-Density AI Data Centers, featuring EdgeConneX, to learn more. In our next article, we’ll discuss how modular reference designs deliver speed and adaptability.



