How the AI Revolution Is Reinventing Data Center Cooling

John Palomba of ebm-papst explains how data center operators can bridge the gap between current infrastructure and next-generation computing needs with scalable, modular solutions like RDHx systems.
Oct. 1, 2025

From the early days of cloud adoption to today’s AI-powered world, data centers have evolved to handle more computing power in smaller spaces. Now, with the rapid adoption of artificial intelligence and high-performance computing (HPC), that evolution has reached a tipping point.

Applications like large language models, generative AI, and advanced analytics are no longer fringe workloads; they are rapidly becoming the norm. But with their vast processing requirements comes a major operational challenge: they generate orders of magnitude more heat than conventional workloads. Therefore, cooling is now a front-and-center operational priority. For operators, the question isn’t whether cooling must evolve, but how fast it can be reimagined to keep pace with the next generation of computing demands.

At ebm-papst, we’ve been working on that “how” for decades. Today, rear door heat exchanger (RDHx) and liquid-to-air (L2A) fan technologies are helping the industry balance performance, scalability, and sustainability without forcing operators into costly, high-risk overhauls.

Why Cooling Is Becoming More Complex

Recent research shows significant increases in rack power density over the last three years, reported by:

  • 89% of colocation/data center providers
  • 81% of cloud/hosting/SaaS providers
  • 80% of enterprise data center operators

This growth is largely fueled by AI workloads. For context, a single ChatGPT query consumes nearly 10x the electricity of a traditional Google search. Scale that across millions (or billions) of queries, and the impact is staggering. Industry projections estimate that by 2030:

  • Data center power demand will rise 160% due to AI adoption.
  • Cooling-related electricity consumption will triple compared to 2023 levels.

That means operators will be forced to move much more heat without consuming proportionally more energy.
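The scale of that shift is easiest to see with arithmetic. The sketch below uses the article's "nearly 10x" multiplier; the per-search energy figure and the daily query volume are illustrative assumptions, not measurements:

```python
# Back-of-envelope: incremental energy demand of AI queries vs. traditional search.
# SEARCH_WH and QUERIES_PER_DAY are illustrative assumptions; the 10x multiplier
# is the comparison cited in the article.
SEARCH_WH = 0.3          # assumed energy per traditional search (Wh)
AI_MULTIPLIER = 10       # "nearly 10x" per AI query
QUERIES_PER_DAY = 1e9    # hypothetical daily query volume

ai_wh = SEARCH_WH * AI_MULTIPLIER
extra_mwh_per_day = (ai_wh - SEARCH_WH) * QUERIES_PER_DAY / 1e6  # Wh -> MWh

print(f"Energy per AI query: {ai_wh:.1f} Wh")
print(f"Extra demand at 1B queries/day: {extra_mwh_per_day:,.0f} MWh/day")
```

Even under these rough assumptions, a billion AI queries a day adds thousands of megawatt-hours of daily demand, nearly all of which ultimately leaves the rack as heat.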

Liquid Cooling and Hybrid Cooling

There’s no question that liquid cooling, whether direct-to-chip, immersion, or liquid-to-air, is playing an increasingly important role in modern thermal management. These systems can handle extremely high heat loads with minimal thermal resistance.

Hybrid cooling architectures are also gaining momentum. By pairing targeted liquid cooling for ultra-high-density racks with advanced air movement solutions like rear door heat exchangers, operators can increase total cooling capacity, extend the lifespan of existing infrastructure, and manage costs more effectively.

RDHx: Cooling at the Rack Level

Rear door heat exchangers offer a particularly attractive solution for AI-ready facilities. By removing heat at the rack, before it enters the white space, RDHx systems dramatically reduce the load on room-level cooling and improve energy efficiency.
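How much air does that rack-level approach actually have to move? A first-order sensible-heat calculation gives a feel for it; the 40 kW rack load and 15 K air-side temperature rise below are illustrative assumptions, not RDHx product specifications:

```python
# Sensible-heat sizing sketch: airflow a rear-door coil's fans must move
# to absorb a given rack load at a given air-side temperature rise.
RHO_AIR = 1.2        # air density, kg/m^3 (near sea level, ~20 C)
CP_AIR = 1005.0      # specific heat of air, J/(kg*K)

def required_airflow_m3s(heat_w: float, delta_t_k: float) -> float:
    """Volumetric airflow (m^3/s) needed to carry heat_w watts at delta_t_k rise."""
    return heat_w / (RHO_AIR * CP_AIR * delta_t_k)

flow = required_airflow_m3s(40_000, 15)   # hypothetical 40 kW rack, 15 K rise
print(f"{flow:.2f} m^3/s  (~{flow * 2118.88:.0f} CFM)")
```

The takeaway: a single high-density AI rack can demand thousands of CFM through one door, which is why fan airflow and static efficiency, not just coil capacity, drive RDHx performance.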

RDHx-optimized fans are engineered for maximum airflow, high static efficiency, and compact design. Check out some of the benefits:

  • Axials: For in-row cooling and RDHx applications
    • Designed to deliver high airflow at high efficiency
    • Compact, high-power design—up to 8,000 rpm
    • Intelligent monitoring for increased operational reliability and service life
    • High torque for smooth startup even with high back pressure
  • Radials & Centrifugals: For RDHx and L2A systems
    • Optimized for low noise without sacrificing performance
    • Modular design for easy integration and scalability
    • Proven durability in demanding liquid cooling environments
    • High efficiency to minimize operating costs

Upgrading with Scalability and Reliability

One of the biggest barriers to upgrading data center cooling is complexity. Systems designed for plug-and-play integration allow operators to retrofit existing facilities or expand capacity with minimal disruption. Fan systems and modules that can be scaled up or down based on rack density, workload demands, and growth projections help reduce project scope and complexity. This allows for seamless integration into both new and legacy environments, protecting your infrastructure investment while preparing for future AI-driven workloads.

In mission-critical environments like data centers, downtime isn’t an option. Long service life and consistent performance, even in the high-back-pressure conditions common to RDHx and liquid cooling systems, are no longer “nice to haves”; they’re essential. Intelligent monitoring capabilities further enhance reliability by enabling predictive maintenance and real-time performance tracking.

Preparing for the AI-Driven Future

AI’s appetite for power, and the heat that comes with it, will only grow in the coming years. Facilities that plan for high-density, high-heat workloads today will be better positioned to serve tomorrow’s demands without costly overhauls or downtime.

With scalable, modular solutions like RDHx systems, operators can bridge the gap between current infrastructure and next-generation computing needs. We’re proud to play our part in helping the data center industry meet this moment by developing technologies that keep pace with AI’s demands while reducing energy consumption and carbon impact.

About the Author

John Palomba

John Palomba is Director VAC for ebm-papst Inc., the subsidiary representing North and South America within the globally renowned ebm-papst Group, the world’s leading manufacturer of fans and motors. ebm-papst Inc. provides air movement solutions for the data center market with an array of highly efficient products.
