From Row-Level CDUs to Facility-Scale Cooling: DCX Ramps Liquid Cooling for the AI Factory Era

As AI rack densities surge, operators are moving from row-level CDUs to facility-scale liquid cooling. DCX’s new megawatt-class coolant distribution platforms enable warm-water, chillerless cooling architectures designed to support next-generation AI data centers at hyperscale.
Feb. 5, 2026
8 min read

Key Highlights

  • Traditional cooling architectures struggle to support the increasing power densities of modern AI clusters, necessitating a shift to centralized, facility-scale solutions.
  • DCX's Facility Distribution Units (FDUs) replace multiple row-level CDUs, reducing hardware clutter and operational complexity while enabling flexible rack placement.
  • The 8 MW FDU V2AT2 supports 45°C warm-water cooling, allowing many facilities to forgo traditional chillers in favor of simpler heat rejection methods such as dry coolers.
  • Liquid cooling is transitioning from an efficiency upgrade to a fundamental system architecture component, supporting hyperscale AI data centers.
  • Industry focus is shifting toward infrastructure readiness and scalable cooling solutions to meet the demands of AI-driven data center growth.

The data center industry has now crossed a threshold where incremental improvements in cooling architecture are no longer enough.

AI workloads have pushed rack densities beyond what legacy mechanical systems were ever designed to handle. What began as targeted deployments of liquid cooling for specialized clusters has rapidly evolved into a wholesale rethinking of how heat is removed from modern facilities.

In this environment, liquid cooling is no longer just an efficiency upgrade. It is becoming foundational infrastructure.

And increasingly, the question operators are asking is not simply “How do we cool racks?” but “How do we architect cooling at data hall and facility scale?”

That shift is precisely where DCX Liquid Cooling Systems has focused its recent product and deployment momentum.

Across a series of announcements spanning late 2025 and early 2026, DCX has moved from delivering high-capacity coolant distribution units (CDUs) to effectively redefining the scale and topology of liquid cooling systems through its Facility Distribution Unit (FDU) architecture.

The company’s latest step: an 8-megawatt-class coolant distribution platform optimized for NVIDIA’s next-generation Vera Rubin AI deployments.

AI Density Is Breaking Legacy Cooling Topologies

The challenge is straightforward, even if the solution is not.

AI clusters are compressing unprecedented compute into single racks. Power densities that once seemed extreme, in the neighborhood of 30 to 40 kW per rack, are now routine. Modern AI training clusters regularly operate in the 60–120 kW range, with early deployments already pushing beyond 200 kW.

Roadmaps from GPU vendors suggest that by the latter part of the decade, individual racks may demand several hundred kilowatts of cooling capacity.

Traditional cooling architectures struggle to keep pace.

Conventional liquid-cooled environments rely on multiple in-row CDUs positioned within white space, each serving a limited group of racks. As deployments scale into multi-megawatt halls, operators end up installing dozens of distributed units, each requiring floor space, maintenance, controls integration, and redundancy planning.

The result is operational complexity, mechanical sprawl, and rising cost, precisely when operators need simplicity and speed.
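
To see why the unit count balloons, consider a rough back-of-envelope comparison, sketched below in Python. Every number in it is an illustrative assumption (hall load, rack density, per-CDU capacity) rather than a figure from DCX or any vendor datasheet; the only non-assumed value is the 8.15 MW capacity DCX cites for its facility-scale unit, discussed later in this article.

    # Back-of-envelope comparison of cooling-unit counts for one AI data hall.
    # Every figure is an illustrative assumption, not a vendor specification.
    import math

    hall_it_load_mw = 8.0      # assumed total IT load of the hall
    rack_density_kw = 120.0    # assumed per-rack density for AI training
    inrow_cdu_mw = 0.5         # assumed capacity of a typical in-row CDU
    facility_unit_mw = 8.15    # capacity DCX cites for its facility-scale unit

    racks = hall_it_load_mw * 1000 / rack_density_kw
    inrow_units = math.ceil(hall_it_load_mw / inrow_cdu_mw) + 1         # N+1
    facility_units = math.ceil(hall_it_load_mw / facility_unit_mw) + 1  # N+1

    print(f"~{racks:.0f} racks at {rack_density_kw:.0f} kW each")
    print(f"in-row CDUs needed (N+1): {inrow_units}")
    print(f"facility-scale units needed (N+1): {facility_units}")

Even under generous assumptions, the distributed approach multiplies the number of pumped, monitored, and maintained devices sitting in or near white space.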

DCX’s answer is to centralize cooling distribution at facility scale.

From Distributed CDUs to Facility-Scale FDUs

DCX’s Facility Distribution Unit concept replaces numerous row-level CDUs with a single centralized liquid cooling hub located outside the white space.

Instead of dozens of cooling loops with local pumping and heat exchangers, the FDU architecture supports an entire data hall through centralized supply and return loops.

This shift delivers several operational advantages:

• Reduced hardware clutter in white space
• Fewer components near mission-critical IT equipment
• Simplified maintenance and monitoring
• Standardized loop topology across deployments
• Greater flexibility in rack placement independent of cooling proximity

With high pump head capability and loop reach exceeding 50 meters, racks can now be positioned based on power availability and layout constraints rather than cooling limitations.
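
What “high pump head” has to cover over that distance can be gauged with a minimal hydraulic sketch. The friction gradient and component pressure drops below are assumptions chosen purely for illustration; none of them are DCX figures.

    # Rough pump-head estimate for a long centralized cooling loop.
    # All pressure-drop figures are illustrative assumptions.

    RHO = 997.0                  # water density, kg/m^3
    G = 9.81                     # gravitational acceleration, m/s^2

    loop_reach_m = 50.0          # one-way distance from the FDU to the farthest rack
    friction_pa_per_m = 300.0    # assumed friction gradient in the piping
    rack_drop_pa = 100_000.0     # assumed drop across manifolds and cold plates
    unit_internal_pa = 50_000.0  # assumed drop inside the distribution unit itself

    piping_pa = friction_pa_per_m * 2 * loop_reach_m   # supply run plus return run
    total_pa = piping_pa + rack_drop_pa + unit_internal_pa
    head_m = total_pa / (RHO * G)                       # convert pascals to meters of head

    print(f"piping loss: {piping_pa / 1000:.0f} kPa")
    print(f"total loop drop: {total_pa / 1000:.0f} kPa (~{head_m:.0f} m of head)")

In a centralized design, that pumping burden sits in one machine rather than being split across many small in-row units.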

The concept has moved beyond theory.

In January, DCX confirmed that the first FDUs are already operational in AI data centers in both Europe and the United States, supporting full data halls rather than isolated rack clusters.

Enter the 8MW CDU Era

The next evolution arrived just days later.

On Jan. 20, DCX announced its second-generation facility-scale unit, the FDU V2AT2, pushing single-unit capacity well beyond what CDU platforms have offered to date.

The system delivers up to 8.15 megawatts of heat transfer capacity with record flow rates designed to support 45°C warm-water cooling, aligning directly with NVIDIA’s roadmap for rack-scale AI systems, including Vera Rubin-class deployments.
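
A simple heat balance shows why megawatt-class capacity implies record flow rates. DCX does not state the loop temperature rise in this announcement, so the ΔT in the sketch below is an assumption for illustration.

    # Coolant flow needed to move 8.15 MW, from Q = m_dot * cp * dT.
    # The loop temperature rise (dT) is an assumed value.

    Q_W = 8.15e6      # heat transfer capacity, watts
    CP = 4186.0       # specific heat of water, J/(kg*K)
    DT_K = 10.0       # assumed supply-to-return temperature rise, kelvin
    RHO = 990.0       # approximate density of water near 45 C, kg/m^3

    m_dot_kg_s = Q_W / (CP * DT_K)        # mass flow
    vol_m3_h = m_dot_kg_s / RHO * 3600.0  # volumetric flow

    print(f"mass flow: {m_dot_kg_s:.0f} kg/s  (~{vol_m3_h:.0f} m^3/h)")

Halve the assumed ΔT and the required flow doubles, which is why warm-water, high-capacity designs lean heavily on pump and piping headroom.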

That 45°C temperature target is significant.

Warm-water cooling at this level allows many facilities to eliminate traditional chillers for heat rejection, depending on climate and deployment design. Instead of relying on compressor-driven refrigeration, operators can shift toward dry coolers or other simplified heat rejection strategies.

The result:

• Reduced mechanical complexity
• Lower energy consumption
• Improved efficiency at scale
• New opportunities for heat reuse
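
The “depending on climate” caveat can be made concrete with a simple temperature stack-up from outdoor air to the IT loop. The approach temperatures in the sketch below are assumptions for illustration: the CDU approach happens to match the 2°C figure DCX cites for the unit (see below), while the dry-cooler approach is purely assumed.

    # Can a site reject heat without chillers at a 45 C IT supply setpoint?
    # Both approach temperatures are illustrative assumptions.

    it_supply_c = 45.0           # warm-water supply temperature to the racks
    cdu_approach_c = 2.0         # assumed liquid-to-liquid approach in the distribution unit
    dry_cooler_approach_c = 8.0  # assumed dry-cooler approach to ambient dry-bulb

    max_facility_supply_c = it_supply_c - cdu_approach_c
    max_ambient_c = max_facility_supply_c - dry_cooler_approach_c

    print(f"facility water must arrive at <= {max_facility_supply_c:.0f} C")
    print(f"chillerless operation holds while ambient dry-bulb <= ~{max_ambient_c:.0f} C")

Sites whose peak dry-bulb temperatures regularly exceed that threshold would still need adiabatic assist or supplemental cooling, which is the climate dependence noted above.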

According to DCX CTO Maciek Szadkowski, the goal is to avoid obsolescence in a single hardware generation:

“As the datacenter industry transitions to AI factories, operators need cooling systems that won’t be obsolete in one platform cycle. The FDU V2AT2 replaces multiple legacy CDUs and enables 45°C supply water operation while simplifying cooling topology and significantly reducing both CAPEX and OPEX.”

The unit incorporates a high-capacity heat exchanger with a 2°C approach temperature, N+1 redundant pump configuration, integrated water quality control, and diagnostics systems designed for predictive maintenance.

In short, this is infrastructure built not for incremental density growth, but for hyperscale AI facilities where megawatts of cooling must scale as predictably as compute capacity.

Liquid Cooling Becomes System Architecture

The broader industry implication is clear: cooling is no longer an auxiliary mechanical function.

It is becoming system architecture.

DCX’s broader 2025 performance metrics underscore the speed of this transition. The company reported 600% revenue growth, expanded its workforce fourfold, and shipped or secured contracts covering more than 500 MW of liquid cooling capacity.

Its deployments now support multiple hyperscale projects across Europe and North America, including facilities in the 300 MW class.

These numbers reflect a broader reality: liquid cooling is moving from niche adoption into mainstream infrastructure strategy.

As Szadkowski put it:

“At this scale, liquid cooling is no longer just about removing heat, it’s about system architecture now.”

The AI Factory Demands Facility-Level Cooling

NVIDIA’s platform evolution, culminating in the company's Rubin-class rack systems, reframes the rack as a coherent compute unit rather than a collection of servers.

That shift pushes infrastructure decisions upstream.

Power distribution, cooling loops, and facility topology must now support rack-scale machines operating as unified systems.

Facility-scale CDUs and warm-water cooling strategies directly align with this direction, reducing mechanical complexity while enabling faster deployment cycles.

For operators racing to bring AI capacity online, the combination of simplified plant design, scalable architecture, and reduced reliance on chillers could materially accelerate build schedules.

Cooling Moves to the Center of Infrastructure Strategy

The data center industry is still early in the AI infrastructure cycle. But one lesson is already apparent.

Announcements and capital commitments matter less than execution, and execution increasingly depends on infrastructure readiness.

Power availability remains the gating factor for many markets. Cooling infrastructure is quickly becoming the next constraint.

Vendors able to simplify and scale liquid cooling architectures are positioning themselves at the core of next-generation deployments.

DCX’s facility-scale approach suggests the future of AI data center cooling may look less like incremental upgrades to legacy designs and more like a clean-sheet rethink of how heat is managed at megawatt scale.

In other words, liquid cooling is no longer just supporting AI infrastructure.

It is becoming part of the foundation that makes it possible.

 

At Data Center Frontier, we talk the industry talk and walk the industry walk. In that spirit, DCF Staff members may occasionally use AI tools to assist with content. Elements of this article were created with help from OpenAI's GPT5.

 

About the Author

Matt Vincent

A B2B technology journalist and editor with more than two decades of experience, Matt Vincent is Editor in Chief of Data Center Frontier.
