Building the Thermal Backbone of AI: Tracking the Latest Data Center Liquid Cooling Deals and Deployments
Over the past three years, we've tracked how liquid cooling has moved from the margins of the white space to the critical path of AI data center design. And in just the past quarter, the deals and product launches crossing Data Center Frontier’s radar have all pointed in the same direction: liquid is becoming the organizing principle for how operators think about power, density, and risk.
From OEMs and HVAC majors buying their way deeper into liquid-to-chip, to capital flowing into microfluidic cold plates and high-efficiency chillers, to immersion systems pushing out to the edge and even into battery storage, the thermal stack around AI is being rebuilt in real time.
What’s striking in this latest wave of announcements is not just the technology, but the scale and specificity: 300MW two-phase campuses, 10MW liquid-to-chip AI halls at cable landing stations, stainless-steel chillers designed to eliminate in-row CDUs. Here’s a look at the most consequential recent moves shaping the next phase of data center liquid cooling.
Trane Technologies to Acquire Stellar Energy Digital: Buying a Liquid-to-Chip Platform
Trane Technologies is making a decisive move up the liquid cooling stack with its just-announced agreement to acquire the Stellar Energy Digital business, a Jacksonville-based specialist in turnkey liquid-to-chip cooling plants and coolant distribution units.
Stellar Energy’s Digital business — roughly 700 employees and two Jacksonville assembly operations — designs and builds modular cooling plants, central utility plants and CDUs for liquid-cooled data centers and other complex enterprise environments. Trane is clearly buying more than incremental capacity; it’s acquiring a platform that’s already oriented around prefab, AI-era deployments.
Karin De Bondt, Trane’s Chief Strategy Officer, framed the deal squarely around the shift DCF has been tracking all year: data center customers want repeatable, modular systems they can deploy at speed.
“The data center ecosystem is growing rapidly and evolving toward more agile, sustainable solutions, which is where Stellar Energy excels with leading co-engineered, modular solutions and a proven business model,” De Bondt said.
Post-close, Stellar Energy Digital will sit inside Trane’s Commercial HVAC Americas unit but retain its own brand and OEM-agnostic, direct-to-customer sales model — a notable detail for hyperscalers and colos wary of being tied to a single IT vendor. Trane is signaling that it intends to scale Stellar’s modular design, engineering and assembly capabilities not just in data centers, but across other commercial verticals, using its core “bolt-on” playbook to fold a specialist into a global manufacturing and service network.
For operators, this is another data point in a wider trend: the traditional HVAC majors are no longer content to sit at the plant room boundary. They’re buying their way closer to the rack and the chip.
LG and Flex Align on Gigawatt-Scale Cooling Platforms
If Trane’s move is about owning a liquid-to-chip platform, LG Electronics and Flex are betting on co-designed, gigawatt-scale modularity.
Under a new MOU announced last month, LG and Flex will jointly develop integrated, modular cooling solutions to address “escalating thermal management challenges of AI-era data centers.” LG brings high-performance air and liquid cooling modules — CRAC, CRAH, chillers, CDUs and a full monitoring and thermal management suite — while Flex contributes its liquid cooling portfolio, proprietary power products and IT infrastructure platform.
The goal is to give operators Lego-like flexibility to customize and scale. Co-developed solutions will be folded into the Flex AI infrastructure platform, which Flex describes as “the first globally manufactured data center platform integrating power, cooling, compute and services into modular designs.”
Michael Hartung, president and CCO at Flex, put it bluntly:
“Together, we’ll deliver prefabricated, scalable data center infrastructure solutions that incorporate advanced liquid and air cooling technologies to increase efficiency, simplify deployment and speed time to revenue for our customers.”
LG, for its part, is already mapping those ambitions onto real projects. The company has:
- Secured an AI data center project in Jakarta, one of the largest of its kind in Indonesia.
- Partnered with DATAVOLT on projects in the Middle East and Africa.
- Won a contract to supply cooling solutions for a hyperscale data center under construction in North America.
Behind the scenes, LG is also working on next-generation CDUs and cold plate solutions scheduled for completion later this year, with commercialization to follow, and has already carried out a proof-of-concept with LG Uplus for advanced liquid cooling in a telco environment.
For DCF readers, this is another sign that AI-scale liquid cooling is becoming a multi-continent, multi-partner exercise — not just a one-off engineering challenge per site.
DarkNX and Accelsius: 300MW Two-Phase Campus as a Proof Point
On the bleeding edge of liquid cooling architecture, DarkNX and Accelsius are turning what has often been talked about in concept decks into a 300MW reality.
DarkNX has entered into an agreement to deploy Accelsius’ NeuCool® two-phase, direct-to-chip liquid cooling across a new 300MW AI data center campus in Ontario, Canada. The project is expected to be the largest two-phase deployment to date, marking a major validation of next-generation chip-level cooling at campus scale.
The first phase includes two facilities at 65MW each, with deployments slated for 2026 and 2027. The thermal strategy combines:
- NeuCool two-phase, direct-to-chip cooling with non-conductive refrigerants at the server level.
- High-efficiency chiller systems from Johnson Controls for facility-level cooling.
By pairing the two, DarkNX is targeting significantly warmer facility water temperatures, more free cooling hours, and a step-change in total cost of ownership.
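Why warmer water matters is easy to see with a quick back-of-the-envelope model. The sketch below is purely illustrative — the temperature bins, setpoints, and approach value are hypothetical assumptions, not DarkNX, Accelsius, or Johnson Controls figures — but it shows how raising the facility water supply temperature expands the number of hours a dry cooler or economizer can carry the load without compressors.

```python
# Illustrative only: estimates annual "free cooling" hours from hypothetical
# ambient temperature bins. Setpoints and approaches are assumptions, not
# vendor or project figures.

# (ambient dry-bulb °C, hours per year in that bin) -- made-up climate bins
ambient_bins = [(-10, 900), (0, 1500), (10, 2400), (20, 2300), (30, 1400), (35, 260)]

DRY_COOLER_APPROACH_C = 6  # assumed ambient-to-water approach of the heat rejection plant

def free_cooling_hours(facility_water_supply_c: float) -> int:
    """Hours where the plant can hit the water setpoint without running compressors."""
    max_ambient = facility_water_supply_c - DRY_COOLER_APPROACH_C
    return sum(hours for ambient, hours in ambient_bins if ambient <= max_ambient)

for setpoint in (17, 27, 36):  # e.g., legacy chilled water vs. warm-water liquid loops
    print(f"{setpoint}°C supply -> ~{free_cooling_hours(setpoint)} economizer hours/year")
```

The exact numbers depend entirely on climate and plant design, but the shape of that curve is why two-phase and warm-water loops keep showing up in AI campus TCO models.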
“Our technology agnostic and efficiency-first approach is why Accelsius’ two-phase, direct-to-chip technology stood out,” said DarkNX CEO Isaac Islam, calling the combined design “the future of AI data center design.”
Accelsius CEO Josh Claman framed the 300MW commitment as a turning point:
“DarkNX’s 300MW commitment signals a clear shift toward large-scale adoption of two-phase, direct-to-chip cooling.”
If the Ontario campus performs as modeled, expect NeuCool-style deployments to show up in more AI factory RFPs — especially in regions where power and water constraints reward warmer water and more aggressive heat recovery.
CDUs Become the Liquid Cooling Control Plane
As AI rack densities spike and liquid snakes into every row, coolant distribution units are emerging as the control plane for how operators orchestrate heat at scale. A series of announcements across OEMs and service providers reinforces that CDUs are no longer commodity plumbing; they’re intelligent infrastructure in their own right.
nVent: Project Deschutes and a CDU-Centric Reference Architecture
At SC25, nVent rolled out a new modular liquid cooling portfolio expressly aligned to “chip manufacturers’ current and future cooling requirements.” The offering includes:
- Row and rack-based CDUs (AC and DC rack CDUs).
- Technology cooling system manifolds.
- Updated racks based on leading reference designs.
- A new services program wrapped around the hardware.
These CDUs and PDUs share a common control platform to “enhance reliability and improve the user experience” for operators. That shared control layer is where a lot of the value will accrue: it’s the point where telemetry, alarms, and optimization logic converge.
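To make the control-plane framing concrete, here is a minimal sketch of the kind of telemetry a unified CDU/PDU control layer typically aggregates. The field names and thresholds are hypothetical illustrations, not nVent’s actual data model or API.

```python
# Illustrative sketch of a CDU telemetry record and a simple alarm check.
# Field names and limits are hypothetical, not any vendor's actual schema.
from dataclasses import dataclass

@dataclass
class CDUTelemetry:
    supply_temp_c: float             # coolant supplied to the racks
    return_temp_c: float             # coolant returning from the racks
    flow_rate_lpm: float             # loop flow rate, liters per minute
    pump_speed_pct: float            # current pump command
    differential_pressure_kpa: float # pressure drop across the loop

def check_alarms(t: CDUTelemetry) -> list[str]:
    """Return alarm strings for out-of-band conditions (illustrative limits only)."""
    alarms = []
    if t.supply_temp_c > 45:
        alarms.append("supply temperature high")
    if t.flow_rate_lpm < 100:
        alarms.append("low flow -- check pumps and valves")
    if t.differential_pressure_kpa > 250:
        alarms.append("high differential pressure -- possible blockage or fouling")
    return alarms

sample = CDUTelemetry(supply_temp_c=32.0, return_temp_c=42.5, flow_rate_lpm=450.0,
                      pump_speed_pct=62.0, differential_pressure_kpa=180.0)
print(check_alarms(sample) or "loop nominal")
```

The point isn’t the specific fields — it’s that once CDUs, manifolds, and PDUs report into one platform, this is where optimization logic (pump speed versus delta-T, leak response, capacity planning) naturally lives.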
Importantly, nVent is placing its CDU design inside the open hardware ecosystem. The company is participating in Project Deschutes and will exhibit a CDU based on Google’s open specification — Project Deschutes 5.0 — at SC25. Deschutes is intended to accelerate adoption of liquid cooling through standardized CDU designs under the Open Compute Project.
nVent is also collaborating with Siemens on a joint liquid cooling and power reference architecture for hyperscale AI workloads, framing their work as a way to prepare “cooling and power infrastructure for global deployment and operational resilience.”
Carrier: QuantumLeap CDUs From 1.3 to 5MW
Carrier has expanded its QuantumLeap™ data center portfolio with a new family of CDUs rated from 1.3MW to 5MW, explicitly targeting “large-scale liquid cooling deployments” in hyperscale and colo environments.
Key design points for the Carrier CDU:
- Fewer mechanical and electrical components per unit of cooling to improve uptime and simplify maintenance.
- Low pressure drops and optimized hydraulic design to improve PUE and minimize energy overhead.
- Modular heat exchangers capable of approach temperatures as low as 2°C (3.6°F), roughly half the 4°C approaches that have been common in the industry.
Carrier estimates that, with high-efficiency heat exchanger options, operators can see up to 15% chiller energy savings — effectively handing back power budget to the IT load. The units can be deployed in-row or in mechanical galleries, with three-side access and shallow cabinets designed for tight corridors.
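The link between approach temperature and chiller energy is worth spelling out. The sketch below uses a commonly cited rule of thumb — roughly 2–3% chiller compressor savings per 1°C of warmer facility water supply — as an assumption; it is not Carrier’s published methodology, and the 15% figure likely bundles additional heat-exchanger and hydraulic gains.

```python
# Illustrative estimate: a tighter CDU approach lets the facility water run warmer
# for the same rack-side coolant temperature. The savings coefficient is an assumed
# industry rule of thumb, not a vendor-published figure.

OLD_APPROACH_C = 4.0    # approach common in legacy CDUs
NEW_APPROACH_C = 2.0    # tighter approach from higher-performance heat exchangers
SAVINGS_PER_C = 0.025   # assumed ~2.5% chiller energy saved per °C of warmer supply water

water_temp_lift_c = OLD_APPROACH_C - NEW_APPROACH_C
estimated_savings = water_temp_lift_c * SAVINGS_PER_C
print(f"Facility water can run ~{water_temp_lift_c:.1f}°C warmer -> "
      f"roughly {estimated_savings:.0%} chiller energy savings from temperature alone")
```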
The CDU slots into Carrier’s chip-to-chiller stack alongside:
- Automated Logic building controls.
- Nlyte® DCIM for hybrid infrastructure management.
- Carrier air handlers and chillers, including magnetic-bearing screw and centrifugal units.
As Christian Senu, Executive Director for Data Centers, put it, the goal is “end-to-end thermal management from chip to chiller through intelligent cooling, digital controls and predictive monitoring and service.”
Ecolab: Cooling-as-a-Service and the Smart CDU
On the chemistry and analytics side, Ecolab is turning the CDU into a sensor-rich node in a “Cooling as a Service” (CaaS) model.
The company’s new integrated cooling program pulls together:
- Ecolab’s 3D TRASAR™ water-management technology for direct-to-chip liquid cooling.
- A smart CDU that embeds that monitoring at the loop level.
- Connected coolant and software that span from the facility envelope down to the high-performance computing servers.
Josh Magnuson, EVP & GM of Global Water Solutions, described CaaS as “a dynamic hub that integrates cooling and power infrastructure,” designed to give operators “the insights to achieve best-in-class performance” while conserving water and power.
For hyperscalers wrestling with water-stressed sites and ESG scorecards, that integration of chemistry, telemetry, and CDU hardware is likely to be as important as the underlying metal.
NJFX and Bala: Liquid-to-the-Chip at a Cable Landing Station
In New Jersey, NJFX and Bala Consulting Engineers are applying liquid-cooling design principles to one of the most latency-sensitive, connectivity-dense environments in the industry: a cable landing station campus.
NJFX has completed a Basis of Design for “Project Cool Water,” a 10MW high-density AI data hall that will provide 8MW of usable IT load at an expected PUE of 1.25. The Tier III hall will be liquid-cooled and built around an N+1 power distribution system with UPS protection across both electrical and mechanical loads.
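Those figures are internally consistent — assuming the 10MW represents total hall power draw, the design-point math is simply:

$$
\mathrm{PUE} = \frac{\text{total facility power}}{\text{IT power}} = \frac{10\ \mathrm{MW}}{8\ \mathrm{MW}} = 1.25
$$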
The cooling architecture integrates:
- AFC chillers.
- CDUs as the bridge between facility water and liquid-to-chip loops.
- Hot-aisle containment.
- A fan-wall configuration tuned for GPU-based AI systems.
NJFX calls the hall “the first purpose-built cable landing station campus in North America to support ‘liquid-to-the-chip’ AI-ready infrastructure.” Given the campus’s four subsea cables, more than 35 network operators on-site, and 7ms proximity to over 100 million U.S. residents, this is effectively an AI inference and interconnection hub bolted directly to the trans-Atlantic and South American cable fabric.
The project also carries a regional grid story: NJFX has secured additional power from a utility substation located on campus, with a new transformer sized not just for the 10MW hall but to “enhance electrical redundancy for Monmouth County as a whole.”
Airedale by Modine: Stainless TurboChill DCS and the “No-CDU” Chiller
At the chiller interface, Airedale by Modine has introduced a stainless steel extension of its TurboChill DCS range, explicitly engineered for direct liquid cooling systems.
The stainless design provides:
- A robust interface to the facility water loop, improving circuit cleanliness and reducing contamination build-up on cold plates.
- Corrosion-resistant, high-pressure operation to maintain structural stability under extreme thermal loads.
- More precise filtration and fluid management.
Modine notes that, in optimized liquid cooling architectures, TurboChill DCS Stainless can “facilitate the complete elimination of in-row CDUs,” reducing system complexity and freeing floor space for IT racks. The idea: push more functionality into a high-spec chiller so that in-row CDUs can be consolidated or removed where the risk profile allows.
With Seismic Design Category D support, operation up to 55°C (131°F) ambient, and low-GWP R1234ze refrigerant (GWP 1.37), the unit is clearly aimed at hot-climate AI campuses that need both resiliency and an emissions story.
Capital Flows Into the Thermal Layer
Cooling innovation takes capital — and two recent financings highlight how investors and operators are now treating thermal systems as strategic, not peripheral.
Applied Digital Leads $25M Round in Corintis Microfluidic Cooling
Applied Digital, the rapidly scaling AI factory operator known for its data center campus in remote Ellendale, North Dakota, this month announced that it led a $25 million funding round for Corintis, a Swiss-based company specializing in microfluidic direct-to-chip cooling.
Corintis’ technology uses generatively designed microfluidic channels directly at the chip, either as a drop-in replacement for standard cold plates or integrated into the GPU package itself. The company says its approach can:
- Deliver up to 3x lower chip temperatures versus standard cold plates (validated in prior work with Microsoft).
- Support much higher power densities at the same or lower thermal risk.
- Improve energy efficiency and reduce environmental impact by enabling higher coolant temperatures and lower fresh-water consumption.
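A simplified lumped model helps explain why lower thermal resistance unlocks warmer coolant (the numbers here are illustrative assumptions, not Corintis or Microsoft data). Junction temperature is roughly the coolant temperature plus chip power times the thermal resistance of the cooling path:

$$
T_{\mathrm{junction}} \approx T_{\mathrm{coolant}} + P_{\mathrm{chip}} \cdot R_{\theta}
$$

For a hypothetical 1kW device with a 90°C junction limit, a cold plate at $R_{\theta} = 0.03\ \mathrm{K/W}$ forces the coolant down to about 60°C; halve the resistance to 0.015 K/W and the same chip tolerates roughly 75°C coolant — which is what opens the door to dry coolers, higher return temperatures, and heat reuse instead of evaporative water consumption.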
Remco van Erp, Corintis’ CEO, emphasized the “silicon to infrastructure” positioning:
“Optimization of AI infrastructure requires a holistic approach from silicon to infrastructure. Our technology is designed to meet the thermal challenges of today’s most powerful chips and enable the next generation of high-density, energy-efficient computing in a sustainable way.”
For Applied Digital, which has already inked a $5B partnership with Macquarie Asset Management and long-term leases with an investment-grade hyperscaler and CoreWeave across its Polaris Forge campuses, backing Corintis is a way to shape the cooling roadmap of the chips it will deploy tomorrow.
CEO Wes Cummins described the investment as ensuring Applied stays “at the forefront of data center innovation, from power and cooling to other critical systems that enhance performance, scalability, and efficiency.”
The new funds take Corintis’ total capital raised to $58M, earmarked for expanding its U.S. presence (including a new Bellevue, Washington office), ramping microfluidic manufacturing at scale, and accelerating rollouts with “multiple new tech giants” signed since its Series A.
XNRGY: Growth Equity to Scale “Thermal Backbone of the AI Era”
On the plant side, XNRGY Climate Systems has secured growth equity financing from Decarbonization Partners (BlackRock + Temasek), Climate Investment, and Activate Capital to accelerate its expansion in sustainable liquid and air-cooling technologies.
XNRGY is already a major North American manufacturer of high-efficiency cooling systems with integrated controls and AI capabilities, targeting hyperscale data infrastructure in high-ambient-temperature markets. The new funding will:
- Expand its U.S. manufacturing footprint.
- Accelerate deployment of next-generation cooling systems into data centers and other mission-critical facilities.
- Build on earlier 2023 investments from Idealist Capital and MKB, and a prior Activate Capital stake.
Patrick Yip of Climate Investment called XNRGY “a key thermal backbone of the AI era,” arguing that as data infrastructure scales, “energy-efficient, high-performance cooling becomes mission-critical.”
Recent milestones include:
- Construction of Mesa 2, a 330,000 square-foot facility dedicated to next-generation air-cooled chillers integrating XNRGY controls and AI with Copeland technology.
- Four major expansion initiatives in three years, bringing the combined operational footprint in Mesa and Montreal to nearly 1,000,000 square feet.
Decarbonization Partners’ Meghan Sharp highlighted the alignment with “next generation energy investments that accelerate the digital transformation,” while Activate Capital’s Anup Jacob pointed to chillers as “the central, long-lead infrastructure component that defines [data center] thermal performance.”
In short: capital is now flowing not only to GPUs and substations, but to the chillers, CDUs, and microfluidic systems that will determine AI factories’ real efficiency ceiling.
Immersion and Liquid Systems Push Outward: From Halls to Edge and Storage
While direct-to-chip architectures dominate the AI factory headlines, immersion and adjacent liquid systems are quietly broadening their footprint — into EMEA AI and HPC halls, edge closets, containerized pods, and even battery storage.
Vertiv CoolCenter Immersion: 25–240kW Systems for AI and HPC (EMEA)
Vertiv has introduced the CoolCenter Immersion system in EMEA, targeting AI and HPC environments that have “power densities and thermal loads [that] exceed the limits of traditional air-cooling methods.”
The system supports:
- 25kW to 240kW per immersion system.
- Multiple configurations: self-contained and multi-tank.
- Integrated CDU, temperature sensors, variable-speed pumps, piping, dual power supplies, and redundant pumps.
With integrated monitoring sensors, a 9-inch touchscreen, and BMS connectivity, Vertiv is clearly positioning CoolCenter as a turnkey system, not a DIY tank. The design also anticipates heat reuse, aligning with broader European policy pressure around energy efficiency and circularity.
Layered on top is Vertiv’s Liquid Cooling Services practice, which spans rear-door heat exchangers, direct-to-chip, and immersion — a recognition that most operators will end up with hybrid architectures and need a single integration and maintenance partner.
GRC ICEraQ Nano + LG/SK Enmove Partnership: Immersion Cooling Scales Both Outward and Upward
At the compact end of the immersion market, GRC (Green Revolution Cooling) has introduced the ICEraQ™ Nano, a 10U plug-and-play liquid-to-air immersion system designed for small data rooms, telco closets, campus IT spaces and remote compute outposts where chilled-water infrastructure doesn’t exist.
The Nano delivers up to 13kW of heat removal with no external plumbing, arrives pre-filled with ElectroSafe® dielectric fluid, and uses an integrated liquid-to-air heat exchanger to reject heat without tying back to a plant chiller. It incorporates an automated fluid management system, an integrated server lift with a top-mounted service tray for hot-swap maintenance, and a 7-inch touchscreen for real-time monitoring — all aimed at reducing operational friction in sites too constrained for traditional mechanical upgrades.
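Rejecting 13kW to room air with no chilled-water tie-in is exactly the constraint that matters in an edge closet. As a rough illustration (the 15°C air-side temperature rise is an assumption, not a GRC specification), the required airflow is on the order of:

$$
\dot{V} = \frac{Q}{\rho\, c_p\, \Delta T} = \frac{13{,}000\ \mathrm{W}}{1.2\ \mathrm{kg/m^3} \times 1{,}005\ \mathrm{J/(kg\cdot K)} \times 15\ \mathrm{K}} \approx 0.72\ \mathrm{m^3/s}\ (\approx 1{,}500\ \mathrm{CFM})
$$

That’s comfortably within reach of an integrated fan deck, but it’s a reminder that room ventilation and return-air paths still belong in the edge site survey.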
As CEO Peter Poulin noted:
“The ICEraQ Nano delivers immersion cooling that addresses high density challenges customers are now facing — by eliminating the need for chilled water and simplifying overall deployment.”
For operators battling high thermals, noise and power draw in edge closets, the 15-year lifecycle and facility-agnostic deployment model make the Nano a compelling alternative to forcing more air into the same footprint.
But GRC’s roadmap is not confined to edge-scale deployments. In October, LG Electronics signed a memorandum of understanding with SK Enmove and GRC to co-develop next-generation immersion cooling solutions for AI data centers. Under the collaboration, LG will lead system-level integration — including CDUs, facility water units and chiller tie-ins — while SK Enmove contributes next-generation immersion cooling fluids and GRC provides immersion tank architecture, design, and deployment expertise.
The three companies will pursue PoC demonstrations, joint market engagement and new commercial models designed to accelerate adoption of immersion cooling in hyperscale AI environments. The agreement reflects a shared view that submerging servers in non-conductive dielectric fluid can substantially improve PUE, stabilize ultra-dense GPU clusters, enhance heat transfer efficiency and reduce fire-risk exposure, positioning immersion as both a performance and sustainability technology.
LG has already begun expanding its cooling PoC program at its Pyeongtaek test-bed facility, with plans to integrate immersion solutions and develop an LG-branded DCIM platform capable of monitoring both facility-level systems and server thermal behavior in real time. SK Enmove characterized the partnership as a competitiveness multiplier for the global immersion market, while GRC emphasized speed-to-adoption and operational stability as core customer benefits.
LG ES president James Lee framed the initiative as a key pillar in LG’s broader AI-infrastructure strategy: accelerating immersion deployment, scaling efficiency, and delivering cooling systems that meet the power density and environmental requirements of next-generation data centers.
DUG Nomad 40: 1MW+ of Immersion Cooling in a 40-Foot Container
Australian firm DUG is scaling up its containerized immersion concept with the Nomad 40, a 40-foot pod that extends its existing Nomad line.
Highlights:
- 12 DUG Cool tanks, each supporting up to 26RU of immersed hardware at 84kW per tank.
- Overall PUE reportedly around 1.05.
- Marketed as “Starlink compatible” for connectivity at remote sites.
DUG has been building immersion-cooled HPC systems for years, with deployments in Houston, Kuala Lumpur, and Perth and a customer base spanning universities, research agencies, and energy firms. The Nomad 40 extends its model into mobile, modular HPC for the edge and remote sites where data must remain local but facilities teams and chilled water are scarce.
The original Nomad 10, a 10-foot container, offers 80kW and can host more than 80 immersion-cooled Nvidia H200 GPUs. With the Nomad 40, DUG is effectively offering 1MW-class immersion capacity on a pad.
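The “1MW-class” claim squares with the tank math, and the quoted PUE caps the overhead (DUG’s figures; the overhead split is our back-of-the-envelope inference):

$$
12 \times 84\ \mathrm{kW} = 1{,}008\ \mathrm{kW}\ \text{of IT load}; \qquad \mathrm{PUE} \approx 1.05 \;\Rightarrow\; \text{overhead} \approx 0.05 \times 1{,}008\ \mathrm{kW} \approx 50\ \mathrm{kW}
$$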
Etica: Immersion Cooling for Battery Energy Storage Safety
Finally, immersion isn’t just about IT anymore. Etica Battery, Inc. is applying immersion cooling to one of the most acute risk areas in the energy transition: lithium battery thermal runaway.
Etica’s patented immersion cooling technology for battery energy storage systems (BESS), commercially deployed since Q4 2023, uses:
- A nonflammable, noncorrosive, nontoxic dielectric oil surrounding each cell module.
- Continuous temperature monitoring and adaptive cooling.
- A fail-safe design that isolates a failing cell, preventing propagation.
Extensive test data, the company says, shows that even in total system failure, the dielectric oil absorbs runaway cell heat and prevents further propagation — effectively eliminating fire risk from thermal runaway.
With ISO 9001-certified automated production lines spanning residential (11.7kWh) to industrial container (3.06MWh) products, Etica is positioned to scale the technology into utility-scale storage, industrial microgrids, and residential systems.
CEO Gavin Wang framed the stakes clearly:
“Our Immersion Cooling Technology isn’t just setting a new industry standard for safety — it’s revolutionizing how we approach energy storage.”
For data center operators co-siting BESS with AI campuses, an immersion-cooled, non-propagating battery architecture could materially change both risk profiles and insurance conversations.
The New Thermal Map of AI
Taken together, the moves recounted above point to a redrawing of the thermal map for AI infrastructure:
- HVAC majors and electronics giants are buying or partnering their way into liquid-to-chip and modular, prefab platforms.
- CDUs and chillers are becoming intelligent, highly engineered control and optimization layers, not just passive hardware.
- Capital is flowing into microfluidic cold plates and high-efficiency plant equipment as strategic assets, not back-of-house line items.
- Immersion is maturing into a portfolio: EMEA AI halls, edge closets, mobile HPC pods, and even BESS.
For DCF readers, the throughline is palpable: in the AI factory era, “cooling” is no longer a single line in the spec sheet. It’s a layered stack — from silicon channels to campus chillers and edge closets — where design decisions reverberate through power contracts, grid interconnects, ESG metrics and business models.
We’ll continue to track how these technologies — and the companies behind them — perform in the field. For now, the signal is unmistakable: the real frontier for AI data centers isn’t just how many GPUs you can buy; it’s how intelligently, safely, and sustainably you can move heat.
This recent informative video from technology pioneer Equinix delves into the mechanics of liquid cooling, showcasing its advantages over traditional air cooling methods, and explaining how it enables data centers to support advanced AI applications while optimizing energy use and maximizing server density.
At Data Center Frontier, we talk the industry talk and walk the industry walk. In that spirit, DCF Staff members may occasionally use AI tools to assist with content. Elements of this article were created with help from OpenAI's GPT5.


