Cooling, Compute, and Convergence: How Strategic Alliances Are Informing the AI Data Center Playbook
Key Highlights
- Emerging trends show a shift towards ecosystem alignment among manufacturers, hyperscalers, and infrastructure specialists to meet AI's thermodynamic and power demands.
- Johnson Controls' investment in Accelsius' two-phase liquid cooling technology aims to improve energy efficiency and increase compute density in high-demand AI data centers.
- Prefabricated EcoStruxure Pods from Schneider Electric and Compass Datacenters streamline white space deployment, reducing construction time and operational risks for AI-ready infrastructure.
- Strategic partnerships, such as Schneider Electric with AVAIO Digital and CoreWeave with OpenAI, exemplify the move toward integrated, scalable, and AI-optimized data center ecosystems.
- These innovations collectively enable shorter deployment timelines, higher rack densities, and smarter, predictive infrastructure management, shaping the future of AI infrastructure.
A new pattern is emerging across the digital infrastructure landscape, and it's one that seems to blur the traditional boundaries between power, cooling, and compute.
As AI demand accelerates, data center strategy is apparently evolving from modular construction and sustainability metrics toward something deeper, resembling a coordinated ecosystem alignment between manufacturers, hyperscalers, and infrastructure specialists, each reconfiguring around the physical and thermodynamic realities of the AI era.
The Infrastructure Alignment Era
In the past month alone, three announcements have underscored this shift. First, Johnson Controls, the century-old titan of building systems, has made a multi-million-dollar strategic investment in Accelsius, a leader in two-phase direct-to-chip (D2C) liquid cooling.
Meanwhile, Schneider Electric, long known for power systems and modular infrastructure, unveiled new partnerships with Compass Datacenters and AVAIO Digital to accelerate AI-ready deployments across North America.
And CoreWeave, the self-styled “AI Hyperscaler,” has now deepened its relationship with OpenAI to a combined value of $22.4 billion, while acquiring Monolith AI to extend its cloud platform into industrial innovation.
Viewed together, these moves seem to reveal a deeper story. The data center ecosystem is undergoing a rapid phase change of its own, morphing from a network of vendors into an integrated supply chain purpose-built for AI density, time-to-power speed, and thermal precision.
Johnson Controls Bets on Two-Phase Liquid Cooling
“Cooling innovation has become a front-line imperative.” — Austin Domenici, Johnson Controls
On October 6, Johnson Controls announced a strategic investment in Accelsius, an Austin-based pioneer of two-phase, direct-to-chip liquid cooling.
The move positions Johnson Controls squarely at the intersection of next-generation compute and thermodynamic efficiency, a domain increasingly dominated by AI and HPC workloads.
“With the sharp growth in AI, cooling innovation has become a front-line imperative to meet the increasing demands of high-density data centers,” said Austin Domenici, vice president and general manager of Johnson Controls Global Data Center Solutions. “Leveraging our leading capabilities, our mission is to drive the industry forward to unlock new levels of energy efficiency across the cooling chain.”
Accelsius’ two-phase cooling system, known commercially as NeuCool, uses a “phase change” from liquid to vapor to remove heat directly from chips.
According to Accelsius CEO Josh Claman, “Our two-phase, direct-to-chip (D2C) cooling solutions use non-conductive fluids in highly efficient loops to stay ahead of the demanding power-dense AI and HPC workloads. This technology enables 35% OpEx savings over single-phase direct-to-chip and 8–17% total cost of ownership savings.”
The Jacobs Reference Design: Quantifying the Cooling Revolution
The Accelsius–Jacobs reference design marks a watershed in how the data center industry measures and validates the promise of two-phase liquid cooling at scale. Announced in September, the study represents the first industry-accessible two-phase, direct-to-chip (D2C) cooling reference design — a deliberate move by Accelsius to accelerate credible, third-party validation of liquid cooling performance and cost models.
Developed with Jacobs, one of the world’s leading engineering and consulting firms, the concept study was modeled around a 10 MW data center deployed across multiple North American climate zones. Its purpose: to quantify the comparative efficiency, cost, and operational impact of two-phase D2C systems against traditional single-phase and air-cooling architectures.
The results were striking. Under the study’s assumptions, two-phase cooling achieved comparable CapEx to single-phase systems while delivering up to 35% lower annual OpEx and a 12% reduction in five-year total cost of ownership (TCO). More importantly for AI-intensive workloads, the two-phase architecture allowed operators to run roughly 5% more GPUs within the same power envelope — a margin that translates directly into additional compute capacity without additional power draw.
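Those two headline ratios, taken together, also reveal something the study doesn't state outright: how large a slice of total cost the cooling OpEx must be in the baseline. The sketch below back-solves that share; the 35% and 12% figures are from the study, and the equal-CapEx assumption mirrors its finding.

```python
# Back-solving the cost structure implied by the study's headline ratios.
# With CapEx held equal between architectures, the whole TCO reduction
# must come from OpEx, so: TCO_SAVING = OPEX_SAVING * opex_share.
OPEX_SAVING = 0.35   # annual OpEx reduction, two-phase vs. single-phase (study)
TCO_SAVING = 0.12    # five-year total-cost-of-ownership reduction (study)

implied_opex_share = TCO_SAVING / OPEX_SAVING
print(f"Implied OpEx share of 5-year TCO: {implied_opex_share:.0%}")
```

The result, roughly a third of five-year TCO, is consistent with cooling and energy being a dominant recurring cost in dense AI halls.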
“The rising power demands of data centers make efficient solutions imperative,” said Accelsius CEO Claman. “By releasing this reference design, we’re stepping up to lead the way—accelerating the adoption of cooling technologies that cut energy use while enabling higher compute density.”
The scale of potential savings is enormous. The study estimates that if all planned and under-construction data centers in the Austin–San Antonio corridor alone adopted two-phase D2C, operators could save more than $50 million annually, conserving enough energy to power roughly 330,000 homes each year. Extrapolated across North America’s 52.4 GW data center pipeline, the total opportunity approaches $10 billion in annual energy savings: a tangible decarbonization lever for an industry now consuming measurable percentages of regional grid output.
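For a sense of scale, the "330,000 homes" equivalence can be converted back into energy terms. The household figure is from the study; the per-home consumption benchmark below (~10,700 kWh/yr, in line with EIA averages for U.S. residences) is an assumption of this sketch, not a study input.

```python
# Converting the corridor's household-equivalent figure into energy terms.
# HOMES comes from the study; KWH_PER_HOME_PER_YEAR is an assumed
# EIA-style average for a U.S. residence.
HOMES = 330_000
KWH_PER_HOME_PER_YEAR = 10_700

twh_per_year = HOMES * KWH_PER_HOME_PER_YEAR / 1e9  # kWh -> TWh
print(f"~{twh_per_year:.1f} TWh of electricity conserved per year")
```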
From Concept to Deployment
Jacobs Vice President of Advanced Manufacturing Sam Larsen underscored the importance of data transparency and validation in driving industry adoption.
“As liquid cooling moves from concept to reality, our clients need credible data and reference concepts to inform their decisions,” Larsen said. “This study gave us the chance to explore two-phase cooling at scale and assess the potential efficiency improvements it can deliver.”
Beyond raw efficiency, the design directly challenges prevailing misconceptions about cost and complexity. The Accelsius MR250 system uses an integrated overhead manifold that ties seamlessly into existing facility water loops, eliminating the need for a separate Technology Cooling System (TCS) or secondary loop. That design choice alone simplifies deployment and reduces mechanical CapEx, allowing facilities already planning single-phase D2C to transition to two-phase systems without major redesigns or new infrastructure classes.
The study’s key technical takeaways are equally significant:
- Superior Thermal Performance: Two-phase systems can operate with facility water temperatures up to 8 °C higher than single-phase solutions, widening the envelope for chiller-less operation and enhancing free-cooling potential.
- Enhanced Infrastructure Efficiency: By requiring fewer fans per chiller (12 vs. 16 in the reference case), the two-phase system demonstrated over 35% system-level OpEx savings and a reduced heat-rejection footprint.
- Zero Active Water Consumption: Cooling loops consumed no active water for process cooling — only minimal maintenance for the facility loop — addressing one of the industry’s most visible environmental challenges.
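A quick sketch shows how those levers combine. Only the 12-vs-16 fan counts come from the reference case; the per-fan power draw and full-year runtime below are illustrative assumptions.

```python
# Illustrative fan-energy comparison for the reference chiller designs.
# Fan counts (12 vs. 16) are from the reference case; per-fan draw and
# runtime are assumptions of this sketch.
FAN_KW = 5.0            # assumed draw per chiller fan
HOURS_PER_YEAR = 8760
FANS_SINGLE_PHASE = 16
FANS_TWO_PHASE = 12

single_mwh = FANS_SINGLE_PHASE * FAN_KW * HOURS_PER_YEAR / 1000
two_mwh = FANS_TWO_PHASE * FAN_KW * HOURS_PER_YEAR / 1000
reduction = 1 - two_mwh / single_mwh
print(f"Fan energy: {single_mwh:.0f} vs. {two_mwh:.0f} MWh/yr ({reduction:.0%} less)")
```

Fan count alone yields about a 25% reduction; the balance of the >35% system-level OpEx saving comes from the 8 °C warmer facility water extending chiller-off, free-cooling hours.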
Strategic Transparency and Industry Impact
Unlike most proprietary analyses, Accelsius is making the key findings of the Jacobs study publicly available and providing full design data to qualified operators and engineers upon request. The company’s aim, according to Claman, is strategic transparency: to make credible, third-party data on two-phase D2C performance accessible across the ecosystem and help accelerate adoption where high-density, GPU-driven workloads are already forcing the limits of air-cooled design.
By combining Jacobs’ engineering rigor with Accelsius’ technology platform, the reference design offers what the industry has long needed: a common yardstick for evaluating next-generation cooling strategies against legacy methods, both financially and operationally.
As AI and HPC architectures push toward higher thermal design power (TDP) envelopes and rising chip-level heat flux, that benchmark could define how data center developers plan cooling capacity and power envelopes through the second half of the decade.
In the words of Claman, “With power-dense AI workloads, data centers are moving to liquid cooling. Two-phase systems like ours are not experimental anymore — they’re the next standard.”
From Chip to Chiller: JCI’s Expanding Platform
For its part, Johnson Controls’ investment in Accelsius builds on a string of thermal management advances, including its Silent-Aire Coolant Distribution Unit (CDU) platform, launched in September.
The CDUs, offering scalable cooling capacities from 500 kW to over 10 MW, are designed to bridge the transition from air to liquid cooling for next-generation AI infrastructure.
“The launch of this expanded series of CDU technology marks a pivotal step in our commitment to advance data center cooling, from chip to chiller,” said Domenici. “By collaborating with leading ecosystem players in the hyperscale, colocation and semiconductor industry, we’ve engineered an innovative and scalable platform that meets the demands of next-generation AI training and inference hardware.”
Johnson Controls’ combined Silent-Aire, York, and M&M Carnot portfolios now serve global data centers from 1.8 million square feet of manufacturing capacity across North America, Europe, and Asia Pacific. With over 40,000 field and service technicians, the company is aligning its capabilities into a unified AI infrastructure supply chain, which the company claims is capable of delivering more than 50% reductions in non-IT energy consumption across major U.S. data center hubs.
In September, Johnson Controls also appointed Todd Grabowski as president of its Americas segment to advance its growth strategy in smart infrastructure. A veteran with over 30 years at the company, Grabowski previously led JCI's Global Data Centers & Applied Equipment unit — another signal that AI and high-density cooling now sit at the center of the firm’s operational map.
Schneider Electric and Compass Datacenters: Prefabrication Meets the AI Frontier
“We’re removing bottlenecks and setting a new benchmark for AI-ready data centers.” — Aamir Paul, Schneider Electric
In another sign of how collaboration is accelerating the next wave of AI infrastructure, Schneider Electric and Compass Datacenters have joined forces to redefine the data center “white space” build-out: the heart of where power, cooling, and compute converge.
On September 9, the two companies unveiled the Prefabricated Modular EcoStruxure™ Pod, a factory-built, fully integrated white space module designed to compress construction timelines, reduce CapEx, and simplify installation while meeting the specific demands of AI-ready infrastructure.
The traditional fit-out process for the IT floor (i.e. integrating power distribution, cooling systems, busways, cabling, and network components) has long been one of the slowest and most error-prone stages of data center construction. Schneider and Compass’ new approach tackles that head-on, shifting the entire workflow from fragmented on-site assembly to standardized off-site manufacturing.
“The traditional design and approach to building out power, cooling, and IT networking equipment has relied on multiple parties installing varied pieces of equipment,” the companies noted. “That process has been slow, inefficient, and prone to errors. Today’s growing demand for AI-ready infrastructure makes traditional build-outs ripe for improvement.”
Inside the EcoStruxure Pod: White Space as a Product
The EcoStruxure Pod packages every core element of a high-performance white space environment (power, cooling, and IT integration) into a single prefabricated, factory-tested unit. Built for flexibility, it supports hot aisle containment, InRow cooling, and Rear Door Heat Exchanger (RDHx) configurations, alongside high-power busways, complex network cabling, and a technical water loop for hybrid or full liquid-cooled deployments.
By manufacturing these pods off-site, Schneider Electric can deliver a complete, ready-to-install white space module that arrives move-in ready. Once delivered to a Compass Datacenters campus, the pod can be connected and commissioned with minimal field labor, dramatically reducing risk and variability.
“Over the past 14 years, we have made huge strides in designing and building data center facilities in ways that are faster, better and more sustainable,” said Chris Crosby, CEO of Compass Datacenters. “Principles like a standard kit of parts, off-site prefabrication and a tightly integrated supply chain have been gamechangers for facility construction. Now, we are applying those same principles to transform the white space fit-out process so that customers can plug in faster and more reliably than ever before.”
The EcoStruxure Pod extends the same modular philosophy Schneider and Compass pioneered for exterior shells and power blocks into the IT environment itself — effectively turning the traditionally bespoke white space into a scalable, repeatable product line.
Technical Design Advantages and Efficiency Gains
The modular pod design yields a number of significant technical and operational advantages:
- Factory Integration: Each EcoStruxure Pod is fully assembled, wired, and pressure-tested in a controlled environment before shipment, ensuring consistent quality and eliminating on-site integration errors.
- Infrastructure Simplification: The design removes the need for raised floors or ceiling grids, as the pod’s internal superstructure supports all network, power, and cooling infrastructure.
- Concurrent Construction: The pod system allows operators to build white space modules in parallel with the main facility shell, enabling “plug-in-place” installation once both are complete — a major accelerator for projects racing to bring AI capacity online.
- Reduced Manpower and Schedule Risk: Factory-built integration means less on-site labor, fewer trade interfaces, and a shorter punch list, minimizing post-installation remediation and delays.
- CapEx and Fit-Out Savings: Initial assessments by Schneider Electric and Compass indicate notable capital expenditure savings compared to traditional site-built fit-outs, driven by reduced manpower, improved installation accuracy, and elimination of redundant components.
Each pod is designed to integrate directly into existing infrastructure through modular busways, pre-engineered cable routing, and standardized manifold connections for liquid cooling loops. That flexibility allows the EcoStruxure Pod to function as a drop-in module for both new builds and phased expansions, supporting incremental scaling without disrupting live environments.
Built for AI-Ready Flexibility
At the design level, the EcoStruxure Pod reflects the broader shift toward AI-optimized data center architecture, where thermal, electrical, and mechanical systems must be co-engineered from the start. By combining Rear Door Heat Exchanger (RDHx) support with InRow cooling and technical water loops, the module is capable of serving a full spectrum of workloads, from legacy CPU racks to ultra-dense GPU pods exceeding 80–100 kW per rack.
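To make those density figures concrete, the sketch below maps rack power to accelerator count. The ~1 kW-per-GPU budget (device TDP plus a share of host, network, and fan power) and the rack sizes are illustrative assumptions, not Schneider or Compass specifications.

```python
# Rough mapping from rack power density to accelerator count.
# PER_GPU_KW is an assumed all-in budget per GPU slot (TDP plus a share
# of host, network, and fan power); rack sizes span legacy air-cooled
# racks up to dense liquid-cooled GPU pods.
PER_GPU_KW = 1.0

for rack_kw in (10, 40, 80, 100):
    gpus = int(rack_kw / PER_GPU_KW)
    print(f"{rack_kw:>3} kW rack -> ~{gpus} accelerators")
```

The jump from ~10 to ~100 accelerators per rack is precisely why thermal, electrical, and mechanical systems must now be co-engineered rather than fitted out sequentially.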
“With EcoStruxure Pod, we’re removing bottlenecks and setting a new benchmark for AI-ready data centers — delivering the speed, resilience, and sustainability our customers need to unlock value and lead in a rapidly evolving digital landscape,” said Aamir Paul, President of North America Operations for Schneider Electric.
The collaboration builds on the same engineering principles that Schneider and Compass have used to shorten data center delivery timelines across their portfolios — namely, standardized components, off-site prefabrication, and tight supply chain integration.
Together, these principles enable what Schneider calls “end-to-end space optimization,” a strategy that treats the data hall as an engineered system, not an interior finish.
From Assembly Line to Deployment
Manufacturing of the Prefabricated Modular EcoStruxure Pods has already begun, with the first units scheduled for deployment at a Compass Datacenters campus later this fall. Each pod is factory-certified for safety and performance, then transported as a complete module to site, where Compass integrates the unit into its broader facility power and cooling ecosystem.
The result is a system that merges construction efficiency, design repeatability, and sustainability, attributes now critical to both hyperscale and enterprise AI rollouts. In Compass and Schneider’s model, white space is no longer something just to be built. It’s also something to be delivered.
AVAIO Digital and Schneider Electric: Building AI-Optimized Campuses
A week earlier, on September 3, Schneider Electric announced another major partnership, this time with AVAIO Digital Partners, to deploy next-generation power and cooling systems at four new AI-ready campuses across the U.S.
The collaboration includes advance purchases of switchgear, PDUs, UPSs, and chillers, underscoring Schneider’s growing role as both technology provider and strategic enabler.
“AVAIO has invested significantly in developing our new data center campuses across the U.S. and is now ready to commence construction,” said Mark McComiskey, CEO of AVAIO Digital. “Our partnership with Schneider Electric is a key component of our strategy to develop facilities that provide the next-generation power and cooling needed for successful AI and Cloud infrastructure deployments.”
Schneider’s Vandana Singh, Senior Vice President of Data Center Business, noted that the collaboration “enables customers to deploy their critical IT infrastructure rapidly at the heart of a new generation of data center campuses that uniquely address the needs of AI.”
The partnership integrates Schneider’s EcoDesign principles to reduce embedded carbon, along with condition-based maintenance models and data-driven predictive analytics that extend asset life and reduce downtime. It signals an industry shift from static equipment deployment to continuously optimized infrastructure.
CoreWeave’s Expansion: The AI Hyperscaler and the Industrial Cloud
“Every leader we meet knows AI can transform their business. What they need are the right tools.” — Brian Venturo, CoreWeave
If Johnson Controls and Schneider Electric are retooling the data center supply chain, CoreWeave is scaling the compute backbone itself. In a year defined by rapid AI infrastructure expansion, CoreWeave has positioned itself as both the AI Hyperscaler™ and the connective tissue between model developers, industrial innovators, and cloud consumers.
On September 25, CoreWeave announced an expanded agreement with OpenAI valued at up to $6.5 billion, bringing their total contractual relationship to $22.4 billion.
This follows prior agreements of $11.9 billion (March 2025) and $4 billion (May 2025) — each aimed at powering OpenAI’s next-generation training and inference workloads.
“We are proud to expand our relationship with OpenAI, a company consistently at the forefront of advancing artificial intelligence,” said Michael Intrator, CoreWeave’s co-founder, chairman, and CEO. “This milestone affirms the trust that world-leading innovators have in CoreWeave’s ability to power the most demanding inference and training workloads at an unmatched pace.”
OpenAI’s Peter Hoeschele, VP of Infrastructure and Industrial Compute, added: “CoreWeave has become an important partner in OpenAI’s broader infrastructure platform. By delivering compute at unmatched speed and scale, they’re helping us advance the frontier of intelligence and ensure AI’s benefits reach everyone.”
Extending AI to the Physical World
Just ten days later, on October 6, CoreWeave announced the acquisition of Monolith AI, a London-based firm specializing in applying machine learning to complex physics and engineering problems.
The acquisition marks a significant step beyond hyperscale compute, integrating CoreWeave’s AI cloud with industrial simulation and test-driven ML workflows used by manufacturers such as Nissan, BMW, and Honeywell.
“Every leader we meet across the industrial and manufacturing sectors knows AI can transform their business,” said Brian Venturo, CoreWeave’s co-founder and Chief Strategy Officer. “What they need are the right tools to solve intractable physics and engineering problems. Monolith has closed that gap.”
Monolith’s CEO, Dr. Richard Ahlfeld, added: “Monolith was founded to put AI directly into the hands of engineers. Joining CoreWeave will allow us to scale that mission dramatically — bringing powerful tools and domain expertise to thousands more builders across industries who are eager to use AI but lack the infrastructure and know-how.”
CoreWeave’s momentum has been relentless in 2025. Alongside the OpenAI expansion and Monolith acquisition, it announced a $1.5 billion commitment to UK AI growth and launched CoreWeave Ventures, targeting startups building the next layer of the AI stack, including OpenPipe for reinforcement learning and Weights & Biases for model iteration and experiment tracking.
Convergence and Implications: A New Map for Data Center Strategy
From cooling loops to prefabricated pods to AI-native cloud platforms, these developments illustrate a singular trend: the vertical integration of the AI data center supply chain.
AI’s massive power draw (and its 30–40% cooling energy overhead) is forcing a convergence of formerly siloed disciplines: thermal management, modular construction, power systems, and compute orchestration.
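That overhead is effectively a compute lever: for a fixed grid allocation, IT capacity is site power divided by (1 + overhead). The 100 MW site and the overhead points below are illustrative, with the 30–40% band from the figure above and 10% representing an aggressive liquid-cooled target.

```python
# Why cooling overhead is a compute lever: at a fixed grid allocation,
# every point of non-IT overhead displaces IT load. The 100 MW site and
# the overhead values are illustrative assumptions.
SITE_MW = 100.0

for overhead in (0.40, 0.30, 0.10):
    it_mw = SITE_MW / (1 + overhead)
    print(f"{overhead:.0%} non-IT overhead -> ~{it_mw:.0f} MW available for IT")
```

Cutting overhead from 40% to 10% frees roughly 20 MW of IT capacity on the same feed — capacity that would otherwise require new grid interconnection.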
The new generation of partnerships reflects this necessity, merging capital investment with engineering precision and ecosystem coordination.
For data center operators, the implications are profound:
- Shorter Timelines: Modular prefabrication and pre-integrated systems, like Schneider’s EcoStruxure Pods, are cutting months from deployment schedules.
- Higher Density: Two-phase cooling innovations are unlocking greater GPU capacity per MW, directly addressing AI rack density.
- Smarter Operations: Predictive maintenance and condition-based analytics are turning infrastructure from static assets into dynamic systems.
- Strategic Realignment: Firms once known for “equipment” are now becoming infrastructure partners for AI factories and hyperscale deployments.
As we have chronicled, in the AI era the data center’s future is no longer just about square footage or megawatts; it’s about how efficiently every watt and drop of coolant supports compute. In this “infrastructure alignment era,” companies like Johnson Controls, Schneider Electric, and CoreWeave aren’t just adapting to AI — they’re defining the architecture of its physical world.
At Data Center Frontier, we talk the industry talk and walk the industry walk. In that spirit, DCF Staff members may occasionally use AI tools to assist with content. Elements of this article were created with help from OpenAI's GPT5.
About the Author
Matt Vincent
A B2B technology journalist and editor with more than two decades of experience, Matt Vincent is Editor in Chief of Data Center Frontier.