NVIDIA and Partners Define a Repeatable Blueprint for AI Factory Data Centers
Key Highlights
- NVIDIA's reference designs utilize digital twins in Omniverse DSX to model and optimize AI data center infrastructure before construction begins.
- Ecosystem partners like Siemens, Schneider Electric, and Trane provide scalable power, cooling, and control architectures tailored for NVIDIA's high-density AI platforms.
- Designs are focused on industrial-scale, repeatable modules that reduce project risk, shorten deployment timelines, and improve energy efficiency.
- The approach promotes OT/IT convergence, enabling real-time telemetry, safety, and operational control within a unified digital twin environment.
- This industrial model aims to standardize AI factory construction, supporting gigawatt-scale deployments with flexible, scalable infrastructure solutions.
NVIDIA’s latest wave of AI data center reference designs is intended to serve as a practical blueprint for building AI factories at scale. As customers push toward faster deployment and more standardized, turnkey approaches, ecosystem partners including Siemens, nVent, Schneider Electric, and Trane Technologies have introduced aligned data center architectures that map cleanly onto NVIDIA’s own designs, forming complementary pillars for power, controls, and thermal management.
At the center of this effort is an expanding ecosystem of co-engineered reference designs and digital twins, developed in collaboration with these partners and anchored in NVIDIA’s Omniverse DSX “AI factory” framework and its latest GB300 NVL72 and Vera Rubin–class systems.
Why Reference Designs Matter at AI-Factory Scale
Modern AI clusters are driving power densities into the ~130–150 kW per rack range (and rising) for GB300 NVL72 and Grace Blackwell–generation platforms. These systems are no longer being deployed as isolated halls, but as integrated campuses spanning 100 MW to multi-gigawatt scales, often described as full “AI factories.” Supporting this level of density and scale requires liquid cooling architectures and power topologies that fall well outside traditional enterprise or hyperscale design patterns.
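For a sense of what those figures imply, the back-of-the-envelope sketch below estimates how many liquid-cooled racks a single 100 MW block can host at the densities cited above. The PUE-style overhead factor and the function name are assumptions made for illustration, not values from NVIDIA or any partner design.

```python
# Rough illustrative sizing only. The 130-150 kW/rack and 100 MW figures come
# from the article; the PUE assumption and helper name are hypothetical.

def racks_per_block(block_mw: float, kw_per_rack: float, assumed_pue: float = 1.2) -> int:
    """Estimate how many liquid-cooled racks a power block can host."""
    it_kw = block_mw * 1000 / assumed_pue   # facility power left for IT after overhead
    return int(it_kw // kw_per_rack)

for density in (130, 150):
    print(f"{density} kW/rack -> roughly {racks_per_block(100, density)} racks per 100 MW block")
```

Even with generous overhead assumptions, a 100 MW block works out to only a few hundred racks at these densities, which is part of why campus-scale planning now starts from power blocks rather than individual halls.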
When each project is designed from scratch, the consequences are familiar:
- Extended design and permitting cycles.
- Unintentional vendor lock-in.
- Elevated risk of under- or over-building electrical and cooling infrastructure.
- Increased friction with utilities and regulators due to highly customized site designs.
In September 2025, NVIDIA formalized an alternative approach with its AI factory reference designs, centered on a canonical digital twin built in Omniverse DSX. This model integrates IT infrastructure (NVIDIA DGX, GB300 NVL72, and Vera Rubin–class systems) with OT layers including power distribution, cooling systems, building infrastructure, and grid interfaces. The result is a standardized, parameterized blueprint intended to reduce risk and compress timelines at scale.
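To make the idea of a “standardized, parameterized blueprint” concrete, the sketch below models a site template as a small set of data structures that can be re-parameterized per project rather than redesigned. All class and field names here are hypothetical illustrations; they are not drawn from Omniverse DSX or any partner schema.

```python
# Hypothetical sketch of a "parameterized blueprint" expressed as data.
from dataclasses import dataclass

@dataclass
class PowerBlock:
    capacity_mw: float       # e.g. a 100 MW baseline block
    voltage_class: str       # medium- or low-voltage distribution tier

@dataclass
class CoolingLoop:
    capacity_mw: float       # heat-rejection capacity of the loop
    supply_temp_c: float     # facility water supply temperature

@dataclass
class AIFactoryBlueprint:
    rack_density_kw: float               # e.g. 130-150 kW per rack
    power_blocks: list[PowerBlock]
    cooling_loops: list[CoolingLoop]

    def total_power_mw(self) -> float:
        return sum(b.capacity_mw for b in self.power_blocks)

# The same template can be re-parameterized per site instead of redesigned:
site = AIFactoryBlueprint(
    rack_density_kw=142,
    power_blocks=[PowerBlock(100, "MV"), PowerBlock(100, "MV")],
    cooling_loops=[CoolingLoop(200, 32.0)],
)
print(site.total_power_mw())   # 200.0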
These NVIDIA reference designs have, in turn, catalyzed a broader ecosystem of co-engineered architectures from operational technology vendors, with power, cooling, and control systems explicitly tuned to NVIDIA’s AI factory model.
By December 2025, three major vendors had announced production-ready reference designs aligned with NVIDIA’s framework:
- Siemens + nVent – Power distribution, protection, and liquid-cooled rack- and row-level infrastructure designed for 100 MW AI campuses.
- Schneider Electric – Integrated power and liquid-cooling controls, along with facility-level reference designs for NVIDIA GB300 NVL72 AI halls.
- Trane Technologies – Full-stack thermal management architectures for gigawatt-scale NVIDIA AI factories, anchored in Omniverse DSX.
The emerging pattern is clear: NVIDIA defines the AI-factory “IT spine” and digital twin, while ecosystem partners deliver production-grade power, thermal, and control architectures that can be parameterized and replicated across real-world projects.
The following sections examine how each of these reference designs maps onto NVIDIA’s broader AI factory model.
Siemens + nVent: A 100-MW Template for Liquid-Cooled AI Campuses
Siemens and nVent have introduced a joint reference architecture purpose-built for NVIDIA AI data centers, targeting 100-MW hyperscale AI sites built around large, liquid-cooled clusters such as DGX SuperPOD configurations with GB200-class systems. The premise is straightforward but consequential: AI data centers at this density and scale should be designed less like incrementally denser cloud facilities and more like industrial plants, with corresponding rigor around automation, safety, and resilience.
Under this model, developers can adopt the Siemens–nVent architecture as a standardized baseline, then localize it for utility interconnection, code compliance, and permitting, rather than re-engineering electrical and cooling systems from the ground up for each new site.
According to the companies, the reference architecture centers on several core elements:
Scale and Tiering
- Designed explicitly for 100-MW hyperscale AI data centers, with liquid cooling as a foundational requirement and tuning for DGX SuperPOD and DGX GB200 deployments.
- Advertised as Tier III–capable, positioning the design for mission-critical production workloads rather than experimental or pilot environments.
Power and Automation (Siemens)
- Industrial-grade medium- and low-voltage power distribution, switchgear, and protection systems.
- Automation and SCADA capabilities integrated with NVIDIA-aligned digital twins via Siemens’ Xcelerator industrial software platform, referred to in the announcement as comprehensive “electrical and automation systems.”
Notably, the architecture establishes 100-MW blocks as the baseline unit of capacity for NVIDIA-era AI factories, rather than the 10–20-MW increments typical of earlier hyperscale builds, mirroring what is now appearing in land acquisition, power procurement, and financing RFPs.
Liquid Cooling Infrastructure (nVent)
- Rack- and row-level liquid cooling systems designed to support NVIDIA’s highest-density AI platforms.
- A modular approach built around repeatable cooling “building blocks” that can scale from tens of megawatts to hundreds of megawatts as a campus expands.
Alignment with NVIDIA
- The reference architecture is explicitly built on NVIDIA DGX SuperPOD designs and is intended for direct alignment with NVIDIA’s AI data center roadmap.
- While not detailed extensively in the announcement, the architecture is expected to integrate with Omniverse-based digital twins, enabling coordinated simulation and interoperability with other NVIDIA ecosystem solutions.
Schneider Electric: Controls-Led AI Hall Designs for GB300 NVL72 Density
Schneider Electric unveiled a new set of AI data center reference designs with NVIDIA on Oct. 6, 2025, centered on two related elements: an industry-first AI infrastructure controls reference design enabling OT/IT interoperability with NVIDIA Mission Control and enterprise applications, and a power and cooling reference design tailored specifically to NVIDIA GB300 NVL72 racks operating at very high densities.
At the core of Schneider’s approach is OT/IT convergence around NVIDIA Mission Control. The controls reference design defines how power management, liquid cooling controls, and facility systems integrate with Mission Control and higher-level enterprise platforms, establishing a unified operational view of the AI factory.
As with other components of NVIDIA’s AI factory framework, this design bridges operational technology—Schneider EcoStruxure, power systems, and cooling infrastructure—with IT systems including NVIDIA platforms, BMS, DCIM, and cloud-based tools.
Schneider notes that the controls architecture is reusable across multiple NVIDIA platforms, extending beyond this specific GB300 NVL72 configuration to support broader Grace Blackwell–class deployments.
In effect, Schneider is codifying what “good” infrastructure visibility looks like for real-time telemetry, control, and safety across high-density NVIDIA AI environments.
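The sketch below illustrates, in heavily simplified form, what unifying OT and IT telemetry into one operational snapshot can look like. The metric names and data structures are invented for illustration; they do not represent the actual EcoStruxure or NVIDIA Mission Control interfaces.

```python
# Illustrative only: one way to collapse OT and IT telemetry into a single
# operational snapshot. All names here are hypothetical.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class TelemetrySample:
    source: str          # "power", "cooling", "it", ...
    metric: str          # e.g. "cdu_supply_temp_c", "row_load_kw"
    value: float
    timestamp: datetime

def unified_view(samples: list[TelemetrySample]) -> dict[str, float]:
    """Keep only the latest sample per (source, metric) pair."""
    latest: dict[str, TelemetrySample] = {}
    for s in samples:
        key = f"{s.source}.{s.metric}"
        if key not in latest or s.timestamp > latest[key].timestamp:
            latest[key] = s
    return {k: v.value for k, v in latest.items()}

now = datetime.now(timezone.utc)
print(unified_view([
    TelemetrySample("cooling", "cdu_supply_temp_c", 30.5, now),
    TelemetrySample("power", "row_load_kw", 1180.0, now),
    TelemetrySample("it", "gpu_utilization_pct", 96.0, now),
]))
```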
Beyond the Control Stack: GB300 NVL72 Data Hall Reference Design
In parallel, Schneider introduced a more concrete data hall reference design aimed at next-generation NVIDIA GPU deployments. This design targets AI factories operating at up to approximately 142 kW per rack, with a specific focus on single-hall deployments of NVIDIA GB300 NVL72 systems.
Importantly, Schneider frames the design as forward-looking rather than static. While optimized for GB300 NVL72 today, it is positioned as a foundation for future NVIDIA Blackwell Ultra–class architectures, anticipating continued increases in rack density.
Schneider is backing this strategy with significant commercial momentum in the U.S. data center market, citing more than $2.3 billion in contracts with operators such as Switch and Digital Realty, alongside targeted acquisitions, notably including Motivair, to strengthen its liquid cooling portfolio.
The GB300 NVL72 reference design spans four primary technical domains:
- Facility power – Sizing and structuring power distribution to support NVL72-class densities.
- Facility cooling – Liquid cooling topologies and integration with plant-level systems.
- IT space – Rack layouts, white-space planning, and containment strategies as required.
- Lifecycle software – Digital tools to manage, monitor, and optimize infrastructure over time.
Trane Technologies: Reference Design #501 for Gigawatt-Scale Thermal Management
On Oct. 28, 2025, Trane Technologies announced what it describes as the industry’s first comprehensive thermal management system reference design for gigawatt-scale NVIDIA AI factories. Designated Reference Design #501, the architecture is built around NVIDIA’s Omniverse DSX blueprint and is intended to function natively as part of the AI factory digital twin.
According to Trane, Reference Design #501 is engineered for AI factories operating at gigawatt scale and supporting NVIDIA’s latest AI infrastructure. It is the first thermal reference design explicitly tied to Omniverse DSX, allowing the entire thermal system to exist as a fully simulatable digital twin. Trane has also highlighted support for facilities designed to run NVIDIA’s next-generation Vera Rubin–class systems.
Thermal Architecture
The reference design defines an end-to-end thermal stack, including:
- Chilled water systems, heat rejection strategies, and liquid loop architectures optimized for high-density, liquid-cooled AI racks.
- Design guidelines that enable expansion from an initial deployment to full gigawatt-scale load without requiring a fundamental redesign of the thermal plant.
- Performance- and scalability-focused configurations intended to accelerate deployment timelines and reduce thermal design risk.
Trane positions Reference Design #501 as a mechanism for compressing both design and construction cycles, while preserving flexibility as AI factories scale and hardware generations evolve.
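As a rough illustration of that “scale without redesign” idea, the sketch below grows a thermal plant by adding identical modules as IT load increases, rather than resizing the plant. The 50 MW module size and 10% headroom factor are assumptions for illustration, not figures from Reference Design #501.

```python
# Hedged sketch of modular thermal-plant expansion; module size and headroom
# are hypothetical values, not Trane or NVIDIA figures.
import math

MODULE_MW = 50  # assumed heat-rejection capacity per repeatable plant module

def modules_needed(it_load_mw: float, headroom: float = 1.10) -> int:
    """Identical thermal modules required to cover the load plus a margin."""
    return math.ceil(it_load_mw * headroom / MODULE_MW)

for phase_mw in (100, 400, 1000):
    print(f"{phase_mw} MW of IT load -> {modules_needed(phase_mw)} modules of {MODULE_MW} MW")
```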
In practical terms, the division of labor across NVIDIA’s ecosystem partners becomes clearer at this scale: Siemens and nVent address how to safely deliver and distribute 100 MW blocks of power for NVIDIA AI clusters, while Trane’s Reference Design #501 focuses on how to maintain thermal stability across a full gigawatt of AI infrastructure in a way that scales coherently over time.
Digital Twins as the Operational Core of the Reference Design
Across NVIDIA’s AI factory strategy, the reference design is explicitly embodied as a digital twin. NVIDIA positions Omniverse DSX as the canonical environment in which the AI factory is modeled, while Trane’s Reference Design #501 anchors the thermal plant directly within that same DSX framework, placing cooling infrastructure and IT systems inside a shared simulation environment.
Siemens and Schneider similarly emphasize industrial-grade automation, controls, and lifecycle software that plug into this unified digital twin loop. In each case, operational technology (power distribution, cooling systems, and building controls) is modeled and managed alongside NVIDIA’s AI infrastructure rather than as a separate, downstream layer.
The result is that these reference designs are not static documents or one-time engineering exercises. Instead, they exist as continuously updatable, simulatable models that allow developers and operators to test power and cooling scenarios before construction begins, and to optimize performance over time as workloads shift, GPU platforms evolve, and grid conditions change. Just as importantly, the digital twin provides a shared “source of truth” across multiple vendors on a single campus.
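A heavily simplified example of that kind of what-if analysis: the toy model below sweeps facility water supply temperatures and estimates cooling overhead for a fixed IT load. The relationship between supply temperature and overhead is invented for illustration and is not any vendor’s actual digital-twin model.

```python
# Toy "what-if" sweep in the spirit of simulation-driven design.
# The overhead model below is a hypothetical stand-in.

def cooling_power_mw(it_load_mw: float, supply_temp_c: float) -> float:
    """Hypothetical: cooling overhead as a fraction of IT load, improving with warmer water."""
    base_fraction = 0.18                                   # assumed overhead at 20 C supply
    fraction = max(0.06, base_fraction - 0.004 * (supply_temp_c - 20))
    return it_load_mw * fraction

for temp_c in (20, 28, 36):
    total = 100 + cooling_power_mw(100, temp_c)
    print(f"supply {temp_c} C -> facility load of roughly {total:.1f} MW for 100 MW of IT")
```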
Across their respective announcements, all three partners point to common outcomes enabled by this approach:
- Improved energy efficiency through liquid cooling and optimized thermal management relative to traditional air-cooled designs.
- Faster deployment timelines that remain grid-aware, including the ability to co-optimize facility design with utility constraints, distributed energy resources, and the potential for future onsite generation or storage.
- Reduced energy use and project risk through simulation-driven design, with NVIDIA citing internal and partner studies showing potential reductions of roughly 20% in cooling energy consumption and 30% in overall project timelines.
For regulators and utilities, this distinction matters. These AI factories are being presented not as opaque, always-on loads, but as controllable, optimized industrial systems whose behavior can be modeled, analyzed, and integrated more predictably into broader energy and infrastructure planning.
From Reference Designs to a Repeatable AI Factory Stack
Taken together, these announcements point to a shared objective: establishing a repeatable, NVIDIA-aligned standard for designing and deploying AI factories at industrial scale.
Within this emerging framework, roles are clearly delineated:
- NVIDIA – Provides the IT blueprints—DGX, GB300 NVL72, and Vera Rubin–class systems—along with the AI factory digital twin environment in Omniverse DSX and the operational layer defined by Mission Control.
- Siemens + nVent – Deliver the 100-MW power distribution and liquid-cooling architecture, industrial-grade automation, and rack- and row-level liquid-cooled infrastructure tuned specifically to NVIDIA’s high-density AI clusters.
- Schneider Electric – Supplies the controls architecture and AI hall reference designs, with a particular focus on GB300 NVL72–class density and deep OT/IT integration through NVIDIA Mission Control.
- Trane Technologies – Contributes the macro-scale thermal management blueprint—Reference Design #501—enabling gigawatt-scale NVIDIA AI factories to be modeled, simulated, and expanded within the Omniverse DSX digital twin.
Taken as a whole, this ecosystem comes close to a full-stack kit for building NVIDIA-era AI factories: one that emphasizes standardization, simulation, and repeatability at unprecedented scale. Compared with even a year ago, it represents a meaningful reduction in design uncertainty and execution risk as AI infrastructure pushes deeper into multi-hundred-megawatt and gigawatt territory.
Conclusion: Toward an Industrial Model for AI Infrastructure
What’s emerging is not a single-vendor solution, but a maturing industrial model for AI infrastructure, one in which design intent, physical systems, and operational behavior are tightly coupled from day one.
As AI workloads drive data centers beyond familiar hyperscale boundaries, NVIDIA’s reference designs and its growing ecosystem of power, cooling, and controls partners offer a glimpse of how AI factories may be built going forward: less as bespoke engineering exercises, and more as repeatable, simulatable systems designed to scale with both compute demand and grid realities.
At Data Center Frontier, we talk the industry talk and walk the industry walk. In that spirit, DCF Staff members may occasionally use AI tools to assist with content. Elements of this article were created with help from OpenAI's GPT-5.