Schneider Electric Maps the AI Data Center’s Next Design Era

In conversations at NVIDIA GTC 2026 (Mar. 16-19), Schneider Electric executives Marc Garner and Jim Simonelli outlined how AI infrastructure is forcing a deeper rethink of data center design, from digital twins and liquid cooling to grid interaction, onsite power, storage, and the coming shift toward higher-voltage DC power architectures inside dense racks.
April 1, 2026
23 min read

Key Highlights

  • AI data centers are evolving into fully modeled systems, requiring advanced simulation of energy, cooling, and operational variables before deployment.
  • Digital twin technology is shifting from visualization to a core design and operational tool, enabling cause-and-effect analysis across the data center lifecycle.
  • Electrical infrastructure is becoming more critical, with a focus on power quality, load smoothing, fault ride-through, and ramp-rate management to support AI workloads.
  • Onsite power generation, especially gas turbines, is increasingly vital for large AI facilities, with long-term potential for renewable integration via storage and energy intelligence.
  • The industry is moving toward higher-voltage DC distribution at high densities to optimize space, efficiency, and performance within dense racks.

SAN JOSE, Calif. — NVIDIA GTC 2026 made one thing unmistakably clear: the AI data center is not simply a bigger version of the cloud facilities that came before it. It is becoming something fundamentally different: more integrated, more dynamic, and in some respects more industrial.

That was the throughline in a pair of conversations DCF had at GTC with Schneider Electric executives Marc Garner, the company’s Global President for Cloud & Service Providers, and Jim Simonelli, SVP and CTO of Schneider Electric’s Secure Power division. Taken together, their comments offered a revealing look at how one of the industry’s most deeply embedded infrastructure suppliers is thinking about the next phase of AI buildout.

The AI data center enters its systems era

The message was not limited to higher rack densities, or liquid cooling, or the familiar refrain that power is now the gating factor. Those issues were all there. But Schneider’s executives were describing a broader transformation. The AI data center is evolving into a fully modeled energy-and-compute system, one whose electrical behavior, cooling interactions, operational variables, and even grid impacts increasingly have to be simulated before the facility is built.

That is a meaningful departure from the legacy data center playbook.

If the first phase of the AI boom was about proving demand and pushing GPU clusters to unprecedented scale, the next phase looks increasingly like a test of systems engineering. Speed still matters. Scale still matters. But the ability to coordinate power, cooling, compute, storage, and operations into a stable and repeatable design may matter most of all.

“We can’t build at this scale with trial and error”

Garner’s framing at GTC was broad and strategic, but it kept circling back to a single point: the industry has entered a moment where traditional methods of design and deployment are no longer sufficient.

What struck him, he said, was the sheer scale of what is now being attempted across the industry, and the pressure to deliver infrastructure fast enough to match the pace of AI compute development. That is where Schneider sees digital twin technology moving from an optimization layer to a central design requirement.

Garner described Schneider’s view of digital twin not as a visualization gimmick, but as a way to understand cause and effect across the data center lifecycle: what happens when temperatures shift, when redundancy assumptions change, when power distribution topologies evolve, or when new compute architectures force new cooling and electrical choices.

“We can’t build to the level we’re doing at the moment with the level of resources that are focused on this data center market,” Garner said. “Digital twinning and being able to look at the infrastructure requirements and forward plan of cause and effect and how that’s going to affect an operation of a data center, how we reduce the intrusive maintenance within a data center when it’s in operation, becomes a really key critical aspect of the future of data center deployment, data center design.”

That is not a minor statement. It suggests that for Schneider, the next great constraint is not simply utility power or cooling capacity in isolation. It is complexity itself.

As AI racks push higher, cooling topologies change, power distribution grows more intricate, and deployment windows remain brutally compressed, the old idea of building first and working out the details later starts to break down. The cost of mistakes rises. The room for operational improvisation shrinks. The interactions among subsystems become too consequential to leave to trial and error.

That is the terrain on which Schneider is placing its software and digital engineering bet.

Digital twin moves from concept to deployment tool

The GTC context for this discussion was Schneider’s work around NVIDIA Omniverse, integrated with Schneider’s own AVEVA and ETAP platforms.

Garner described AVEVA as the software side of Schneider’s business that enables customers to simulate the design and operation of a data center. AVEVA has long been established in industrial and manufacturing environments, he noted, but Schneider is now adapting that capability to the emerging needs of AI data centers. Combined with ETAP, Schneider’s electrical design and engineering platform, the stack is intended to provide an end-to-end digital twin, spanning design, simulation, and operation.

That matters because liquid cooling and high-density AI compute are not simply changing one or two components inside the data center. They are changing the topology of the facility itself.

Whether a deployment uses a single CDU or an N+2 redundancy configuration, Garner said, the software can model coolant flow, simulate thermal behavior, and show how those decisions ripple through the rest of the infrastructure. The same applies to power and thermal setpoints. Raise a temperature setpoint by a couple of degrees, for example, and the question is no longer isolated to cooling performance. It becomes a systems question, affecting other infrastructure elements as well.
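To make that ripple effect concrete, a rough sketch helps. The toy model below is not Schneider's AVEVA or ETAP tooling; the chiller-efficiency curve, coefficients, and function names are invented for illustration. It simply shows how a few degrees of supply-water temperature become an electrical question at the facility level.

```python
# Illustrative sketch only: a toy model of how a cooling setpoint change
# ripples into facility electrical load. Coefficients are assumptions,
# not vendor data.

def chiller_cop(supply_temp_c: float) -> float:
    """Toy efficiency curve: COP improves as chilled-water supply warms."""
    return 4.0 + 0.15 * (supply_temp_c - 18.0)

def facility_power_mw(it_load_mw: float, supply_temp_c: float) -> float:
    """IT load plus the chiller power needed to reject that heat."""
    return it_load_mw + it_load_mw / chiller_cop(supply_temp_c)

baseline = facility_power_mw(it_load_mw=100.0, supply_temp_c=18.0)
warmer = facility_power_mw(it_load_mw=100.0, supply_temp_c=21.0)

print(f"Facility draw at 18 C supply: {baseline:.1f} MW")
print(f"Facility draw at 21 C supply: {warmer:.1f} MW")
print(f"Headroom freed by the change: {baseline - warmer:.2f} MW")
```

In a real facility the same change also shifts CDU approach temperatures, pump energy, and allowable chip inlet conditions, which is why Schneider argues the answer has to come from a coupled model rather than a spreadsheet.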

His summary of the value proposition was succinct: the software “takes the uncertainty out of design and operation.”

That phrase gets at a larger point. In the AI era, uncertainty is expensive. The density and capital intensity of these deployments mean operators have less tolerance for guesswork, less appetite for disruptive maintenance, and less room to discover late in the process that thermal, electrical, or mechanical assumptions do not hold at production scale.

The data center as a system that must be modeled before it is built

Simonelli took that same digital twin theme and gave it a deeper engineering frame.

In his telling, the Omniverse environment is not really the place where core engineering work originates. It is the shared space where data center designs can be visualized, manipulated, and connected across domains. The actual authoring and engineering happens in purpose-built tools like AVEVA, which can dimension piping, plumbing, and process systems, and ETAP, which handles electrical simulation. Omniverse then becomes the environment where these different models come together in a living representation of the facility.

That is an important distinction. Schneider is not simply trying to create a prettier viewport. It is trying to create continuity from detailed design through simulation into operations.

Simonelli described the workflow this way: take a design blueprint, bring it into AVEVA for real engineering changes, return it to Omniverse for visualization and coordination, and then link it to specialized simulation engines for CFD, power behavior, or process flow. From there, the model can be compared against live operational data, with AVEVA serving as a large-scale operating environment for a facility that may contain tens of thousands or even millions of data points.
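Abstracted away from any specific product, that round trip can be pictured as a pipeline. The sketch below uses invented stub functions and made-up data, not AVEVA, ETAP, or Omniverse APIs, simply to show where authoring, shared visualization, simulation, and live telemetry sit relative to one another.

```python
# Conceptual sketch only: stub functions stand in for vendor tools
# (authoring, shared scene, simulation engines); names and data are invented.

def author_engineering_model(blueprint: dict) -> dict:
    return {**blueprint, "piping_dimensioned": True, "electrical_detailed": True}

def publish_to_shared_scene(model: dict) -> dict:
    return {"scene": model, "domains": ["electrical", "mechanical", "facility"]}

def run_simulations(scene: dict) -> dict:
    return {"peak_coolant_temp_c": 45.0, "peak_bus_load_pct": 92.0}  # invented results

def compare_to_live_data(predicted: dict, observed: dict) -> dict:
    return {k: round(observed.get(k, 0.0) - v, 2) for k, v in predicted.items()}

blueprint = {"site": "example-campus", "racks": 5000}
telemetry = {"peak_coolant_temp_c": 46.2, "peak_bus_load_pct": 90.5}

model = author_engineering_model(blueprint)        # detailed authoring step
scene = publish_to_shared_scene(model)             # shared cross-domain view
predicted = run_simulations(scene)                 # physics questions answered pre-build
print(compare_to_live_data(predicted, telemetry))  # model vs. reality once operating
```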

“You want to simulate this before you put a shovel in the ground,” Simonelli said in effect, describing how operators can test whether power variations, cooling interactions, and system behavior really work before construction begins.

That was one of the clearest statements of the week. It captures the industrial logic now taking hold in AI infrastructure. The facility itself is becoming too complex, too expensive, and too tightly coupled to be treated as a simple shell for compute. It has to be designed as a system in advance.

Not just DCIM at a larger scale

Simonelli also drew an important line between this new software stack and the more familiar category of data center infrastructure management.

AVEVA, he indicated, is not simply a supersized DCIM platform. It is a different class of tool aimed at a different class of facility. For smaller data centers, traditional DCIM remains adequate and in some cases more appropriate. But at gigawatt scale, or even at the multi-hundred-megawatt AI campus scale now coming into view, Schneider believes operators need something closer to an industrial operating environment than a conventional facility monitoring platform.

That distinction matters because it clarifies what Schneider thinks is changing about the data center itself. The company is not just adding more data points to the same basic operating model. It is responding to a shift in the nature of the asset.

The AI campus increasingly resembles a coordinated industrial system, where electrical behavior, fluid behavior, compute density, and operational telemetry all have to be understood together.

The inference inflection point broadens the buildout

Garner’s larger demand thesis at GTC also deserves attention because it helps explain why Schneider is talking this way now.

Training has dominated the infrastructure narrative over the last two years, but Garner argued that inference is where the broader opportunity lies. The demand for data, he said, remains unprecedented, and inferencing expands the scope of deployment by driving lower-latency, more distributed infrastructure needs.

“The training aspect of the business is where there’s been so much of the focus and so much of the attention over the last couple of years,” he said. “But it has to move to that inferencing model.”

That movement matters because it reframes the scale story. Training clusters may remain the headline-grabbers, but inference changes the breadth of the market. It pulls AI into more applications, more locations, and more day-to-day use cases. It also creates another layer of demand on top of still-growing traditional cloud workloads.

Garner was explicit on that point. Whatever the pace of AI infrastructure growth, cloud demand has not gone away. Enterprise customers are still consuming traditional colocation and cloud capacity at a rapid clip. In his view, the industry is dealing not with a single wave of demand but with layered demand: legacy cloud growth, AI training, and the coming surge of inference.

That is why he showed little concern with arguments about a broader AI infrastructure bubble. “The demand for data isn’t going to go away,” Garner said. “We still don’t have enough compute.”

His reasoning is straightforward and persuasive. Efficiency gains in AI do not necessarily reduce infrastructure demand. In many cases they accelerate application adoption and broaden the market, which in turn drives more inference demand and more infrastructure need. The more accessible AI becomes, the more the supporting infrastructure problem expands.

Why power quality is becoming a central design issue

If Garner laid out the market and design backdrop, Simonelli delivered one of the most technically revealing portions of either conversation: a detailed explanation of how AI workloads are changing the electrical relationship between the data center and the grid.

His point was not simply that AI data centers consume more power. It was that they behave differently. Simonelli described three grid-facing roles that Schneider now sees as essential for UPS and storage systems in the AI era.

The first is load smoothing. AI training workloads can produce significant load variations, and Schneider is embedding capabilities into its UPS platforms to smooth those variations so they do not feed instability back to the grid. According to Simonelli, that capability did not exist in this form a couple of years ago. Now it is being built into Schneider’s product stack and, in some cases, enabled via firmware updates on existing systems.

The second is fault ride-through. If a transmission disturbance causes voltage collapse or a brief power fault on the grid, the facility cannot simply drop away from the grid in a way that creates a second destabilizing event when service returns. Storage and UPS systems now have to help maintain more stable pre- and post-fault behavior, reducing the facility’s contribution to grid disturbance.

The third is ramp-rate management. If a compute fault causes a major AI facility to stop and then restart, operators cannot shock the grid by dropping hundreds of megawatts or a gigawatt of load in seconds and then bringing it all back at once. Storage, again, becomes part of the answer, absorbing and controlling that transition.
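A rough sketch makes the first and third of those roles concrete. The code below is purely illustrative, not Schneider firmware; the filter constant, ramp limit, and load profile are invented. It shows a battery covering the gap between a bursty training load and a slow-moving grid setpoint, and separately capping how fast the grid-facing load is allowed to change.

```python
# Illustrative only: toy load smoothing and ramp-rate limiting with storage.
# All numbers are invented for the example.

def smooth_load(load_mw, alpha=0.1):
    """Grid draw follows an exponential moving average of the compute load;
    the battery serves (or absorbs) the difference each interval."""
    grid, battery = [], []
    setpoint = load_mw[0]
    for p in load_mw:
        setpoint += alpha * (p - setpoint)   # grid-facing draw moves slowly
        grid.append(setpoint)
        battery.append(p - setpoint)         # positive = discharge, negative = charge
    return grid, battery

def ramp_limited(target_mw, current_mw, max_ramp_mw=20.0):
    """Move grid draw toward the new compute load, but no faster than the
    agreed ramp rate per interval; storage bridges the remainder."""
    step = max(-max_ramp_mw, min(max_ramp_mw, target_mw - current_mw))
    return current_mw + step

# Smoothing: compute bursts alternating with checkpoint/sync lulls
profile = [300, 300, 120, 310, 305, 130, 315, 300, 125, 310]
grid, battery = smooth_load(profile)
print([round(g) for g in grid])      # what the utility sees
print([round(b) for b in battery])   # what the battery absorbs or injects

# Ramp limiting: a fault drops a 600 MW campus to 50 MW in seconds
draw, trace = 600.0, []
for _ in range(10):
    draw = ramp_limited(50.0, draw)
    trace.append(round(draw))
print(trace)  # a controlled 20 MW-per-interval descent, not a 550 MW step
```

Fault ride-through is harder to reduce to a few lines, since it involves inverter and storage behavior during and immediately after a grid disturbance, but the same storage hardware is doing the work.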

Those are significant changes. In the legacy view, UPS systems were backup protection tools. In Simonelli’s telling, they are becoming something closer to active grid-balancing assets.

That is a profound shift in the role of data center electrical infrastructure. It suggests that the AI data center, particularly at gigawatt scale or in conjunction with onsite generation, is starting to behave less like a passive consumer of power and more like an active participant in power system stability.

Onsite generation magnifies the need for control

Those grid dynamics become even more important when onsite generation enters the picture.

Simonelli made clear that Schneider sees a surge in onsite power activity around large AI facilities, far beyond what existed just a few years ago. That is not surprising. Across the market, the difficulty of securing timely utility power has made onsite or behind-the-meter generation an increasingly serious part of the conversation.

But his comments were notable for their emphasis on stability. Once a facility introduces onsite generation, especially turbine-based generation, variability becomes even less tolerable. If AI workloads are producing sharp power swings, and those swings are being served in part by local generation, then storage and controls become essential to keeping the system stable.

This is where Simonelli’s power engineering discussion connects directly to Schneider’s digital twin push. If these facilities are going to combine utility interconnection, onsite gas generation, storage, AI training variability, and advanced cooling systems, then the interactions among those elements have to be understood in advance.

That is exactly the kind of problem digital twin and simulation are meant to solve.

“Getting heat out” is no longer the hardest problem

Another striking point from Simonelli was his view that cooling, while still challenging, is no longer the industry’s biggest unknown.

“Three years ago,” he said in essence, “getting heat out of the rack was the challenge. Now it’s getting power into the rack and into the site.”

That line deserves to linger.

For the last two years, the AI infrastructure conversation has been saturated with liquid cooling: direct-to-chip, coolant distribution, warm water, rear-door heat exchangers, CDU configurations, thermal envelopes, and rack densities that demolished the assumptions of the air-cooled era.

None of that has disappeared. But Simonelli’s argument is that the industry now increasingly understands liquid cooling as an engineering problem. Difficult, yes. Supply-constrained in places, yes. But no longer the primary unknown.

The more acute constraint is now electrical. That means not only access to grid power and transmission infrastructure, but the internal challenge of moving power into ever denser racks and doing so in a way that leaves room for the compute itself.

The coming shift to higher-voltage DC

That internal power challenge led Simonelli to one of the most consequential architectural topics in the interview: the likely transition toward higher-voltage DC distribution at very high rack densities.

He framed it pragmatically. At current density levels, the industry knows how to get power into racks at 200 or 300 kilowatts. But as densities rise toward 400 kilowatts and beyond, conventional AC approaches start to run into physical limits. Too much cable, too much copper, too much conversion equipment, and too much space consumed by power infrastructure rather than GPUs.

At that point, he said, higher-voltage DC becomes attractive not for philosophical reasons, but because it reduces current, shrinks conductor size, saves space, and leaves more room for revenue-generating compute.
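The arithmetic behind that argument is straightforward, even in a deliberately oversimplified form. The sketch below treats each option as a simple two-conductor bus at the stated voltage and uses a rough planning figure for current density; real three-phase AC and higher-voltage DC systems differ in detail, but the scaling is the point.

```python
# Back-of-the-envelope only: why higher distribution voltage shrinks the
# conductors feeding a dense rack. Ignores power factor, conversion
# losses, and code-required derating; all figures are assumptions.

RACK_KW = 400.0
CURRENT_DENSITY_A_PER_MM2 = 3.0   # rough planning figure, not a design value

def feeder_estimate(volts: float) -> tuple[float, float]:
    amps = RACK_KW * 1000.0 / volts
    copper_mm2 = amps / CURRENT_DENSITY_A_PER_MM2
    return amps, copper_mm2

for volts in (415.0, 800.0):      # e.g. low-voltage AC vs. an 800 V DC bus
    amps, copper = feeder_estimate(volts)
    print(f"{volts:>5.0f} V: ~{amps:,.0f} A, ~{copper:,.0f} mm^2 of copper per path")
```

Roughly halving the current roughly halves the conductor cross-section, and that is space and copper that can go back to compute instead of power delivery.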

“It is again a paradigm shift,” Simonelli said of DC power at these densities. “But it won’t be everywhere.”

That is probably right. The transition will not be universal, and the exact thresholds will evolve. But his underlying point is powerful. As rack densities climb, electrical architecture starts to matter not only for efficiency and reliability, but for physical space allocation inside the rack. Put differently, power distribution becomes a compute-enablement issue.

Distance between accelerators matters, too. The closer GPUs and TPUs can be kept together, the better they perform. If power infrastructure can be compacted, more of the rack can be devoted to dense compute, improving the economics and performance of the system.

That is a strong example of how AI is collapsing traditional boundaries between facility engineering and compute architecture. The two are no longer cleanly separable.

Gas now, renewables over time

On onsite power, Simonelli was refreshingly direct. If the goal is dispatchable onsite generation at the scale now being contemplated for AI facilities, he said, “there really isn’t an alternative other than gas” today.

That is not a dismissal of renewables. Rather, it is a statement about near-term practicality. Gas turbines are the technology that can currently provide the kind of reliable, scalable onsite generation large AI campuses need in the timeframe the market is demanding.

At the same time, Simonelli offered a more nuanced long-term view. He sees data centers with storage-rich electrical architectures as potentially enabling more renewable penetration on the grid. If large data centers can absorb overproduction from solar or otherwise use storage to interact more flexibly with the grid, they may eventually become more compatible with renewable-heavy systems rather than simply competing with them for capacity.

That perspective is worth noting because it moves beyond the simplistic binary of gas versus renewables. In Schneider’s view, gas is the practical bridge for large-scale onsite generation now, while storage-enabled flexibility could help data centers support a more renewable grid over time.

BESS as strategic AI infrastructure

Battery energy storage also occupies a much more central place in Simonelli’s thinking than the industry sometimes grants it.

In his account, BESS is not just an ancillary asset or backup adjunct. It is becoming strategic infrastructure for AI data centers, serving multiple roles at once: smoothing load swings, stabilizing onsite generation, controlling ramp rates, riding through grid faults, and potentially enabling more flexible use of renewable power.

The same basic storage technologies, he noted, can be applied across different contexts. What changes more than the chemistry is the control logic and the way those systems are used.

That too reflects the larger systems theme running through both interviews. The future AI data center is not being built around one hero technology. It is being built around coordination among power electronics, storage, controls, software, cooling, and compute.

Schneider’s energy-native AI ambition

One of the more forward-looking aspects of Simonelli’s interview was his description of Schneider’s thinking around infrastructure-native AI and “energy intelligence.”

He spoke of a developing data layer capable of ingesting power, cooling, and service data into a single environment, one designed to be native to AI and fluent in the language of buildings and energy systems. He described the idea as something akin to a foundational model, but for energy and infrastructure rather than text or documents.

That distinction matters.

Generic models can summarize alarms or describe events. But they do not necessarily understand what a pressure variation in a thermal control loop means, or how it relates to cooling behavior, power draw, and service conditions. Schneider’s ambition appears to be to build a model that does understand those things because it has been trained on the relevant infrastructure domain.

That creates a useful distinction between agentic frameworks and domain intelligence. Simonelli made clear that Schneider does not necessarily need to own the general-purpose agent shell. It is comfortable using open or partner-driven agentic systems, including NVIDIA’s own tooling, where appropriate. What Schneider wants to own is the infrastructure-native intelligence beneath those frameworks.

That may prove to be an important strategic position. As AI operations tooling proliferates, the companies that succeed may not be the ones with the most generalized AI wrapper, but the ones with the deepest domain-specific understanding of the systems under management.

Omniverse as the common layer

Simonelli also offered a helpful perspective on a question many visitors to GTC were likely asking: if Schneider, Siemens, Cadence, and others all appear to be building around digital twin and Omniverse, does that turn the space into a circular firing squad?

His answer was more nuanced.

Yes, the companies continue to compete in authoring tools, engineering environments, and simulation capabilities. But Omniverse, in his view, is emerging as a shared cross-domain layer where designs can be visualized and understood together. Customers do not want isolated viewports for electrical, mechanical, robotics, and facility domains. They want one environment where all those elements can be seen in relationship.

That does not eliminate competition. It changes where some of it happens.

In other words, Schneider may compete with Siemens or Cadence in the underlying engineering stack while still participating in a common interoperability and visualization environment on top. For customers grappling with multi-domain AI infrastructure design, that may be not only acceptable but necessary.

Reference designs and the industrialization of deployment

Garner, for his part, also emphasized Schneider’s ongoing work with NVIDIA on reference designs.

That is more than partnership theater. In a market rushing toward repeatable deployment at extraordinary scale, reference architectures become a way to compress engineering cycles, reduce uncertainty, and standardize around known-good infrastructure patterns.

Schneider’s role here is revealing. The company is not only supplying gear. It is helping encode infrastructure assumptions into reusable templates aligned with NVIDIA’s compute roadmaps.

That is another sign that AI infrastructure is becoming more industrialized. Standardization is no longer the enemy of innovation. It is increasingly one of the prerequisites for speed.

A feature of the next era: hybrid AI factories

Another subtle but important point from Simonelli concerns workload mix.

He suggested that even at the largest scale, these AI factories are unlikely to remain purely monolithic training environments forever. Instead, they may evolve toward hybrid configurations, with different kinds of compute serving different kinds of workloads inside the same broad campus environment.

That aligns with broader signals from GTC, where training still dominated much of the spectacle but inference, specialized compute, and token economics increasingly shaped the underlying conversation.

The implication is that the future AI campus may not be a single-purpose machine. It may be a layered compute estate, with infrastructure choices increasingly influenced by workload diversity as well as density.

The deeper lesson from Schneider at GTC

What Schneider Electric brought to GTC 2026 was not simply a list of products or a set of partnership announcements. It was a coherent view of where the AI data center is headed. That view has several core components.

First, AI infrastructure has entered a simulation-first era. The data center must increasingly be modeled before it is built, because the interdependence among power, cooling, storage, and compute has grown too consequential to manage reactively.

Second, the electrical behavior of AI workloads is changing the relationship between the data center and the grid. UPS and storage are evolving from backup assets into dynamic control systems for load smoothing, fault ride-through, and ramp management.

Third, cooling, while still transformative, is no longer the only or even the hardest engineering problem. Power availability, power delivery, and electrical topology now loom larger, both at the site level and inside the rack.

Fourth, the industry is moving toward a more integrated energy-and-compute architecture, one in which onsite generation, storage, grid interaction, and high-density distribution become part of the same design conversation.

And fifth, the operating layer of the future data center will likely be shaped not just by generalized AI, but by infrastructure-native intelligence trained to understand the physics, signals, alarms, and interactions of real facilities.

For Data Center Frontier readers, the significance of that message is hard to miss.

The AI data center has not merely outgrown the traditional enterprise facility. It is beginning to outgrow the conceptual boundaries of the data center itself. It is becoming an engineered system of systems, one part compute platform, one part power system, one part thermal machine, and one part digital model.

That may be the most important lesson from Schneider Electric’s presence at GTC this year. The next phase of AI infrastructure will not be won simply by whoever can procure the most GPUs. It will also be shaped by who can best design, simulate, power, cool, and operate the vast physical systems required to make those GPUs useful at industrial scale.

And in San Jose last month, Schneider made clear that it sees that systems challenge as the real frontier now opening.

 

At Data Center Frontier, we talk the industry talk and walk the industry walk. In that spirit, DCF Staff members may occasionally use AI tools to assist with content. Elements of this article were created with help from OpenAI's GPT5.


About the Author

Matt Vincent

Matt Vincent is Editor in Chief of Data Center Frontier, where he leads editorial strategy and coverage focused on the infrastructure powering cloud computing, artificial intelligence, and the digital economy. A veteran B2B technology journalist with more than two decades of experience, Vincent specializes in the intersection of data centers, power, cooling, and emerging AI-era infrastructure. Since assuming the EIC role in 2023, he has helped guide Data Center Frontier’s coverage of the industry’s transition into the gigawatt-scale AI era, with a focus on hyperscale development, behind-the-meter power strategies, liquid cooling architectures, and the evolving energy demands of high-density compute, while working closely with the Digital Infrastructure Group at Endeavor Business Media to expand the brand’s analytical and multimedia footprint.

Vincent also hosts The Data Center Frontier Show podcast, where he interviews industry leaders across hyperscale, colocation, utilities, and the data center supply chain to examine the technologies and business models reshaping digital infrastructure. He has served as Head of Content for the Data Center Frontier Trends Summit since its inception.

Before becoming Editor in Chief, he served in multiple senior editorial roles across Endeavor Business Media’s digital infrastructure portfolio, with coverage spanning data centers and hyperscale infrastructure, structured cabling and networking, telecom and datacom, IP physical security, and wireless and Pro AV markets. He began his career in 2005 within PennWell’s Advanced Technology Division and later held senior editorial positions supporting brands such as Cabling Installation & Maintenance, Lightwave Online, Broadband Technology Report, and Smart Buildings Technology.

Vincent is a frequent moderator, interviewer, and keynote speaker at industry events including the HPC Forum, where he delivers forward-looking analysis on how AI and high-performance computing are reshaping digital infrastructure. He graduated with honors from Indiana University Bloomington with a B.A. in English Literature and Creative Writing and lives in southern New Hampshire with his family, remaining an active musician in his spare time.

You can connect with Matt via LinkedIn or email.
