Delta Electronics and the Rise of the AI Infrastructure Stack: How Chip-to-Grid Thinking Is Reshaping AI Data Center Design

Delta Electronics’ Kelly Gray outlines how AI infrastructure is driving a convergence of high-voltage DC power, liquid cooling, microgrids, digital twins, and chip-to-grid systems design as the industry races to build the next generation of AI factories.

Key Highlights

  • Delta Electronics is evolving from component supplier into system architect, focusing on power, thermal management, and AI infrastructure integration.
  • The industry is adopting 800 VDC architectures for improved efficiency, with Delta supporting rack-level and full data center deployments based on extensive operational experience.
  • Microgrids, on-site power generation, and solid-state transformers (SSTs) are becoming essential for overcoming grid limitations and ensuring reliable AI infrastructure expansion.
  • Digital twins powered by AI enable precise modeling of facility performance, reducing commissioning time and operational costs.
  • Modular and prefabricated infrastructure strategies are shortening deployment cycles, allowing hyperscalers to accelerate AI project timelines.

As the AI data center industry races toward higher rack densities, liquid cooling adoption, and entirely new power architectures, infrastructure vendors are increasingly being pulled out of narrow product silos and into system-level design conversations.

For Delta Electronics, that shift may represent the company’s most consequential moment yet.

On the latest episode of the DCF Show Podcast, Kelly Gray, Senior Director at Delta Electronics, joined Data Center Frontier Editor in Chief Matt Vincent to discuss how the company is positioning itself at the intersection of power, thermal management, microgrids, and AI infrastructure architecture.

What emerged from the conversation was a picture of a company no longer thinking simply in terms of components, but as an increasingly influential systems architect for the AI era.

“The two things that most impact the ability to roll out AI infrastructure at scale are power and thermal,” Gray said. “We finally find ourselves in this incredible position where we're right at the intersection of power, thermal control, and AI.”

From Power Supplies to AI Factory Architecture

Delta’s roots stretch back to the early 1970s, with decades of experience in power electronics and thermal systems. But Gray described a fundamental inversion now underway in data center design priorities.

For years, power systems were often treated as a downstream consideration, with infrastructure fitted into whatever space remained after compute requirements were finalized. AI has reversed that equation.

“Power was an afterthought,” Gray recalled. “Now we find ourselves in this position where we're the first thing people are thinking about.”

That transition aligns closely with broader changes reshaping hyperscale AI infrastructure. Rack densities are rapidly escalating toward 100 kW and beyond. Liquid cooling is becoming mandatory for advanced GPU deployments. And increasingly, the architecture of the facility itself is being designed around the electrical and thermal characteristics of accelerated computing systems.

Gray said Delta’s “chip-to-grid” strategy is central to the company’s competitive positioning in this environment.

“The way that we see ourselves and the way that we're able to vertically balance everything is, we have a lot of information about what's happening inside of the server,” he explained. “Everything outside of that server then has to be built and positioned and designed with the performance of that server in mind.”

That systems-level visibility increasingly pulls Delta into projects years before facilities are operational.

“We’re being pulled in as consultants at the system level,” Gray said. “We're uniquely positioned to intersect customers where they're going to be three years out when that facility is done.”

The 800 VDC Transition Becomes Real

One of the most significant infrastructure conversations now unfolding across the AI sector centers on high-voltage DC distribution.

Following extensive discussion at both Nvidia GTC and Open Compute Project events over the past year, 800 VDC architectures are rapidly moving from experimental discussions into active deployment planning.

Gray was unequivocal about the shift’s momentum.

“800 volt is a very real technology,” he said. “This is coming. It’s happening.”

According to Gray, Delta has spent several years helping lead the transition toward high-voltage DC architectures both inside racks and across entire facilities. The company is already supporting customers exploring rack-level 800 VDC deployments while simultaneously helping architect full DC-distribution data centers.

Importantly, Delta is not entering the transition without operational history. Gray noted the company has already accumulated years of overseas deployment experience and telemetry around DC distribution systems.

“We've been doing that for a lot of years overseas and have a lot of telemetry and a lot of background information to be able to share with customers,” he said.

The significance of this transition extends far beyond power conversion efficiency alone.

As AI clusters scale toward ever-larger GPU fabrics, traditional AC architectures face increasing challenges around conversion losses, copper requirements, and power density constraints. High-voltage DC distribution offers a path toward simplified electrical topologies and more efficient power delivery at extreme scale.
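The scaling behind that argument is straightforward to illustrate. The back-of-envelope sketch below (not from the article; the 100 kW rack load and 1 mΩ conductor path are assumed values chosen only to make the scaling concrete) shows why raising the distribution voltage cuts current and resistive loss: for a fixed power draw, current falls as I = P / V, and conductor loss falls as I²R.

```python
def distribution_loss(power_w: float, voltage_v: float,
                      resistance_ohm: float) -> tuple[float, float]:
    """Return (current in amps, resistive loss in watts) for a DC feed.

    For a fixed power delivery, current scales as I = P / V, and the
    resistive loss in the distribution path scales as I^2 * R.
    """
    current = power_w / voltage_v
    loss = current ** 2 * resistance_ohm
    return current, loss

rack_power = 100_000.0   # 100 kW rack load (assumed for illustration)
path_r = 0.001           # 1 milliohm conductor path (assumed)

# Compare a low-voltage DC busbar against 800 VDC distribution.
for v in (54.0, 800.0):
    i, loss = distribution_loss(rack_power, v, path_r)
    print(f"{v:>5.0f} VDC: {i:>7.1f} A, {loss:>8.1f} W lost in the path")
```

Under these assumed numbers, moving from 54 VDC to 800 VDC cuts current roughly 15-fold and resistive loss by a factor of about (800/54)² ≈ 220, which is also why the same loss budget can be met with proportionally less copper cross-section.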

Delta’s recently introduced 2.4 MW Cooling Distribution Unit (CDU), with pumps powered from an 800 VDC rail, represents one example of how the company is attempting to align both power and thermal systems around that future architecture.

“It just makes sense to have a product that's powered off of an 800 volt rail if that's what you intend to distribute inside of the data center,” Gray said.

Just as importantly, Gray suggested the industry’s voltage roadmap may not stop at 800 VDC.

“We're having conversations with folks about what even higher voltage distribution both in rack and in the data center looks like,” he said.

Microgrids, SSTs, and the Search for Power Certainty

If 800 VDC represents the internal electrical evolution of AI infrastructure, microgrids increasingly represent its external energy strategy.

Across the hyperscale sector, developers are confronting a harsh new reality: utility interconnection queues and grid limitations are now directly constraining AI deployment velocity.

For Gray, the future AI campus increasingly revolves around integrated microgrids, on-site generation, energy storage, and solid-state transformers (SSTs).

“This is the thing that gets me out of bed in the morning,” Gray said.

Delta expects to begin shipping solid oxide fuel cell (SOFC) systems for on-site power generation around mid-2027, according to Gray. The company sees microgrids as foundational infrastructure for AI facilities operating in power-constrained environments.

“You can't get enough power from the grid,” Gray said. “You have on-site power generation. All of that operates via SST to distribute 800 volts into the data center.”

The architecture Gray described closely mirrors broader “Bring Your Own Power” (BYOP) strategies increasingly emerging across the AI infrastructure sector. Under these models, data center operators combine on-site generation, battery energy storage systems, renewable integration, and intelligent load management into semi-autonomous energy ecosystems.

Delta’s vision extends beyond infrastructure resilience alone.

Gray repeatedly returned to the growing political and community scrutiny surrounding AI infrastructure expansion, particularly around concerns over power consumption, water usage, and environmental impact.

“What Delta really hopes to bring to the table here is an opportunity for AI data centers to be really incredibly good neighbors,” he said.

Gray argued that quiet, low-emission fuel cell generation combined with microgrid architectures could potentially allow excess power to flow back into local grids during periods of lower AI utilization, helping utilities and surrounding communities rather than competing against them.

That positioning reflects a growing recognition across the industry that AI infrastructure expansion may increasingly hinge not only on technical execution, but also on maintaining a durable social license to scale.

Omniverse and the Rise of AI-Driven Digital Twins

Delta is also betting heavily that AI itself can help solve many of the operational and engineering challenges AI infrastructure creates.

Gray described Nvidia Omniverse-powered digital twins as a foundational part of Delta’s data center strategy moving forward.

“Omniverse allows us the opportunity to set up digital twins to model behavior of our equipment in a facility before we go online with it,” he explained.

According to Gray, Delta can now model facility efficiency with remarkable precision before deployment, allowing operators to optimize workflows, commissioning, and operational performance before systems ever enter production.

“We're able to, in a lot of cases with about 99% accuracy, figure out what the efficiency of that facility is going to be,” Gray said.

The implications are significant.

As AI facilities become increasingly complex integrated systems — spanning advanced liquid cooling loops, dynamic power management, GPU fabrics, and automated operational controls — digital twins are rapidly evolving from visualization tools into operational necessities.

Gray also noted that Delta is applying similar digital twin methodologies across industrial automation and building automation systems, helping reduce commissioning complexity and after-sales service costs.

Modular Infrastructure and the Compression of Time

The pressure to accelerate AI deployments is also reshaping construction methodologies.

Hyperscalers and AI infrastructure developers are increasingly attempting to compress traditional 24-month construction timelines toward 12-month deployment cycles or faster.

Gray said Delta has been deeply involved in modular and prefabricated infrastructure strategies for years.

“Delta has sort of been at the forefront of that world,” he said. “I was doing skidded and modular solutions for some of the hyperscalers five or six years ago.”

By moving commissioning and systems integration work into controlled factory environments, Delta believes operators can reduce construction dependencies, improve quality control, and accelerate deployments.

“If you can get that work done in a facility where you've got some quiet and some schedules, you just get a much better result,” Gray said.

The modularization trend increasingly aligns with broader AI factory thinking now taking hold across the sector, where repeatability, manufacturability, and integrated systems delivery become as important as traditional data center engineering itself.

AI Infrastructure’s Next Phase

As the podcast concluded, Gray framed Delta’s long-term strategy around a combination of technological advancement and environmental stewardship.

“We are now and have always been about environmental stewardship,” he said. “We are going to have to have very serious conversations with communities about how we can do a better job of being stewards of the environment as we seek to expand AI.”

Perhaps most notably, Gray argued that AI itself may become one of the industry’s most important tools for solving the infrastructure challenges AI creates.

“One of the things we're most excited about is things like Omniverse and other solutions where we're being allowed to use AI to solve some of the problems that AI may be causing from an architecture perspective,” he said.

For Delta Electronics, the future of AI infrastructure appears increasingly defined not by isolated products, but by tightly integrated ecosystems spanning power, cooling, software, automation, and energy management.

In the emerging AI factory era, that systems-level integration may ultimately prove to be the industry’s defining competitive advantage.


At Data Center Frontier, we talk the industry talk and walk the industry walk. In that spirit, DCF Staff members may occasionally use AI tools to assist with content. Elements of this article were created with help from OpenAI's GPT-5.


About the Author

Matt Vincent

Matt Vincent is Editor in Chief of Data Center Frontier, where he leads editorial strategy and coverage focused on the infrastructure powering cloud computing, artificial intelligence, and the digital economy. A veteran B2B technology journalist with more than two decades of experience, Vincent specializes in the intersection of data centers, power, cooling, and emerging AI-era infrastructure. Since assuming the EIC role in 2023, he has helped guide Data Center Frontier’s coverage of the industry’s transition into the gigawatt-scale AI era, with a focus on hyperscale development, behind-the-meter power strategies, liquid cooling architectures, and the evolving energy demands of high-density compute, while working closely with the Digital Infrastructure Group at Endeavor Business Media to expand the brand’s analytical and multimedia footprint. Vincent also hosts The Data Center Frontier Show podcast, where he interviews industry leaders across hyperscale, colocation, utilities, and the data center supply chain to examine the technologies and business models reshaping digital infrastructure. He has also served as Head of Content for the Data Center Frontier Trends Summit since its inception. Before becoming Editor in Chief, he served in multiple senior editorial roles across Endeavor Business Media’s digital infrastructure portfolio, with coverage spanning data centers and hyperscale infrastructure, structured cabling and networking, telecom and datacom, IP physical security, and wireless and Pro AV markets. He began his career in 2005 within PennWell’s Advanced Technology Division and later held senior editorial positions supporting brands such as Cabling Installation & Maintenance, Lightwave Online, Broadband Technology Report, and Smart Buildings Technology.
Vincent is a frequent moderator, interviewer, and keynote speaker at industry events including the HPC Forum, where he delivers forward-looking analysis on how AI and high-performance computing are reshaping digital infrastructure. He graduated with honors from Indiana University Bloomington with a B.A. in English Literature and Creative Writing and lives in southern New Hampshire with his family, remaining an active musician in his spare time.

You can connect with Matt via LinkedIn or email.