The Trillion-Dollar AIDC Boom Gets Real: Omdia Maps the Path From Megaclusters to Microgrids

At Data Center World, Omdia analysts argued that the AI infrastructure buildout is no longer just a hyperscaler story. It is becoming a broader, more power-hungry transformation that is reshaping everything from rack design to onsite generation and long-duration battery storage.
April 23, 2026
14 min read

Key Highlights

  • AI is now the primary driver of data center investment, with forecasts for 2030 increasing beyond $1.6 trillion due to rising AI usage and new market demands.
  • Demand for GPU clusters, web servers, and database chips is growing rapidly, leading to a more distributed AI infrastructure across various customer segments.
  • Power architecture is undergoing a revolutionary transformation, with high-voltage DC, onsite generation, and microgrids becoming central to future data center design.
  • Battery energy storage systems are increasingly integral, supporting not just backup but also grid interaction, peak shaving, and operational independence.
  • The physical size and power density of AI racks are expanding toward 2 MW configurations, challenging traditional cooling and electrical systems, and requiring innovative engineering solutions.

The AI data center buildout is getting bigger, denser, and more electrically complex than even many bullish observers expected.

That was the core message from Omdia’s Data Center World analyst summit, where Senior Director Vlad Galabov and Practice Lead Shen Wang laid out a view of the market that has grown more expansive in just the past year. What had been a large-scale infrastructure story is now, in Omdia’s telling, something closer to a full-stack industrial transition: hyperscalers are still leading, but enterprises, second-tier cloud providers, and new AI use cases are beginning to add demand on top of demand.

Omdia’s updated forecast reflects that shift. Galabov said the firm has now raised its 2030 projection for data center investment beyond the $1.6 trillion figure it showed a year ago, arguing that surging AI usage, expanding buyer classes, and the emergence of new power infrastructure categories have all forced a rethink.

“One of the reasons why we raised it is that people keep using more AI,” Galabov said. “And that just means more money, because we need to buy more GPUs to run the AI.”

That is the simple version. The more consequential one is that AI is no longer behaving like a contained technology cycle. It is spilling outward into adjacent infrastructure markets, including batteries, gas-fired onsite generation, and high-voltage DC power architectures that until recently sat well outside the mainstream data center conversation.

A Market Moving Faster Than the Forecasts

Galabov opened by revisiting the predictions Omdia made last year for 2030. On several fronts, he said, the market is already validating them faster than expected.

AI applications are becoming commonplace. AI has become the dominant driver of data center investment. Self-generation is no longer a fringe strategy. Even some of the rack-scale architecture concepts that once looked speculative now appear far more likely to materialize.

“We’ve raised our forecast,” Galabov said. “We raised it twice.”

The reason, he said, is not just continued hyperscaler aggression. It is also the widening of the customer base. In the first wave, the hyperscalers drove most of the investment. Now Omdia is seeing “tier 2 cloud service providers” and enterprises begin to deploy meaningful AI capacity of their own.

That broadening demand base was a repeated theme throughout the session. Galabov described a supply chain already being leaned on hard by hyperscalers, recounting a recent global tour in which he and colleagues met with OEMs, infrastructure vendors, and component makers across multiple countries.

“Pretty much everywhere I went, the hyperscalers had beat me to it,” he said. “They had been there a couple of weeks before and they said that, you know, what you need is to increase your capacity, your manufacturing capacity. Like we need more.”

That pressure is now visible in backlog growth, order patterns, and infrastructure planning assumptions.

AI Demand Is Becoming Additive

One of the strongest sections of Galabov’s presentation focused on why AI demand is climbing faster than many infrastructure models assumed.

The first driver is familiar: reasoning models consume far more tokens than earlier generations. But he argued the next stage is more important. Agentic AI, and eventually physical AI, do not replace earlier demand modes. They stack on top of them.

“This is not happening one after the other,” Galabov said. “This is additive. This is happening simultaneously.”

That additive effect matters because it spills beyond GPU clusters into surrounding server and storage demand. Galabov pointed to the secondary effects of AI agents interacting with web services and databases, creating additional demand for general-purpose compute as well as accelerators.

“Now it’s not just humans surfing the Internet, we’re also having a bot surfing the Internet,” he said. “Then as a result, there’s more demand for web servers. Now there’s not just humans accessing databases, we’re having my OpenClaw accessing databases. Now there’s more demand for database chips.”

That shift, he argued, is already showing up in Nvidia’s order mix. What was once an 80/20 split between hyperscalers and the rest of the market has, in Omdia’s telling, moved closer to 60/40 and could approach 50/50 within a few years.

If that happens, the AI boom becomes not just bigger, but more distributed.

The Rack Is Becoming the Story

The physical manifestation of all this, as Galabov framed it, is the accelerating power density of AI racks.

He walked through Omdia’s view of Nvidia’s roadmap, arguing that rack-level power is moving from roughly 20 kW in earlier DGX-style configurations toward 200 kW with Rubin-class systems, and then higher still with Rubin Ultra, Feynman, and follow-on generations. By the end of the decade, he suggested, the industry could plausibly be looking at 2 MW racks.

“That really does mean that what we’ve seen so far is just the very beginning of a transition,” he said.

The significance of that statement is hard to overstate. A move from tens of kilowatts per rack to hundreds of kilowatts, and eventually toward the megawatt threshold, does not simply require more cooling. It begins to invalidate portions of the traditional data center electrical stack.
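The scale of that jump is easiest to see in raw current. As a back-of-envelope sketch (rack powers are taken from the roadmap above; the 54 VDC figure is an assumed OCP-style busbar voltage, not an Omdia number):

```python
# Back-of-envelope current draw per rack at two bus voltages (I = P / V).
# 54 VDC is an assumed low-voltage busbar level; 800 VDC is the native
# HVDC level discussed elsewhere in the article. Illustrative only.

def amps(power_w: float, volts: float) -> float:
    """Return the current in amps a load draws at a given DC voltage."""
    return power_w / volts

for rack_kw in (20, 200, 1000, 2000):
    watts = rack_kw * 1000
    print(f"{rack_kw:>5} kW rack: {amps(watts, 54):>8,.0f} A at 54 VDC | "
          f"{amps(watts, 800):>7,.0f} A at 800 VDC")
```

At 2 MW, a 54 V bus would have to carry roughly 37,000 A. That is the arithmetic behind the claim: megawatt-class racks push designers toward higher distribution voltages because conductor sizing at legacy voltages becomes physically unworkable.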

Galabov acknowledged that some of the next-generation systems will be difficult to deploy, especially in their early iterations. But he argued the industry is already learning how to engineer through those constraints, much as it did with Blackwell-era systems.

“I am quite optimistic about this being able to happen,” he said.

Monetization Is Catching Up With Usage

Omdia tied its infrastructure optimism not just to hardware roadmaps but to revenue growth in frontier AI services.

Galabov said Omdia had raised its monetization forecast several times this year alone, noting that the major model providers have moved from making roughly $14 billion in a year to generating more than that in a month. He used that growth to argue that infrastructure demand is not just speculative overbuilding. It is increasingly tethered to real and rising consumption.

“The truth is that really, at the moment, everywhere where we could, as analysts, we have taken a pessimistic view,” he said. “This is the pessimistic view.”

That line captured the tone of the session. Omdia’s analysts were not trying to sound restrained for effect. Their point was that even their conservative cases still imply a much larger market, faster infrastructure turnover, and a more radical reconfiguration of power systems than the industry was talking about even a year ago.

Power Is No Longer a Supporting Topic

If Galabov laid out the scale of the AI build, Shen Wang made the case that power architecture is now evolving just as quickly as compute.

Compared with earlier Omdia summits, Wang said, this year’s discussion is fundamentally different because the story has shifted from incremental change to structural redesign.

“Power side, it’s not just evolving,” Wang said. “It’s a revolutionary transformation. It’s crazy.”

His presentation centered on four interlocking developments: high-voltage DC, onsite power generation, microgrids paired with battery energy storage systems, and a rethinking of backup power architecture.

The key idea was that traditional data center power design assumed a relatively straightforward relationship with the grid. The facility would secure utility power, convert and distribute it through a familiar chain of transformers, switchgear, UPS systems, and diesel backup, and then deliver it to the IT load.

That logic is now breaking down under AI conditions.

Onsite generation is becoming a priority because large grid allocations are increasingly hard to secure. Gas engines and gas turbines are the near-term workhorses, Wang said, while solid oxide fuel cells are emerging as another stable onsite option. Small modular reactors remain further out, both for maturity and community-acceptance reasons.

At the same time, onsite generation creates its own balancing challenge. Most generation assets do not follow load instantly or elegantly. That makes batteries and microgrid controls central rather than optional.

“If you have onsite power generation, this will bring a lot of new challenges,” Wang said. “So you need some battery to smooth out the fluctuation between your demand and supply.”
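Wang's balancing point can be sketched as a toy dispatch loop: the turbine holds a flat output while the battery absorbs whatever the load does not. All figures below are invented for illustration, not Omdia data.

```python
# Toy microgrid balance: a gas turbine runs at constant output while a
# battery charges or discharges to cover a fluctuating AI load.
# All figures are illustrative assumptions.

GEN_MW = 50.0                              # turbine held at steady output
HOURLY_LOAD_MW = [42, 58, 45, 61, 39, 55]  # swinging AI training load

soc_mwh = 20.0  # battery state of charge at the start
for load in HOURLY_LOAD_MW:
    delta = GEN_MW - load            # surplus charges, deficit discharges
    soc_mwh += delta                 # 1-hour intervals: MW maps to MWh directly
    action = "charges" if delta > 0 else "discharges"
    print(f"load {load:>2} MW: battery {action} {abs(delta):.0f} MWh "
          f"(SoC {soc_mwh:.0f} MWh)")
```

The design point the sketch illustrates: the generator never has to chase the load, but only because the battery is sized to ride every swing, which is why Wang treats storage as structural rather than optional.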

BESS Moves Toward the Center

That is where Wang sees one of the biggest new markets opening.

He pointed to rapidly rising demand for battery energy storage systems, not only for backup but also for grid interaction, peak shaving, and supporting behind-the-meter generation. What once might have been thought of as a supplemental asset is becoming part of the core electrical design.

For 2026, Wang said Omdia expects roughly 15 to 20 GWh of battery systems to be delivered into this market, with order books already pushing the forecast upward.

“The order book is crazy,” he said.

He also pointed to duration creep as part of the story. Two-hour systems were once typical in many modeling assumptions. Now the market is moving toward six- and eight-hour systems in scenarios where operators want greater independence from the grid. Wang cited Google’s reported interest in a 100-hour battery system for one data center as an example of how far the conversation is stretching.
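Those duration numbers translate directly into energy capacity. A quick sizing sketch, where only the durations come from the discussion above and the 100 MW facility size is an assumption for illustration:

```python
# Energy capacity needed to carry an assumed 100 MW facility for the
# durations mentioned above (energy = power x hours).

FACILITY_MW = 100  # assumed campus size, not an Omdia figure

for hours in (2, 6, 8, 100):
    mwh = FACILITY_MW * hours
    print(f"{hours:>3} h at {FACILITY_MW} MW -> {mwh:>6,} MWh ({mwh / 1000:.1f} GWh)")
```

A single 100-hour system at that assumed scale works out to 10 GWh, a large share of the 15 to 20 GWh Omdia expects the entire market to absorb in 2026, which is what makes the Google example so striking.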

That does not mean every data center is about to become an island. It does mean the industry is beginning to design for optionality, resilience, and economic dispatch in ways that look more like energy infrastructure than legacy mission-critical facilities.

HVDC Moves From Theory Toward Deployment

Perhaps the most technical, and important, portion of Wang’s presentation dealt with high-voltage direct current.

He described a progression from traditional AC distribution to Open Compute-style architectures, then to retrofit 800 VDC or ±400 VDC systems, and ultimately to native 800 VDC facilities designed from the ground up for future AI racks.


The reason is straightforward. As rack densities rise, DC architectures promise fewer conversion stages, lower losses, less copper, and a cleaner path to delivering very high power to the rack.
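The "fewer conversion stages" argument is, at bottom, multiplication of per-stage efficiencies. A sketch with assumed figures (stage counts and efficiency numbers are illustrative, not vendor-published data):

```python
# Compare end-to-end efficiency of a conventional AC power chain against a
# shorter native 800 VDC chain by multiplying per-stage efficiencies.
# All stage efficiencies are illustrative assumptions.

from math import prod

ac_chain = {                          # a typical legacy path (assumed)
    "transformer": 0.99,
    "UPS (double conversion)": 0.94,
    "PDU": 0.985,
    "rack PSU (AC->DC)": 0.95,
}
dc_chain = {                          # a native 800 VDC path (assumed)
    "solid-state transformer": 0.985,
    "800 VDC busway": 0.995,
    "rack DC-DC": 0.975,
}

for name, chain in (("AC chain", ac_chain), ("800 VDC chain", dc_chain)):
    eff = prod(chain.values())
    print(f"{name}: {eff:.1%} end-to-end across {len(chain)} stages")
```

Under these assumed numbers the legacy chain lands near 87% end-to-end while the shorter DC chain lands near 96%. The exact figures vary by design, but the structural advantage of removing conversion stages survives any reasonable choice of inputs, and at megawatt rack scale each percentage point of loss is real heat and real money.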

“When you think about a future project in making the 800 VDC, you have the freedom to decide or choose what is needed for your future IT,” Wang said. “You don’t have to think about what is IT today. You have to think about what is IT in the next generation and the generation afterwards.”

That is a crucial framing. The AI data center is increasingly being designed not around today’s cluster, but around the next two or three cluster generations. In that environment, a native 800 VDC design begins to look less exotic and more like a hedge against rapid obsolescence.

Wang said Omdia expects the first meaningful shipments of next-generation HVDC-related systems and even solid-state transformer deployments to begin appearing this year in pilot and early commercial settings.

Backup Power Gets Rewritten Too

Wang also argued that the old backup hierarchy is being compressed and rethought.

Historically, the standard formula was clear: UPS for immediate short-duration support, then diesel generators for longer outages. Some architectures added another layer with battery backup units closer to the rack.

Under AI conditions, that stack is starting to change. Supercapacitors, BBUs, BESS systems, and onsite generation are being recombined in different ways depending on architecture and workload density. In the most advanced AI facilities, Wang said, the long-term ideal may shift toward a simpler model built around very fast short-duration ride-through plus large batteries integrated with onsite generation.

“Our ideal expectation for 2030 in the AI data centers, most advanced AI data centers, would be CPU plus BESS,” he said, referring to capacitor-based short-duration support alongside battery energy storage.

UPS, he stressed, is not disappearing. But it may no longer occupy the same default role in every design.

The Constraint Is Still the Same

For all the detail around racks, batteries, and HVDC, the broader takeaway from the Omdia summit was more familiar: the AI buildout remains constrained by power, supply chain limits, and capital discipline.

Galabov made that explicit in his company-by-company investment discussion, arguing that future winners will increasingly be distinguished not just by demand, but by whether they have access to power, the ability to fund aggressive build programs, and business models capable of monetizing all that capacity.

“Do you have access to power?” he asked. “Do you have access to continuous funding? Do you have a business model that you can actually monetize as you continue your investment?”

That is a sharper version of the question hanging over much of the data center sector right now. The AI opportunity is growing. The monetization is getting more tangible. The technology roadmap is moving fast. But none of it matters unless infrastructure can be delivered at the pace the models now demand.

What Omdia argued in this session is that the industry is no longer just trying to add more megawatts. It is redesigning the electrical and physical logic of the data center to keep up.

That is what makes this moment different. The AI boom is no longer simply scaling the data center business. It is beginning to change what a data center is.

 

At Data Center Frontier, we talk the industry talk and walk the industry walk. In that spirit, DCF Staff members may occasionally use AI tools to assist with content. Elements of this article were created with help from OpenAI's GPT5.

 

About the Author

Matt Vincent

Matt Vincent is Editor in Chief of Data Center Frontier, where he leads editorial strategy and coverage focused on the infrastructure powering cloud computing, artificial intelligence, and the digital economy. A veteran B2B technology journalist with more than two decades of experience, Vincent specializes in the intersection of data centers, power, cooling, and emerging AI-era infrastructure. Since assuming the EIC role in 2023, he has helped guide Data Center Frontier’s coverage of the industry’s transition into the gigawatt-scale AI era, with a focus on hyperscale development, behind-the-meter power strategies, liquid cooling architectures, and the evolving energy demands of high-density compute, while working closely with the Digital Infrastructure Group at Endeavor Business Media to expand the brand’s analytical and multimedia footprint.

Vincent also hosts The Data Center Frontier Show podcast, where he interviews industry leaders across hyperscale, colocation, utilities, and the data center supply chain to examine the technologies and business models reshaping digital infrastructure. He has served as Head of Content for the Data Center Frontier Trends Summit since its inception.

Before becoming Editor in Chief, he served in multiple senior editorial roles across Endeavor Business Media’s digital infrastructure portfolio, with coverage spanning data centers and hyperscale infrastructure, structured cabling and networking, telecom and datacom, IP physical security, and wireless and Pro AV markets. He began his career in 2005 within PennWell’s Advanced Technology Division and later held senior editorial positions supporting brands such as Cabling Installation & Maintenance, Lightwave Online, Broadband Technology Report, and Smart Buildings Technology.
Vincent is a frequent moderator, interviewer, and keynote speaker at industry events including the HPC Forum, where he delivers forward-looking analysis on how AI and high-performance computing are reshaping digital infrastructure. He graduated with honors from Indiana University Bloomington with a B.A. in English Literature and Creative Writing and lives in southern New Hampshire with his family, remaining an active musician in his spare time.

You can connect with Matt via LinkedIn or email.

