Powering the AI Era: Innovations in Data Center Power Supply Design and Infrastructure

A new report explores how AI workloads are transforming data center power architectures—highlighting the rise of high-voltage DC distribution, wide-bandgap semiconductors, and intelligent protection systems like eFuses. Together, these technologies are redefining how operators meet escalating AI power demands with greater efficiency, reliability, and sustainability.
Oct. 10, 2025
7 min read

Key Highlights

  • AI workloads are increasing rack power densities beyond 100 kW, necessitating new power architectures and cooling strategies.
  • High-voltage DC distribution (600–800 V) reduces current, enabling air cooling and lighter infrastructure while addressing the physical and efficiency limits of traditional 48 V systems.
  • Integration of wide-bandgap semiconductors like SiC and GaN in power supplies offers higher efficiency, faster switching, and thermal stability, supporting compact and high-frequency designs.
  • Emerging technologies such as eFuses enable hot swapping, fault detection, and intelligent current sharing, improving operational resilience and reducing downtime.
  • Industry leaders including Microsoft, Meta, Google, and Amazon are trialing HVDC distribution and WBG devices, signaling a shift toward standardized, scalable power solutions for future AI data centers.

Recently, Data Center Frontier sister publication Electronic Design (ED) released an eBook curated by ED Senior Editor James Morra titled In the Age of AI, A New Playbook for Power Supply Design, with a collection of detailed technology articles focused on understanding the nuts and bolts of delivering power to AI-centric data centers.

This compendium explores how the surge in artificial intelligence (AI) workloads is transforming data center power architectures, and it offers guidance for addressing the resulting design challenges.

Breaking the Power Barrier

As GPUs like NVIDIA’s Blackwell B100 and B200 cross the 1,000-watt threshold per chip, rack power densities are soaring beyond 100 kW, and in some projections, approaching 1 MW per rack. This unprecedented demand is exposing the limits of legacy 12-volt and 48-volt architectures, where inefficient conversion stages and high I²R losses drive up both energy waste and cooling load.
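The I²R penalty of low-voltage distribution can be made concrete with a quick calculation. The sketch below uses a hypothetical 1 mΩ distribution path and a 100 kW rack; both values are illustrative assumptions, not figures from the report.

```python
def conduction_loss_w(power_w, voltage_v, resistance_ohm):
    """I²R loss in a distribution path delivering a given power."""
    current_a = power_w / voltage_v  # I = P / V
    return current_a ** 2 * resistance_ohm

RACK_POWER_W = 100_000   # 100 kW rack (illustrative)
PATH_RES_OHM = 0.001     # 1 mOhm busbar/cable path (illustrative)

loss_12v = conduction_loss_w(RACK_POWER_W, 12, PATH_RES_OHM)
loss_48v = conduction_loss_w(RACK_POWER_W, 48, PATH_RES_OHM)

# For the same delivered power, current scales as 1/V, so conduction
# loss scales as 1/V²: moving from 12 V to 48 V cuts it by (48/12)² = 16x.
print(f"12 V loss: {loss_12v/1000:.1f} kW, 48 V loss: {loss_48v/1000:.2f} kW")
```

The same 1/V² scaling is what makes the jump to 400–800 V distribution so attractive as rack power climbs.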

Powering the Next Era of AI Infrastructure

As AI data centers scale toward multi-megawatt clusters and rack densities approach one megawatt, traditional power architectures are straining under the load. The next frontier of efficiency lies in rethinking how electricity is distributed, converted, and protected inside the rack.

From high-voltage DC distribution to wide-bandgap semiconductors and intelligent eFuses, a new generation of technologies is reshaping power delivery for AI. The articles in this report drill down into five core themes driving that transformation:

Electronic Fuses (eFuses) for Power Protection

Texas Instruments and others are introducing 48-volt-rated eFuses that integrate current sensing, control, and switching into a single device. These allow hot-swapping of AI servers without dangerous inrush currents, enable intelligent fault detection, and can be paralleled to support rack loads exceeding 100 kW. The result: simplified PCB design, improved reliability, and robust support for AI’s steep and dynamic current requirements.
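A back-of-the-envelope sketch can illustrate the two mechanisms at work here, slew-rate-limited soft start and device paralleling. The bus capacitance, ramp times, and 100 A per-device limit below are illustrative assumptions, not TI datasheet figures.

```python
import math

def soft_start_inrush_a(bus_capacitance_f, bus_voltage_v, ramp_time_s):
    """Inrush into the bus capacitance when the eFuse ramps its
    output voltage linearly: I = C * dV/dt."""
    return bus_capacitance_f * bus_voltage_v / ramp_time_s

def efuses_needed(rack_power_w, bus_voltage_v, per_device_limit_a):
    """Paralleled devices required to carry a rack's steady-state current."""
    total_current_a = rack_power_w / bus_voltage_v
    return math.ceil(total_current_a / per_device_limit_a)

# Hot-swapping a server with 10 mF of input capacitance onto a 48 V bus:
abrupt = soft_start_inrush_a(0.010, 48, 0.0005)  # ~0.5 ms uncontrolled charge
gentle = soft_start_inrush_a(0.010, 48, 0.020)   # 20 ms controlled ramp

# Carrying a 100 kW rack at 48 V with hypothetical 100 A-rated devices:
devices = efuses_needed(100_000, 48, 100)
```

Stretching the ramp from half a millisecond to 20 ms cuts the inrush by 40x, which is why slew-rate control is what makes hot-swapping safe in the first place.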

The Shift from 48 V to 400–800 V High-Voltage DC (HVDC)

Traditional 48-volt power distribution is approaching its physical limits. Delivering 600 kW at 48 V requires roughly 12,500 amps—necessitating bulky, liquid-cooled busbars. By contrast, 800-volt distribution reduces current to about 750 amps, enabling air-cooled operation and lighter, more economical infrastructure. Hyperscalers are piloting “sidecar” power racks that use HVDC distribution to free up server rack space for compute and minimize double-conversion inefficiencies.
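The current figures quoted above follow directly from I = P/V, as a quick check confirms:

```python
def distribution_current_a(power_w, voltage_v):
    """Steady-state current needed to deliver a given power at a given bus voltage."""
    return power_w / voltage_v

i_48v = distribution_current_a(600_000, 48)    # 12,500 A: needs liquid-cooled busbars
i_800v = distribution_current_a(600_000, 800)  # 750 A: air-coolable conductors
```

A roughly 17x drop in current means proportionally smaller conductors and, per the I²R relationship, far lower conduction loss for the same copper.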

Disaggregation of Power and Compute

Vicor and others argue that decoupling compute from power infrastructure—using ±400 V DC distribution and liquid-cooled busbars—enables far denser AI racks, reaching up to 720 PFLOPS per 48U rack. This disaggregated architecture aligns with broader industry trends toward liquid cooling, higher GPU density, and Open Compute Project (OCP) ORv3 high-power rack standards—key enablers of next-generation AI supercomputers.

Wide-Bandgap Semiconductors (SiC and GaN) in Power Supplies

Silicon carbide (SiC) and gallium nitride (GaN) are rapidly supplanting traditional silicon MOSFETs in server power supply units. These wide-bandgap materials deliver higher efficiency, faster switching, and superior thermal stability. SiC enables high-voltage conversion (1,000 V+), while GaN supports compact, high-frequency topologies in the 100–650 V range. Hybrid Si/SiC/GaN PSUs (3–12 kW modules) are already appearing in reference designs from Infineon, Analog Devices, and others.
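One way to see why fast switching translates into compact designs: to first order, hard-switching loss is the energy dissipated per switching event times the switching frequency, while the magnetics (transformers, inductors) shrink roughly in proportion as frequency rises. The per-cycle energies below are illustrative assumptions, not vendor datasheet values.

```python
def switching_loss_w(energy_per_cycle_j, switching_freq_hz):
    """First-order hard-switching loss: energy dissipated per
    switching event times how often the event occurs."""
    return energy_per_cycle_j * switching_freq_hz

# Illustrative per-cycle switching energies (not datasheet values):
si_loss  = switching_loss_w(200e-6, 100e3)  # slower Si MOSFET at 100 kHz
gan_loss = switching_loss_w(20e-6, 500e3)   # faster GaN FET at 500 kHz
```

In this sketch the GaN stage runs five times faster, shrinking the magnetics accordingly, while still dissipating less in the switch than the silicon design.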

Strategic Industry Implications

Sustainability and reliability are emerging as key differentiators for these power technologies. With AI data centers projected to consume up to 10% of global electricity by 2030, efficiency gains from HVDC and wide-bandgap semiconductors will directly influence operating costs and ESG performance.

Meanwhile, eFuse-based hot-swapping, predictive fault management, and intelligent current sharing will reduce downtime risks in hyperscale AI clusters. These advances—along with HVDC sidecars and liquid-cooled busbars—will reshape rack layouts, cabling standards, and thermal strategies. Such design implications must now be addressed early in the data center planning phase.

Early adopters—including Microsoft, Meta, Google, and Amazon—are already trialing HVDC and WBG implementations, signaling an impending wave of standardization across hyperscale ecosystems. As hyperscalers converge on these technologies, adoption among smaller AI data centers will become faster and easier.

Why This Matters for Data Center Frontier Readers

For hyperscale operators, colocation providers, and infrastructure investors, this report highlights a fundamental shift underway in data center power delivery. AI workloads are not only driving unprecedented total power demand; they're also redefining how power must be distributed and managed within the rack itself.

Traditional efficiency gains of 5–10% are no longer sufficient, and incremental improvements to legacy architectures amount to stopgaps at best. To meet the escalating power demands of AI-centric data centers, wholesale architectural redesigns are now in motion.

In short, the foundational changes the industry must prepare for include:

  • Transitioning to 400–800 V DC power distribution.

  • Adopting GaN- and SiC-enabled power supply units as standard.

  • Investing in liquid-cooled, high-density racks with disaggregated power infrastructure.

  • Leveraging intelligent eFuse protection for operational resilience and uptime.

Ultimately, the report underscores that power electronics are becoming as strategically important as compute silicon in defining the performance, economics, and sustainability of AI-era data centers.

Looking Ahead: The New Power Frontier

This evolution in power architecture is central to the story Data Center Frontier continues to track across its coverage of AI infrastructure, grid modernization, and energy strategy. From HVDC sidecars and modular power blocks to liquid-cooled server designs and reimagined utility interconnections, the industry is entering an era where electrical and mechanical systems are co-evolving with compute.

For DCF readers, understanding these converging forces is essential: not just to anticipate technology adoption curves, but to navigate where investment, policy, and design are heading in the race to power AI at scale.

 

View the full eBook

 

At Data Center Frontier, we talk the industry talk and walk the industry walk. In that spirit, DCF Staff members may occasionally use AI tools to assist with content. Elements of this article were created with help from OpenAI's GPT-5.

 

Keep pace with the fast-moving world of data centers and cloud computing by connecting with Data Center Frontier on LinkedIn, following us on X/Twitter and Facebook, as well as on BlueSky, and signing up for our weekly newsletters using the form below.

About the Author

David Chernicoff


David Chernicoff is an experienced technologist and editorial content creator who sees the connections between technology and business, helping each get the most from the other and translating the needs of business to IT and of IT to business.