AWS Scales AI Infrastructure Across Data Centers, Power, and Networks

As 2025 comes to a close, AWS is scaling AI infrastructure across data centers, customer sites, and global networks: rolling out hybrid AI Factories, committing $15 billion to new hyperscale capacity in Indiana, and deploying the Fastnet subsea cable to support the next wave of AI compute.
Dec. 23, 2025
11 min read

Key Highlights

  • AWS launched AI Factories to deliver hyperscale AI infrastructure directly into customer data centers, supporting regulated and sovereign environments with managed security and operations.
  • The $15 billion Indiana expansion aims to create large, power-dense facilities optimized for AI training and inference, with a focus on local workforce development and utility partnerships.
  • Fastnet, a new transatlantic subsea cable, will provide more than 320 terabits per second of capacity, improving resilience and routing diversity for global AI workloads while funding regional development at its landing points.

There has been a significant acceleration in AWS’s infrastructure strategy in late 2025, marked by a set of announcements that together illuminate how the company is positioning itself for the next phase of AI-driven demand.

Across hybrid AI deployment, large-scale data center expansion, and global network investment, AWS is signaling a more vertically integrated approach to delivering AI compute: one that spans customer sites, regional hyperscale campuses, and transoceanic connectivity.

AWS AI Factories: Bringing Hyperscale AI Infrastructure to Customer Sites

In early December, AWS officially launched AWS AI Factories, a new offering designed to bring AWS-managed, hyperscale AI computing directly into enterprise and government-owned data centers. The move reflects a growing recognition that not all AI workloads (particularly in regulated or sovereign environments) can be served exclusively from public cloud regions.

Rather than asking customers to design and build complex AI infrastructure from the ground up in a process that can take years, AWS deploys a fully integrated AI stack on-site.

That stack includes high-performance AI accelerators, spanning the latest NVIDIA GPUs alongside AWS’s own Trainium chips; low-latency networking and storage optimized for large-scale model training and inference; and deep integration with AWS’s AI and machine learning services, including Amazon Bedrock and SageMaker.
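To make the service-integration point concrete, here is a minimal, illustrative sketch of what a model invocation through Amazon Bedrock's runtime API looks like. The model ID and prompt are hypothetical placeholders, and the actual call (shown in comments) requires boto3 and AWS credentials; the request shape is the same whether the capacity sits in a public region or an on-site deployment.

```python
import json

# Illustrative sketch only: the request shape of a Bedrock model
# invocation. The model ID and prompt below are hypothetical placeholders.

model_id = "example.placeholder-model-v1"   # hypothetical model ID
body = json.dumps({
    "prompt": "Summarize our Q4 capacity plan.",
    "max_tokens": 256,
})

# With boto3 installed and credentials configured, the call would be:
# import boto3
# runtime = boto3.client("bedrock-runtime")
# response = runtime.invoke_model(modelId=model_id, body=body)
# result = json.loads(response["body"].read())

print(json.loads(body)["max_tokens"])  # prints 256
```

The point of the AI Factory model is that this calling convention does not change when the hardware moves on-premises; only the physical location of the accelerators does.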

Just as importantly, AWS retains responsibility for security, systems management, and ongoing operations, effectively extending its cloud operating model into customer facilities.

In practice, AWS AI Factories function as private AWS environments, purpose-built for organizations with strict data sovereignty, compliance, or residency requirements. Governments, defense contractors, and highly regulated enterprises that cannot rely solely on shared public cloud infrastructure now have a path to access cutting-edge AI performance while keeping sensitive data within their own physical and regulatory boundaries.

From Cloud Service to Infrastructure Partner: Why Embedded AI Is Becoming a Strategic Imperative

The strategic importance of these embedded AI solutions lies in how directly they address the practical constraints now shaping enterprise and public-sector AI adoption.

For organizations with existing data center capacity, AWS AI Factories dramatically shorten the path to deployment by bypassing multi-year procurement, design, and construction cycles, allowing customers to begin training and running models far more quickly than traditional buildouts would permit.

At the same time, the offering marks a meaningful expansion of AWS’s role in the infrastructure stack.

Rather than operating solely as a public cloud provider, AWS is positioning itself as a hybrid infrastructure partner: one capable of delivering private, on-premises AI environments that remain tightly integrated with its public cloud services.

This shift also carries competitive significance, as AWS responds to rival platforms promoting hybrid and edge-based AI deployments, including Google Cloud’s Anthos AI and Microsoft’s Azure Stack–based AI offerings.

Meeting AI Demand Beyond the Public Cloud

For governments and regulated industries, the appeal is even more direct. AI Factories provide a turnkey path to world-class AI compute while preserving local control over data, addressing sovereignty, residency, and compliance requirements that have limited reliance on shared public cloud regions.

Beneath all of this is a deepening alignment with NVIDIA’s GPU ecosystem, reinforcing AWS’s access to, and operational integration with, the hardware platforms driving today’s most demanding AI workloads.

As enterprise and public-sector AI adoption accelerates, the primary bottleneck is shifting away from model availability and toward the physical realities of where and how those models run. AWS is meeting that challenge with a hybrid approach that blends managed cloud expertise with local deployment: a pattern increasingly favored by organizations seeking performance and control without full dependence on centralized cloud infrastructure.

AWS’s Massive Data Center Expansion: $15 Billion More in Indiana

Amazon has announced a $15 billion expansion of AWS data center campuses in Northern Indiana, a buildout expected to add roughly 2.4 gigawatts of capacity and create approximately 1,100 new direct jobs, alongside thousands more across the regional supply chain.

The announcement builds on an earlier $11 billion investment in St. Joseph County and underscores AWS’s continued push to secure large, power-rich sites capable of supporting the next wave of AI-driven demand.

At that scale, the Indiana expansion is clearly designed for more than incremental cloud growth. A multi-gigawatt footprint points to facilities optimized for AI training and inference clusters, where rack densities and power consumption far exceed those of traditional enterprise or cloud workloads.
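For a rough sense of what a 2.4-gigawatt footprint means in rack terms, here is an illustrative calculation. The per-rack figure is an assumption for high-density AI hardware, not an AWS number, and the math ignores cooling and other facility overhead:

```python
# Rough, illustrative arithmetic only: what a 2.4 GW campus could mean
# in rack counts. The per-rack draw is an assumption, not an AWS figure,
# and cooling/facility overhead is ignored.

CAMPUS_POWER_W = 2.4e9        # 2.4 gigawatts (announced capacity)
AI_RACK_POWER_W = 100e3       # assumed ~100 kW per high-density AI rack

racks = CAMPUS_POWER_W / AI_RACK_POWER_W
print(f"~{racks:,.0f} high-density racks")  # prints ~24,000
```

Even under conservative assumptions, the result is tens of thousands of racks at densities an order of magnitude beyond traditional enterprise deployments.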

The project reflects AWS’s expectation that AI compute demand will remain both sustained and highly concentrated, requiring campuses purpose-built for extreme power density and long-duration operation.

Equally significant is the structure of AWS’s engagement with local utilities. The expansion includes a formal framework with NIPSCO under which Amazon will fund and construct necessary grid infrastructure while insulating local ratepayers from added costs.

As data center developments increasingly strain regional power systems, this model of pairing private capital with utility coordination has become critical to maintaining reliability while addressing community concerns around equity and cost allocation.

AWS has also emphasized workforce development as a core component of the investment. Planned initiatives include technical training programs spanning fiber installation, data center operations, and cloud systems, aimed at building a durable local talent pipeline that extends beyond Amazon’s own facilities.

In that sense, the Indiana expansion is being positioned not simply as a capacity play, but as a longer-term economic development engine tied to the region’s role in the AI and cloud infrastructure ecosystem.

Why AWS Is Scaling in Indiana

The logic behind AWS’s Indiana expansion becomes clearer when viewed through the lens of AI compute demand.

A multi-gigawatt buildout points to provisioning for large, power-intensive AI training clusters rather than incremental cloud growth, an approach consistent with AWS’s earlier development of Project Rainier, one of the world’s largest AI compute sites, also located in rural Indiana.

As training workloads scale in both size and duration, access to abundant, reliable power has become a primary determinant of site selection. Just as important is the development model AWS is applying. By pairing privately funded grid upgrades with workforce investment and explicit protections for local ratepayers, AWS is advancing a framework that addresses the political and regulatory scrutiny now facing hyperscale data center projects nationwide.

That combination of power infrastructure, talent development, and community engagement is increasingly becoming the template for how large AI campuses are permitted and accepted at the local level.

Finally, the expansion reflects a broader policy environment favoring domestic infrastructure investment. Federal and state incentives aimed at strengthening U.S. leadership in advanced computing and digital infrastructure have helped create conditions in which projects of this scale are viable, reinforcing the United States’ role as a global center for AI training capacity.

Fastnet: A Dedicated Transatlantic Subsea Cable for Cloud and AI Traffic

AWS has also revealed Fastnet, a new transatlantic subsea fiber optic cable connecting Maryland in the United States with County Cork, Ireland.

Targeted for completion in 2028, Fastnet is designed to deliver more than 320 terabits per second of capacity, dramatically expanding the bandwidth available for cloud and AI traffic moving between North America and Europe. Beyond raw throughput, the system introduces additional routing diversity across the Atlantic, reducing reliance on congested or single-point transatlantic paths while improving resilience and performance for latency-sensitive workloads.
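To put more than 320 terabits per second in context, a back-of-the-envelope calculation shows how quickly bulk data could move at that rate. This is illustrative only: the cable is shared infrastructure, and no single workload would see its full design capacity.

```python
# Illustrative arithmetic only: what 320 Tbps of design capacity means
# for bulk data movement. Real-world throughput for any single workload
# would be far lower (the cable is shared and runs below design capacity).

CABLE_CAPACITY_BPS = 320e12   # 320 terabits per second
DATASET_BYTES = 1e15          # a hypothetical 1-petabyte dataset

dataset_bits = DATASET_BYTES * 8
seconds = dataset_bits / CABLE_CAPACITY_BPS
print(f"Transfer time at full design capacity: {seconds:.0f} seconds")  # 25
```

At full design capacity, a petabyte crosses the Atlantic in under half a minute, which is the kind of headroom that makes multi-region replication and distributed training practical at hyperscale.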

Far from a standalone networking project, Fastnet is for AWS a strategic extension of its global infrastructure footprint. As AI workloads increasingly span regions, high-capacity, low-latency fiber has become a foundational requirement, supporting distributed model training, multi-region redundancy, and globally collaborative development.

By integrating Fastnet directly into its private global backbone, AWS gains end-to-end control over traffic routing and optimization, reducing dependence on public internet paths and improving consistency for large-scale cloud and AI operations.

The project also reflects AWS’s effort to align large infrastructure investments with local economic engagement. The company has announced the creation of Community Benefit Funds in both Maryland and County Cork, positioning the cable not just as a global connectivity asset but as part of a broader commitment to regional development at its landing points.

Subsea connectivity has become a key differentiator among hyperscale cloud providers, and AWS’s move mirrors similar investments by Meta, Google, and Microsoft.

For AI workloads in particular, where distributed data processing, replication, and real-time collaboration are increasingly central, transoceanic fiber capacity is now as strategically important as the scale and location of the data center campuses themselves.

Broader AWS Trends and Industry Context

AWS’s recent announcements arrive amid an accelerating AI compute arms race, as U.S. hyperscalers move aggressively to secure the power, space, and network capacity required to support next-generation workloads.

Across the industry, the underlying challenges are converging: GPU- and accelerator-driven systems are pushing rack densities well beyond traditional thresholds; AI training clusters are scaling into the gigawatt range; and the resulting environmental and grid impacts are drawing heightened regulatory and community scrutiny.

Against that backdrop, AWS’s approach stands out for its breadth and integration. By simultaneously expanding hyperscale data center capacity, introducing hybrid on-premises AI Factories, and investing in private global networking infrastructure, AWS is building a flexible platform capable of meeting AI demand wherever it emerges.

The result is an infrastructure strategy that spans centralized cloud regions, customer-controlled environments, and transcontinental connectivity, positioning AWS to serve everyone from fast-moving startups to regulated enterprises and sovereign governments as AI reshapes the physical and operational realities of digital infrastructure.

 

At Data Center Frontier, we talk the industry talk and walk the industry walk. In that spirit, DCF Staff members may occasionally use AI tools to assist with content. Elements of this article were created with help from OpenAI's GPT5.

 

About the Author

David Chernicoff


David Chernicoff is an experienced technologist and editorial content creator who sees the connections between technology and business, works out how to get the most from both, and can explain the needs of business to IT and of IT to business.

Matt Vincent

A B2B technology journalist and editor with more than two decades of experience, Matt Vincent is Editor in Chief of Data Center Frontier.
