OpenAI and Oracle’s $300B Stargate Deal: Building AI’s National-Scale Infrastructure
Key Highlights
- OpenAI’s $300 billion agreement with Oracle aims to develop 4.5 GW of Stargate data center capacity, emphasizing a power-first, regional expansion strategy in the U.S.
- Oracle’s investments include a push toward more than 100 data center regions globally, GPU superclusters capable of scaling to 131,072 NVIDIA GPUs, and expanded sovereign cloud offerings to meet regulatory demands.
- The multicloud partnership with Microsoft and Google enables flexible AI workload deployment, supporting large models and inference tasks across diverse cloud environments.
- Oracle’s aggressive CAPEX plans, including roughly $10 billion earmarked for data center expansion in 2025, focus on expanding infrastructure, upgrading hardware, and establishing regional hubs to support AI and high-performance computing needs.
- The strategic focus on energy-efficient, environmentally conscious data centers with liquid cooling and on-site power generation aims to meet the growing power demands of frontier AI workloads.
In 2025, OpenAI publicly described Stargate as its long-term AI infrastructure platform, aimed at enabling very large-scale, next-generation data centers. In a recent update, OpenAI indicated that it has entered into a reported $300 billion agreement with Oracle to develop 4.5 gigawatts (GW) of additional Stargate data center capacity in the U.S.
This deal positions Oracle not just as a capacity extender but as a core partner in building the power-hungry infrastructure required for frontier AI workloads. The announcement highlights job creation and alignment with U.S. industrial policy, signaling that AI infrastructure is increasingly viewed as critical national-scale infrastructure.
The Early Signals: Oracle Joins the AI Race
This partnership was foreshadowed in June 2024, when Oracle announced that OpenAI had selected Oracle Cloud Infrastructure (OCI) to extend the Microsoft Azure AI platform, effectively positioning OCI as supplemental capacity behind Azure for OpenAI workloads. In practical terms, Microsoft remained the primary platform, while OCI provided a multicloud extension through Azure’s control plane. Oracle’s announcement explicitly framed it as:
“OpenAI selects Oracle Cloud Infrastructure to extend Microsoft Azure AI platform.”
This set market expectations that OpenAI could tap OCI when Azure required extra capacity, signaling the early operationalization of a cross-cloud strategy for large-scale AI deployments.
Throughout 2024–2025, Oracle and Microsoft expanded Oracle Database@Azure into additional regions, operationalizing what was previously considered a highly unusual Azure–Oracle symmetry and helping normalize cross-cloud deployment patterns for enterprise customers.
Rumors, Reports, and Market Reaction
In recent days, multiple outlets have reported that OpenAI signed a multi-year cloud and compute deal with Oracle valued at roughly $300 billion over five years, often linked to the Stargate timeline. While some reports frame this as potentially the largest cloud contract ever, no binding SEC-grade disclosure has been published. Until formal terms are released, the prudent phrasing is “reported, not fully confirmed.”
Media coverage of these reports has had a pronounced effect on Oracle’s stock price. Headlines highlighted that Oracle Chairman and CTO Larry Ellison briefly joined Elon Musk in the “richest person in the world” conversation, with his net worth estimated at nearly $400 billion, though these figures fluctuate with market activity and should be treated as approximations.
Press and analyst notes indicate that Oracle’s Remaining Performance Obligations (RPO) surged, with mid-hundreds-of-billions figures being cited as the company’s AI pipeline solidifies. This helps explain Oracle’s historic one-day stock gain of more than 36%, though the exact distribution of this backlog across specific customers has not been disclosed. As always, investor speculation and media reporting influence stock price movements, so caution is warranted when interpreting these figures.
Evolving Microsoft–OpenAI Dynamics
According to Reuters, Microsoft and OpenAI have updated their relationship via a non-binding agreement or memorandum of understanding (MoU). These changes may loosen exclusivity and allow OpenAI to diversify cloud partnerships, which makes the reported Oracle collaboration more plausible. The arrangements are still evolving and remain subject to regulatory and governance review; discussions have been under way for some time, with indications that non-Microsoft vendors were beginning to see investment opportunities linked to Stargate-related deals.
As of June 2024, OpenAI could already run workloads through Azure into OCI, enabling early cross-cloud flexibility. With the recent announcement, OpenAI and Oracle are reportedly partnering to develop 4.5 GW of Stargate data center capacity in the U.S., with a projected investment of around $300 billion, primarily for infrastructure development. This multi-year agreement is expected to start later this decade but has not yet been formally detailed.
Why Oracle, and Why Now?
Oracle Cloud Infrastructure (OCI) has earned a reputation for GPU-dense, high-bandwidth “Supercluster” fabric, optimized for bare-metal performance and low-latency interconnects that scale to very large GPU clusters. In June 2025, Oracle highlighted its GB200 NVL72 systems and OCI Superclusters, capable of scaling up to 131,072 NVIDIA GPUs, making it well-suited for single-job and multi-tenant frontier AI training and inference workloads. For OpenAI, this ability to deploy petaflops-to-exaflops of tightly coupled GPU compute with high-bandwidth interconnects and predictable economics is a decisive advantage.
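To put the “petaflops-to-exaflops” framing in rough numbers, here is a minimal back-of-envelope sketch in Python. Only the 131,072-GPU ceiling comes from the text above; the per-GPU throughput and utilization values are illustrative assumptions, not Oracle or NVIDIA specifications.

```python
# Back-of-envelope sketch, not Oracle's published math: rough aggregate
# throughput of a 131,072-GPU Supercluster under assumed per-GPU numbers.
GPU_COUNT = 131_072        # maximum OCI Supercluster scale cited above
PER_GPU_PFLOPS = 5.0       # assumed dense FP8 petaFLOPS per Blackwell-class GPU (illustrative)
UTILIZATION = 0.40         # assumed sustained model FLOPS utilization (illustrative)

peak_exaflops = GPU_COUNT * PER_GPU_PFLOPS / 1_000       # 1,000 PFLOPS = 1 exaFLOPS
sustained_exaflops = peak_exaflops * UTILIZATION

print(f"Peak:      ~{peak_exaflops:,.0f} FP8 exaFLOPS")
print(f"Sustained: ~{sustained_exaflops:,.0f} FP8 exaFLOPS at {UTILIZATION:.0%} utilization")
```

Even at conservative sustained-utilization assumptions, the arithmetic lands firmly in exaFLOPS territory, which is the scale frontier training runs now demand.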
To compete with hyperscaler giants, Oracle has pursued multicloud operational integration. Oracle Database services have been embedded inside Azure (Oracle Database@Azure) and expanded across Google Cloud regions. This approach positions OCI as an “and” rather than an “or” to incumbent hyperscalers, providing OpenAI—and other large AI customers—a path to burst or federate workloads across clouds without abandoning existing control planes.
Oracle’s Global Cloud Strategy
Oracle’s distributed cloud strategy spans multiple environments: public cloud regions, Government Cloud realms (U.S., U.K., Australia), EU Sovereign Cloud separated logically and physically from commercial realms, and Cloud@Customer on-premises deployments. These sovereignty-aware topologies are particularly valuable for navigating the complex regulatory and data localization requirements across industries and regions.
Oracle has emphasized that its EU Sovereign Cloud is fully operational, with cryptographic and operational isolation from commercial regions—a capability likely to become increasingly important as EU regulations tighten.
Aggressive Data Center Expansion
Oracle was aggressively building new data centers and campuses even before the recent Stargate-related announcements, and that buildout continues. The company reports operating more than 50 public regions, with media briefings suggesting a push past 100 total regions, including specialized realms.
Oracle has also publicly stated an ambitious goal to build more cloud data centers than its competitors combined, which, while ambitious, underscores a strategy of large-scale regional proliferation.
Capital Investment and AI Readiness
Oracle is committing significant CAPEX to expand its data center footprint. Trade publications noted plans in 2024 to spend approximately $10 billion on data center expansion in 2025, and Oracle’s FY25/FY26 guidance anticipates OCI growth exceeding 70% in FY26, with RPO expected to more than double.
These figures imply sustained build-outs despite constrained supply chains. Even before the Stargate discussions, analysts had suggested that Oracle was developing a pipeline of dozens of new data centers specifically to support AI contracts, signaling that AI-first infrastructure has been a strategic priority for the company.
Where Will the Power Come From?
While headlines about a $300 billion data center investment grab investor attention, the reported 4.5 GW of planned OpenAI–Oracle Stargate capacity is what really captures the data center industry’s focus.
Deploying 4.5 GW suggests a U.S. siting strategy across power-rich or power-expandable nodes, with access to high-capacity transmission, industrial-zoned land, and fast-track permitting—all challenges that data center developers are currently navigating. This signals a pivot toward “power-first” development: identify or create sufficient electricity, then scale compute clusters. OpenAI’s public disclosure underlines how cloud infrastructure is increasingly as much about energy as it is about compute.
Currently, there is no “off-the-shelf” 4.5 GW of unclaimed capacity available in the U.S. However, as has been covered industry-wide, nuclear, natural gas, and behind-the-meter solutions are being fast-tracked to meet growing AI demand.
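As a hedged illustration of why 4.5 GW is such an arresting figure, the sketch below converts campus power into rough rack and accelerator counts. The PUE and per-rack power values are assumptions chosen for illustration, not disclosed Stargate design parameters.

```python
# Rough sizing sketch: what 4.5 GW of campus power could translate to in racks
# and accelerators. Every input below is an assumption for illustration only.
CAMPUS_POWER_GW = 4.5    # reported Stargate capacity target
PUE = 1.2                # assumed power usage effectiveness for liquid-cooled halls
RACK_IT_KW = 130         # assumed IT load per NVL72-class rack (illustrative)
GPUS_PER_RACK = 72       # GPUs per NVL72 rack

it_power_mw = CAMPUS_POWER_GW * 1_000 / PUE      # MW available for IT load
racks = it_power_mw * 1_000 / RACK_IT_KW         # kW of IT power / kW per rack
gpus = racks * GPUS_PER_RACK

print(f"IT power: ~{it_power_mw:,.0f} MW")
print(f"Racks:    ~{racks:,.0f}")
print(f"GPUs:     ~{gpus / 1e6:.1f} million")
```

Under these assumptions the arithmetic works out to roughly two million accelerators, which is why siting, transmission, and generation now dominate the conversation as much as chip supply does.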
Regional Clustering and Site Strategy
Oracle is expected to continue regional clustering—deploying multiple regions within a state or power market—and expand hybrid sovereign footprints in Europe and the Middle East. With public claims about zettascale superclusters and very large Blackwell deployments, Oracle will need sites where 100–500+ MW phases can be repeated, and where grid congestion can be mitigated via upgrades, on-site generation, or long-lead interconnection agreements negotiated in advance.
Oracle has already incorporated liquid-cooled NVL72 racks in its Supercluster deployments. As model sizes and context windows grow, cold plates and rear-door heat exchangers are likely to become standard, with immersion cooling reserved for specialized SKUs. Committing to solutions with minimal local environmental impact or water demand expands the number of feasible future sites.
Global Expansion and Site Targets
Oracle has not published a comprehensive regional development plan, but it has announced a $2 billion investment in Germany to expand AI and cloud capacity in Frankfurt. The company has also expressed interest—without specific commitments—in dozens of additional regions globally through 2025–2027.
Likely targets include major interconnection hubs (Frankfurt, London, Paris, Amsterdam), U.S. metros with strong utility partnerships and transmission availability, and jurisdictions with accelerated permitting for data center and energy projects. There is currently no public information on discussions with RTOs like PJM that would reveal potential U.S. infrastructure sites.
Power-First Campus Design
The 4.5 GW figure signals that Stargate will follow a power-first campus model. Historically, such campuses combine grid PPAs, on-site generation (gas turbines, fuel cells), nuclear (as SMRs become available), advanced geothermal, and high-capacity transmission upgrades. Oracle has not disclosed the exact mix, but industry precedent shows AI-first campuses typically start with utility commitments and collocated generation, emphasizing long-term, reliable power supply.
What Does the Future Hold?
Multicloud as the Default
Multicloud is no longer optional—it is the standard. The OpenAI–Oracle–Azure triangle legitimizes large-scale, production multicloud, providing a framework to architect cross-cloud data governance and GPU tenancy strategies that treat clouds as pools rather than silos.
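As a toy illustration of what treating clouds as pools rather than silos could look like, the Python sketch below routes a GPU job to whichever pool has headroom and satisfies a sovereignty constraint. The pool names, capacities, and placement policy are hypothetical; production schedulers would also weigh interconnect topology, egress costs, and contractual commitments.

```python
# Illustrative only: a toy placement policy that treats GPU capacity across
# clouds as one pool, filtered by data-residency requirements. The pool data
# and labels are hypothetical, not any vendor's real API.
from dataclasses import dataclass

@dataclass
class GpuPool:
    cloud: str          # e.g. "azure", "oci", "gcp" (hypothetical labels)
    region: str
    free_gpus: int
    sovereign_eu: bool  # whether the realm satisfies EU sovereignty rules

def place(job_gpus: int, needs_eu_sovereignty: bool, pools: list[GpuPool]) -> GpuPool | None:
    """Pick the eligible pool with the most headroom; None if nothing fits."""
    eligible = [p for p in pools
                if p.free_gpus >= job_gpus
                and (not needs_eu_sovereignty or p.sovereign_eu)]
    return max(eligible, key=lambda p: p.free_gpus, default=None)

pools = [
    GpuPool("azure", "eastus2", 4_096, False),
    GpuPool("oci", "us-midwest", 16_384, False),
    GpuPool("oci", "eu-sovereign", 2_048, True),
]
print(place(8_192, needs_eu_sovereignty=False, pools=pools))   # -> the 16,384-GPU OCI pool
print(place(1_024, needs_eu_sovereignty=True, pools=pools))    # -> the EU sovereign pool
```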
Sovereignty Matters
Data sovereignty concerns are now mainstream. Organizations operating in the EU or other regulated sectors must design deployments that segregate workloads by realm, ensuring compliance with EU Sovereign Cloud or GovCloud rules. Operational partners must understand these requirements to manage secure, compliant, and efficient deployments.
Oracle’s Evolution
Oracle has evolved from the “database company playing catch-up in cloud” into a central builder of AI-first infrastructure. Its differentiators—GPU supercluster scale, sovereign cloud, and multicloud coexistence—position the company as more than a cost-efficient alternative. The Stargate partnership elevates Oracle to co-author of the next phase of AI data center development, not merely a supplemental compute option.
A National-Scale AI Utility
If the most optimistic reports prove accurate and Oracle continues to meet energization milestones, its data center footprint could become one of the largest AI-centric estates in the world. Oracle’s role in the AI economy could shift from a “database-centric ISV with a cloud” to a national-scale AI utility, delivering compute, data, and sovereignty in alignment with the priorities of the current administration. The OpenAI–Oracle relationship, which began as Azure-adjacent capacity and is expanding into co-developed, multi-GW infrastructure, offers a blueprint for the future of OCI.
As stated by Oracle Co-founder, Executive Chairman and CTO, Larry Ellison:
“The race to build the world’s greatest large language model is on, and it is fueling unlimited demand for Oracle’s Gen2 AI infrastructure. Leaders like OpenAI are choosing OCI because it is the world’s fastest and most cost-effective AI infrastructure.”
From Vision to Results
Oracle’s multiyear partnership with OpenAI and its aggressive expansion of AI-first data centers are more than strategy on paper; they are driving measurable market impact. This can be seen in the company’s latest earnings report, which confirms that the Stargate vision, combined with high-bandwidth GPU infrastructure, sovereign cloud capabilities, and global region proliferation, is resonating with customers and investors alike.
Wall Street’s enthusiastic response underscores that Oracle is no longer just a database company exploring cloud; it’s emerging as a central player in the AI infrastructure ecosystem, translating ambitious long-term projects into tangible growth and momentum.
Oracle’s ‘Astonishing’ Quarter Stuns Wall Street, Targeting Cloud Growth and Global Data Center Expansion
Oracle’s fiscal Q1 2026 earnings report on September 9, along with its massive cloud backlog, stunned Wall Street. The market reacted to huge growth in infrastructure revenue and in remaining performance obligations (RPO), a measure of future revenue from customer contracts that points to significant growth potential and to Oracle’s expanding role in AI, even though earnings and revenue missed analyst estimates.
After the earnings announcement, Oracle stock soared more than 36%, marking its biggest daily gain since December 1992 and adding more than $250 billion to the company’s market value. The surge came despite the lower-than-expected earnings and revenue.
Leaders reported that the company’s RPO jumped 359% in the quarter to $455 billion, a signal of future demand for its cloud services and infrastructure. On the strength of that backlog, Oracle CEO Safra Catz projects that the GPU-heavy Oracle Cloud Infrastructure (OCI) business will grow 77% to $18 billion in the current fiscal year (FY2026) and reach $144 billion in fiscal 2030.
The earnings announcement also briefly made Oracle Co-Founder, Chairman, and CTO Larry Ellison the richest person in the world, with shares of Oracle surging as much as 43% intraday. By the end of the trading day, his wealth had increased by nearly $90 billion to $383 billion, just shy of Tesla CEO Elon Musk’s $384 billion fortune.
Also on the earnings call, Ellison announced that at the Oracle AI World event in October, the company will introduce the Oracle AI Database, which lets customers use the large language model (LLM) of their choice, including Google’s Gemini, OpenAI’s ChatGPT, and xAI’s Grok, directly on top of the Oracle Database to easily access and analyze existing database data.
Capital Expenditure Strategy
These astonishing numbers are due in part to Oracle’s strategy of increasing its capital expenditures (capex) to support AI demand and its cloud strategy, fill its data centers with the latest chips and servers, and build new data centers. Leaders plan a significant increase in capex for FY2026, raising their forecast to $35 billion from a previous estimate of $25 billion.
Oracle CEO Safra Catz said the company signed four multibillion-dollar contracts with three customers in Q1. While she didn’t attribute the new contracts to specific customers, she pointed to a customer roster that spans “the who's who of AI, including OpenAI, xAI, Meta, NVIDIA, AMD, and many others.” The Wall Street Journal subsequently reported that Oracle signed a $300 billion, five-year deal with OpenAI.
The capex spending is targeted and strategic, with primary areas of focus that include:
• Building and expanding data center capacity and cloud regions. Oracle is adding cloud infrastructure regions (including multicloud and sovereign regions) and expanding its Oracle Cloud Infrastructure (OCI) footprint. For example, it’s increasing capacity in Frankfurt, Germany.
• AI infrastructure, including GPUs, networking, and storage. A large share of the capex is going into GPU clusters, high-bandwidth and high-performance networking, and storage that supports large-scale model training and inference. Oracle is also entering into big chip procurement deals to power AI workloads, such as the deal announced in May to spend a reported $40 billion on NVIDIA’s high-performance chips for OpenAI’s new U.S. data center.
• Hardware and system upgrades. Upgrading existing infrastructure to support more demanding workloads: faster interconnects, denser storage, more powerful compute, cooling, and power infrastructure to handle GPU and AI workloads.
• Geographic expansion. Part of the capex is being invested in geographic expansion of cloud infrastructure in Europe and in U.S. data centers.
• Sovereign or regulated cloud infrastructure. There’s also a focus on sovereign cloud capabilities—satisfying regulatory and local data preferences—especially for public and government customers in Europe.
This much spending does create some trade-offs and implications, including the following:
• Free cash flow becomes negative or tight. Operating cash flow is still strong, but because capex is so front-loaded, the heavy investment is pushing free cash flow negative; in FY2025, Oracle’s free cash flow was roughly –$0.39 billion (see the quick arithmetic after this list).
• Debt and leverage increase. To fund this buildout, Oracle is taking on more debt, or at least its net debt is rising at a time of expensive borrowing, and it is making capital allocation decisions that balance investment against shareholder returns in the form of dividends and buybacks.
• High risk goes with high reward. Because building infrastructure is expensive and takes time, Oracle’s leaders are betting that demand, especially for AI and machine learning (ML) workloads, will materialize sufficiently to cover these investments. If it doesn’t, there is a risk of underused capacity.
• Competitive pressures continue. To keep up with AWS, Microsoft, Google, etc., Oracle’s leaders are motivated to spend heavily to stay competitive in region presence, cost, and performance.
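As a quick illustration of the free-cash-flow point above, the sketch below applies the basic identity (free cash flow equals operating cash flow minus capex) to approximate FY2025 figures. The inputs are rounded estimates used only to show why heavy, front-loaded capex can flip strong operating cash flow into slightly negative free cash flow.

```python
# Free cash flow is operating cash flow minus capital expenditures. The FY2025
# inputs below are approximate, rounded figures used purely for illustration.
def free_cash_flow(operating_cash_flow_b: float, capex_b: float) -> float:
    """Return free cash flow in billions of dollars."""
    return operating_cash_flow_b - capex_b

fy25_ocf_b = 20.8     # approximate FY2025 operating cash flow, $B
fy25_capex_b = 21.2   # approximate FY2025 capital expenditures, $B

print(f"FY2025 free cash flow: roughly {free_cash_flow(fy25_ocf_b, fy25_capex_b):.1f} $B")
# -> roughly -0.4 $B, consistent with the slightly negative figure cited above
```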
Earnings Details
Revenue for the quarter, which ended Aug. 31, increased 12% from $13.3 billion a year earlier, according to Oracle’s statement. Net income was about flat at $2.93 billion, or $1.01 per share, compared with $2.93 billion, or $1.03 per share, in the same quarter last year.
Oracle’s OCI revenue grew 55% in USD to $3.3 billion in Q1 FY26, showing increased use of its cloud platform for demanding tasks, including AI.
The company reported free cash flow of –$362 million, despite strong operating cash flow.
Key guidance highlights include:
• Q1 total revenue: $14.9 billion, up 12% in USD. GAAP operating income was $4.3 billion; non-GAAP operating income was $6.2 billion, up 9% year-over-year in USD.
• Operating margins: 29% GAAP, 42% non-GAAP (see the quick check after this list).
• Gross profit: $10.4 billion, a 69.7% gross margin.
• RPO: $455 billion, up 359%.
• GAAP (Generally Accepted Accounting Principles) earnings per share: down 2% to $1.01; non-GAAP earnings per share: up 6% to $1.47.
• Cloud revenue (IaaS plus SaaS): $7.2 billion, up 28% in USD.
• Cloud infrastructure (IaaS) revenue: $3.3 billion, up 55% in USD.
• Cloud application (SaaS) revenue: $3.8 billion, up 11% in USD.
• Fusion Cloud ERP (SaaS) revenue: $1.0 billion, up 17% in USD.
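For readers who want to verify the margin lines, here is a quick check using the rounded figures in the list above; the small difference from the reported 69.7% gross margin is a rounding artifact.

```python
# Quick consistency check on the margin figures above, using the rounded
# revenue, operating income, and gross profit numbers from the bullet list.
revenue_b = 14.9
gaap_op_income_b = 4.3
non_gaap_op_income_b = 6.2
gross_profit_b = 10.4

print(f"GAAP operating margin:     {gaap_op_income_b / revenue_b:.0%}")      # ~29%
print(f"Non-GAAP operating margin: {non_gaap_op_income_b / revenue_b:.0%}")  # ~42%
print(f"Gross margin:              {gross_profit_b / revenue_b:.1%}")        # ~69.8% on rounded inputs (69.7% reported)
```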
"Multicloud database revenue from Amazon, Google, and Microsoft grew at the incredible rate of 1,529% in Q1," Ellison said. "We expect multicloud revenue to grow substantially every quarter for several years as we deliver another 37 datacenters to our three hyperscaler partners, for a total of 71.”
“It was an astonishing quarter—and demand for Oracle Cloud Infrastructure continues to build,” Catz said. “Over the next few months, we expect to sign up several additional multibillion-dollar customers, and RPO is likely to exceed half-a-trillion dollars.”
Promising Financial Outlook
Catz declared, “Oracle is off to a brilliant start to FY26.” She said the scale of Oracle’s recent RPO growth allows the company to make a large upward revision to the Cloud Infrastructure portion of Oracle’s overall financial plan.
“We expect Oracle Cloud Infrastructure revenue to grow 77% to $18 billion this fiscal year (2026)—and then increase to $32 billion, $73 billion, $114 billion, and $144 billion over the subsequent four years,” Catz said. “Most of the revenue in this 5-year forecast is already booked in our reported RPO.”
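For context, the sketch below works out the year-over-year growth rates and compound annual growth rate implied by that trajectory; the dollar figures are taken directly from Catz’s guidance, and everything else is simple arithmetic.

```python
# Implied growth rates behind the OCI revenue trajectory Catz outlined
# (figures in billions of dollars, FY2026 through FY2030).
oci_revenue_b = [18, 32, 73, 114, 144]

for prev, nxt in zip(oci_revenue_b, oci_revenue_b[1:]):
    print(f"${prev}B -> ${nxt}B: {nxt / prev - 1:+.0%} year over year")

years = len(oci_revenue_b) - 1
cagr = (oci_revenue_b[-1] / oci_revenue_b[0]) ** (1 / years) - 1
print(f"Implied four-year CAGR: ~{cagr:.0%}")    # ~68%
```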
Implications for Data Centers
Oracle’s success or failure is important for data centers because it provides the software, infrastructure, and services many organizations need to store, manage, and process massive amounts of data efficiently and securely. Several factors contribute to this:
• Mission-Critical Databases. Oracle Database is one of the most widely used enterprise database platforms in the world, powering mission-critical applications across industries such as finance, healthcare, manufacturing, and telecommunications. Because so many essential business systems rely on Oracle databases, data centers are often designed to support these workloads.
• High Performance and Scalability. Oracle's engineered systems, such as Oracle Exadata, are designed for exceptional performance and scalability, according to the company, allowing data centers to efficiently manage large-scale transaction processing and analytics. These solutions can scale to meet the demands of growing businesses, so they’re well-suited for enterprises with global operations and massive data volumes.
• Cloud and Hybrid Solutions. OCI provides a flexible cloud and hybrid solution to allow organizations to run workloads on-premises, in a private data center, or in Oracle's public cloud. This supports seamless cloud migration so businesses can optimize their IT infrastructure and transition operations without disruption.
• Automation and Management Tools. These tools, including the Oracle Autonomous Database, are designed to allow data centers to reduce manual administration, improve uptime, and optimize performance. This increased automation can lower operational costs, particularly in large-scale environments, by streamlining tasks and ensuring efficiency.
• Integration with Enterprise Systems. Oracle's software is a core part of many organizations' operations, integrating with other key enterprise applications like ERP, CRM, and HR systems. To ensure compatibility, data centers that support these enterprise customers often need Oracle-certified infrastructure.
With record-breaking RPO growth, expanded partnerships across the AI ecosystem, and aggressive guidance for cloud infrastructure and total revenue, Oracle’s management underscores the company’s position as a leader in AI-powered cloud services. Q1’s results and upwardly revised outlook reflect surging demand for both training and inferencing workloads, significant capex commitments, and a broadened moat, or ability to maintain a competitive edge over competitors, driven by proprietary database and networking technologies.
At Data Center Frontier, we talk the industry talk and walk the industry walk. In that spirit, DCF Staff members may occasionally use AI tools to assist with content. Elements of this article were created with help from OpenAI’s GPT-5.
About the Author

David Chernicoff
Theresa Houck
Senior Editor-at-Large
Theresa Houck, Senior Editor-at-Large, is an award-winning journalist with 30+ years of experience. She writes about markets, strategy, and economic trends for EndeavorB2B on topics including healthcare, cybersecurity, AI, manufacturing, industrial automation, energy, data centers, and more. With a master’s degree in communications from the University of Illinois Springfield, she previously served as Executive Editor for four magazines about sheet metal forming and fabricating at the Fabricators & Manufacturers Association, where she also oversaw circulation, marketing, and book publishing. Most recently, she was Executive Editor for The Journal From Rockwell Automation custom publication on industrial automation.
Matt Vincent
A B2B technology journalist and editor with more than two decades of experience, Matt Vincent is Editor in Chief of Data Center Frontier.