Designing the AI Century: 7x24 Exchange Fall ’25 Charts the New Data Center Industrial Stack
At the 7x24 Exchange 2025 Fall Conference in San Antonio (Oct. 19-22), the umbrella theme of “Addressing the Impact of AI” meant connecting boardroom vision to breaker-level reality.
In this roundup of the conference highlights, you'll see how former Google Chief Decision Scientist Cassie Kozyrkov reframed AI as a leadership discipline—where the quality of the question, guardrails, and context determine value. Meanwhile, NVIDIA’s Sean Young showed how “physical AI” and Omniverse digital twins now let builders model and de-risk complexity before a single rack is energized.
For his part, reliability veteran Steve Fairfax pressure-tested nuclear timelines and fuel constraints against spiky AI loads, arguing power must be treated like an unforgiving SLO. Separately at the conference, executive experts from Compass Datacenters and Schneider Electric detailed AI-enabled, system-level maintenance designed in from day zero, while Provident Data Centers’ case study panel rewrote site selection around energy, water, and support-space geometry.
The throughline: leadership, physics, and energy-first planning are the differentiators for who ships AI capacity, and who waits in the queue.
Leading Through Complexity: Cassie Kozyrkov Opens 7x24 Exchange with AI-First Leadership Framework
Opening this year's 7x24 Exchange Fall Conference, former Google Chief Decision Scientist Cassie Kozyrkov challenged data center and infrastructure leaders to look far beyond automation or efficiency. Her keynote, “The Future is AI-First: Are You Ready to Lead?” reframed artificial intelligence not as a technical implementation, but as a leadership discipline—one that demands clarity of vision, mastery of context, and a tolerance for complexity.
Kozyrkov, who architected Google’s AI-first transformation, set a distinctly philosophical tone for the conference, urging attendees to “stop thinking about AI adoption, and start thinking about complexity adoption.” Her message: the next era of leadership will hinge on how effectively organizations navigate the messy, probabilistic, and often paradoxical world of AI systems.
From Optimization to Intelligence: A Brief History of AI’s Maturation
Kozyrkov began with a brisk history lesson—reminding the audience that the term artificial intelligence dates back to 1955, when the field was rooted in optimization mathematics and theoretical automation of human behavior.
For decades, she noted, AI remained little more than an idea—held back by a lack of data and processing power. The turning point came when computing capacity and datasets scaled together, giving rise to the data-driven machine learning revolution that now underpins generative AI models—and, by extension, the data centers that power them.
Her framing distinguished AI from traditional software: while classical code executes explicit instructions, AI “learns from examples,” deriving its own opaque set of rules. That shift—from deterministic programming to statistical patterning—defines the modern complexity frontier for both engineers and executives.
The Reliability Paradox: Guardrails for the Genies We Create
As AI performance improves, Kozyrkov warned, the danger of overconfidence grows. The so-called AI reliability paradox—where the most accurate systems are also the most tempting to trust blindly—demands what she called “safety nets built as if the worst will happen.”
Guardrails, testing regimes, and transparency frameworks, she said, are not afterthoughts—they are the leadership infrastructure of the AI age.
Her analogy of the “genie and the unskilled wisher” resonated with the audience: the risk, she suggested, lies not in the AI’s capability but in our ability to formulate precise, bounded requests.
Generative AI and the New Economics of Advice
Kozyrkov also unpacked the linguistic revolution underlying tools like ChatGPT, Claude, and Gemini. By automating language—the universal medium of human collaboration—generative AI transforms the economics of decision-making itself. “Advice has become cheap,” she said. “Judgment is now expensive.”
That inversion creates what she termed the new economics of advice: a shift from valuing answers to valuing the quality of questions and the contextual framing behind them. Her practical rule—“context is currency”—drew nods from operators familiar with the difference between useful analytics and meaningless dashboards.
She cited recent surveys showing that nearly 40% of workers received unusable AI-generated content (“work slop”) in the past month, costing recipients hours of lost time. The remedy, she argued, is not rejection but discipline and curation—leadership that rewards clarity, verification, and continuous learning.
AI Leadership as Organizational Muscle
The throughline of Kozyrkov’s keynote was unmistakable: AI is no longer an IT project. It’s a test of organizational cognition.
“Most companies fail to derive value from AI,” she observed, pointing to studies showing that 95% of generative AI deployments produce no measurable return. The problem isn’t the technology—it’s leadership passivity. When executives delegate AI to technical teams rather than steering its direction, value creation stalls.
Instead, she urged leaders to cultivate “chimeric talent”—workforces that blend human adaptability with machine fluency. In this emerging model, every employee becomes an experimenter, using AI as a mirror for their own learning style. “The best way to learn AI,” Kozyrkov said, “is to use AI itself.”
From Data to Vision: The Question Becomes the Asset
Her closing message distilled the AI era’s core paradox into one sentence:
“As answers become cheap, the question becomes the asset.”
For the data center community, that insight lands close to home. As Kozyrkov noted, the coming generation of AI agents—language-driven systems that combine multimodality, world models, and tool use—will rely on ever-larger volumes of video, imagery, and causal data. Supporting that shift will mean more data centers, more energy, and greater architectural complexity.
But success, she said, will not belong to those who simply scale compute. It will belong to those who can ask—and operationalize—the right questions.
Designing the AI Factory: NVIDIA’s Sean Young on Digital Twins, Physics-Informed AI, and the New Industrial Stack
If Cassie Kozyrkov’s keynote reframed AI as a leadership discipline, NVIDIA’s Sean Young brought it back down to the factory floor—literally. In his session “Harnessing AI and Digital Twins for Data Center ‘AI Factories,’” Young, NVIDIA’s Director of AEC, Geospatial, and AI Solutions, charted how artificial intelligence is reshaping every phase of data center creation: from the first line drawn in CAD to real-time operations within facilities designed to train and run AI itself.
It was a presentation dense with engineering specificity and visual demonstration, but the underlying narrative was clear: AI is becoming the design language, the construction foreman, and the operational brain of modern infrastructure.
From Text to Geometry: The Rise of Agentic AI in AEC
Young opened by describing how diffusion models—the same class of generative AI that powers image creation—are being embedded into CAD and BIM tools like SketchUp Diffusion, allowing designers to move from rough sketches to photorealistic concepts in minutes. Engineers, he stressed, remain “essential,” not replaced but augmented by AI that can now translate human intent into parametric geometry.
Through the Model Context Protocol (MCP), NVIDIA is integrating large models directly with professional design software like Rhino, enabling users to “type geometry into existence.” That capability, he said, will soon evolve into agentic AI—self-prompting systems that can interpret RFPs, generate full building geometries, and integrate structural physics checks before a human ever reviews a drawing.
Young described this as outcome-based design: feed an AI agent your goals—square footage, power budget, target PUE—and it returns an optimized structure, grounded in finite-element physics simulations that catch design errors before construction begins.
Safety, Vision, and the Era of “Physical AI”
Moving from design to the jobsite, Young introduced NVIDIA’s Metropolis platform—a vision AI stack already monitoring hundreds of cameras across active construction sites. Built on NVIDIA’s Cosmos “world model,” the system understands physical laws well enough to predict collisions, stop crane movements, or trigger emergency alerts autonomously.
This, he said, is the first wave of what NVIDIA calls “physical AI”—neural networks trained not on text or images, but on physics-informed data, enabling predictive modeling for domains like computational fluid dynamics, earthquake resilience, and flood analysis.
In industries “constrained by the laws of physics,” Young noted—MEP, water, construction, and power—AI can only advance as far as the fidelity of its physics grounding. Using its PhysicsNeMo and PINN (physics-informed neural network) frameworks, NVIDIA now runs on a single GPU simulations that once required supercomputers, collapsing timelines from weeks to near-real-time predictive analytics.
Digital Twins as the New Operating System for Infrastructure
The heart of Young’s presentation centered on digital twins—real-time, physics-based virtual models that simulate everything from chip production to weather patterns. NVIDIA, he revealed, now maintains twins of its GPUs, its manufacturing plants, and even the Earth itself, through its Earth-2 platform on Omniverse, used for street-level weather and flood prediction.
He walked attendees through the workflow of a digital twin: integrating 3D geometry from Revit, SolidWorks, and Creo, geospatial data from Bentley’s Cesium, and component data from MEP vendors into a unified USD (Universal Scene Description) format. Once normalized in Omniverse, the twin simulates light, motion, airflow, and mass—enabling “what if” experimentation for design, construction sequencing, and operational control.
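For readers less familiar with USD, the sketch below shows what that normalization step can look like at its simplest: tool-specific exports referenced into one shared stage. It is a minimal illustration only; the file paths and prim names are hypothetical, and it assumes nothing beyond the open-source pxr (OpenUSD) Python bindings rather than any Omniverse-specific API.

```python
# Minimal sketch: aggregate multi-tool exports into one USD stage.
# Paths and prim names are hypothetical; requires the open-source pxr bindings.
from pxr import Usd, UsdGeom

stage = Usd.Stage.CreateNew("campus_twin.usda")
UsdGeom.SetStageMetersPerUnit(stage, 1.0)       # keep every tool in one unit system

root = UsdGeom.Xform.Define(stage, "/Campus")
stage.SetDefaultPrim(root.GetPrim())

# Reference geometry exported (and pre-converted to USD) from different tools.
assets = [
    ("DataHall_A", "./exports/revit/data_hall_a.usd"),
    ("CDU_Skid",   "./exports/solidworks/cdu_skid.usd"),
    ("Switchgear", "./exports/creo/switchgear.usd"),
]
for name, path in assets:
    prim = stage.DefinePrim(f"/Campus/{name}", "Xform")
    prim.GetReferences().AddReference(path)     # non-destructive composition

stage.GetRootLayer().Save()
```

Because USD composition is layered and non-destructive, simulation results or operational telemetry can later be added to the same stage without touching the source exports, which is what makes the format attractive as a common backbone for the workflow Young described.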
The implications for data centers were direct. In a live demo, Young showed how a digital twin can act as a 3D operational dashboard, modeling AI training and inference loads, thermal and water-flow dynamics, and rack-level energy use. Integrated with partners like Cadence, ETAP, Vertiv, and Schneider Electric, the platform allows operators to visualize heat maps, tweak cooling parameters, and project the effects of different hardware mixes or workloads—all before a single server spins up.
“You can’t build an AI factory without a twin,” Young said flatly. “You have to simulate complexity before you deploy it.”
Energy Efficiency and the “Watts per Token” Economy
As the conversation turned toward data centers themselves, Young outlined how NVIDIA’s Grace Blackwell Superchip—which pairs a Grace CPU with Blackwell GPUs on a single module—advances not just performance but energy efficiency by cutting the overhead between compute components. Over the past decade, he said, NVIDIA has reduced its own energy footprint by the equivalent of 100,000 tons, emphasizing that “watts per token” is now the ultimate metric of AI efficiency.
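Stripped to its arithmetic, “watts per token” is simply energy drawn divided by tokens produced over the same interval. The toy calculation below illustrates the metric's definition only; every number in it is a made-up placeholder, not an NVIDIA figure.

```python
# Illustrative only: how a watts-per-token (equivalently, joules-per-token)
# figure is derived. All numbers are hypothetical placeholders.
def joules_per_token(avg_power_watts: float, duration_s: float, tokens: int) -> float:
    """Energy per token = average power draw x elapsed time / tokens served."""
    return (avg_power_watts * duration_s) / tokens

# Example: a rack averaging 120 kW that serves 50 million tokens in one hour.
energy = joules_per_token(avg_power_watts=120_000, duration_s=3_600, tokens=50_000_000)
print(f"{energy:.2f} J per token")   # ~8.64 J/token under these made-up numbers
```

Lower is better, and the same ratio can be tracked at chip, rack, or facility scale.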
Scaling this compute demands equally sophisticated infrastructure. Through NVLink spines carrying more than 5,000 copper cables per rack, plus custom networking silicon from its Mellanox acquisition, NVIDIA connects GPUs into massive shared-memory systems that function as “one colossal GPU.”
Yet Young was careful to underscore a point relevant to every engineer and operator in attendance: NVIDIA doesn’t build data centers—it collaborates to enable them. Power systems, chillers, CDUs, and water loops, he said, depend on a global ecosystem of MEP and construction partners.
“The opportunity isn’t just data centers,” he noted. “It’s data centers plus power plus water.” Increasingly, he added, AI campuses are being co-located with water treatment facilities, merging industrial design and environmental engineering.
The Fourth Industrial Revolution: From Digital Twins to Humanoid Robotics
Young closed with a sweeping historical analogy—drawing a line from the steam engine to electrification to today’s AI-driven fourth industrial revolution. In his telling, data centers have become the AI factories of this era—sites that don’t merely process data but produce intelligence.
He also forecast the rise of humanoid robotics, citing NVIDIA’s focus on human-form robots as a practical choice: they fit existing built environments and can leverage abundant human training data. “We’re training machines to work where humans already can,” he said. “That’s the fastest path to scale.”
The session ended with an open invitation to collaborate—through shared digital twins, joint reference designs, and industry partnerships across engineering, architecture, and operations. In NVIDIA’s view, the next generation of “AI factories” will be built not just with GPUs and racks, but with shared simulation environments linking every discipline in the data center lifecycle.
SMRs and the AI Power Gap: Steve Fairfax Separates Promise from Physics
If NVIDIA’s Sean Young made the case for AI factories, Steve Fairfax offered a sobering counterweight: even the smartest factories can’t run without power—and not just any power, but constant, high-availability, clean generation at a scale utilities are increasingly struggling to deliver.
In his keynote “Small Modular Reactors for Data Centers,” Fairfax, president of Oresme and one of the data center industry’s most seasoned voices on reliability, walked through the long arc from nuclear fusion research to today’s resurgent interest in fission at modular scale. His presentation blended nuclear engineering history with pragmatic counsel for AI-era infrastructure leaders: SMRs are promising, but their road to reality is paved with physics, fuel, and policy—not PowerPoint.
From Fusion Research to Data Center Reliability
Fairfax began with his own story—a career that bridges nuclear reliability and data center engineering. As a young physicist and electrical engineer at MIT, he helped build the Alcator C-Mod fusion reactor, a 400-megawatt research facility that heated plasma to 100 million degrees with 3 million amps of current. The magnet system alone drew 265,000 amps at 1,400 volts, producing forces measured in millions of pounds. It was an extreme experiment in controlled power, and one that shaped his later philosophy: design for failure, test for truth, and assume nothing lasts forever.
When the U.S. cooled on fusion power in the 1990s, Fairfax applied nuclear reliability methods to data center systems—quantifying uptime and redundancy with the same math used for reactor safety. By 1994, he was consulting for hyperscale pioneers still calling 10 MW “monstrous.” Today’s 400 MW campuses, he noted, are beginning to look a lot more like reactors in their energy intensity—and increasingly, in their regulatory scrutiny.
Defining the Small Modular Reactor
Fairfax defined SMRs as 30–300 MW reactors, built in factories, shipped to site, and deployed in “packs” that share controls and fuel logistics. They promise affordable growth and repeatable quality—and they can theoretically scale with the digital load curve.
Globally, roughly 200 SMR designs are active, he said, with the U.S. the most engaged. But despite years of investment, no NRC combined construction-and-operating licenses have been granted for SMRs to date. Most current construction activity is non-nuclear—switchyards, turbine halls, or balance-of-plant work. Budgets have crept upward, eroding early cost optimism.
The picture isn’t all bleak. Multiple executive orders are pushing agencies to fast-track test reactors and new licensing frameworks. The ADVANCE Act aims to streamline microreactor permits, and private operators like Google and Kairos Power have announced plans for colocated reactors by 2035. Yet as Fairfax noted, “ambition is not the same as authorization.”
From Early Reactors to Regulatory Reboots
Tracing the lineage, Fairfax reminded attendees that early commercial U.S. reactors—Dresden Unit 1 at 192 MW, Yankee Rowe at 185 MW—would technically qualify as SMRs today. Each ran for decades, but both shut down for economic reasons, not technical failure, after safety rules tightened in the post–Three Mile Island era.
The 1990s brought new NRC rules allowing for advanced, passively safe designs, and the 2000s saw the rise of NuScale, spun out of Oregon State University with DOE backing. After beginning its NRC process in 2008, NuScale achieved design certification in 2023—15 years later—only to see its first project canceled over cost escalation. It immediately reapplied for a 77-MW version, seeking scale economies.
Fairfax placed this history in context: “We’ve been here before. We’ve built small, we’ve built safe, but we’ve never built cheap.”
Fuel, Enrichment, and the HALEU Bottleneck
Nuclear’s power density remains unmatched—100,000 to 5 million times greater than fossil fuels. But fuel logistics are the weak link in the SMR value chain. Most current reactors use 3–5% enriched uranium; many SMR concepts rely on HALEU (high-assay low-enriched uranium, typically 10–20%), allowing smaller cores and longer cycles.
The catch? The U.S. has only one commercial enrichment facility producing about one-third of domestic fuel needs—and it’s foreign-owned. Centrus Energy operates a pilot plant producing 10–20 tons per year of HALEU—barely enough for a single modest fleet. Meanwhile, 27% of 2023’s enriched uranium came from Russia, with exemptions carved out of sanctions.
That dependence, Fairfax warned, is a strategic and political vulnerability that could “derail the SMR story before it leaves the station.”
Executive Orders and the Return of Nuclear Industrial Policy
Fairfax dissected recent presidential executive orders seeking to “reform” the NRC, fund DOE reactor construction, and restart domestic fuel recycling. The directives target everything from reducing NRC staffing to accelerating licensing, to building 10 new gigawatt-class reactors by 2030.
He acknowledged the urgency but cautioned against “policy whiplash.” The NRC was founded in 1974 to separate promotion from regulation, ensuring public safety remained its sole mission. Reintroducing a dual mandate to promote nuclear energy risks undermining credibility—and triggering lawsuits that slow progress further.
The fourth order, notably, calls for advanced reactors to power AI data centers at DOE sites, allocating a mere 20 tons of HALEU to seed the effort. “That’s a rounding error in annual need,” Fairfax said, underscoring the scale gap between aspiration and material reality.
SMRs and Data Centers: Fit or Fantasy?
The prospect of colocating data centers with SMRs is compelling: 24/7 baseload power, on-site reliability, and a decarbonization narrative investors love. But Fairfax’s engineering math told a harder truth:
- Safety: SMRs aim for core-damage frequencies of 1 in 1–100 million reactor-years. Even so, a fleet of 1,000 SMRs operating for a decade works out to roughly a 1% aggregate chance of core damage (a back-of-envelope reconstruction follows after this list). Public perception, amplified by social media, would magnify even a minor incident.
- Reliability: U.S. reactors achieve ~90% capacity factors, but only under specialized operators. “Running a reactor is not a data center company’s side hustle,” he quipped.
- Economics: High CAPEX and long lead times mean nuclear favors steady, full-load operation—the opposite of AI’s spiky, ramping demand curves.
- Grid Integration: FERC has rejected attempts to circumvent utility regulation through direct PPAs with nuclear plants. NRC mandates off-site power redundancy, preventing fully behind-the-meter operation.
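To make that safety bullet concrete: 1,000 reactors running for ten years accumulate 10,000 reactor-years of exposure, and at the less favorable end of the quoted range (a core-damage frequency of one in a million per reactor-year) the chance of at least one event across the fleet lands near 1%. Fairfax did not spell out his assumptions, so the short reconstruction below is illustrative only.

```python
# Illustrative reconstruction of the fleet-level figure, assuming the less
# favorable end of the quoted range: CDF = 1e-6 per reactor-year.
cdf_per_reactor_year = 1e-6
reactors, years = 1_000, 10
reactor_years = reactors * years                      # 10,000 reactor-years

# Probability of at least one core-damage event across the fleet:
p_at_least_one = 1 - (1 - cdf_per_reactor_year) ** reactor_years
print(f"{p_at_least_one:.2%}")                        # ~1.0%
# At the favorable end (1e-8 per reactor-year), the same fleet exposure
# yields roughly a 0.01% chance instead.
```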
For AI workloads swinging tens or hundreds of megawatts in milliseconds, SMRs are simply too inertial. Fairfax suggested hybrid architectures—pairing nuclear baseload with gas turbines, battery storage, or synthetic fuels—to handle ramping without violating reactor stability or licensing.
Manufacturing Reality: Factories Need Orders
Fairfax emphasized that the “M” in SMR—modular—implies factory production. Building one-off units misses the economic point. To drive costs down, the industry needs a Boeing-style order pipeline spanning 4–5 years, complete with deposits and guaranteed offtake. “Right now,” he said, “no one’s order book looks like that.”
The Russian Benchmark—and the Global Gap
He pointed to Russia’s working SMRs as proof of concept: 55-MW floating units serving Arctic towns and icebreakers, operating between 20–100% load with 6% per minute ramp rates, six-year refueling intervals, and a 60-year design life. The difference, he said, is context: those SMRs serve isolated, captive loads—not open markets with AI volatility and public scrutiny.
Cautious Optimism, Rational Patience
Audience Q&A underscored the mood: cautious optimism, tempered by realism. SMRs might eventually replace aging large reactors or serve remote or defense applications before they meaningfully reach commercial data centers. Fairfax predicted decades of development, with “successes and failures, accidents and lawsuits” along the way—just like every other chapter in nuclear history.
Still, he left the door open for incremental collaboration: nuclear engineers, hyperscale planners, and MEP contractors learning each other’s languages now, to shorten the curve later.
“If you want nuclear to power AI,” he concluded, “treat it like the most unforgiving service-level objective you’ll ever design. Engineer it, staff it, and license it like lives depend on it—because they do.”
For an industry staring down multi-gigawatt power queues, Fairfax’s message landed as both caution and call to maturity. AI may demand revolutionary energy, but delivering it will still take generational patience.
Designing for AI-Enabled Maintenance: How Compass and Schneider Electric Are Re-Engineering Uptime
Following Steve Fairfax’s nuclear-scale perspective on powering AI, the 7x24 Exchange conversation shifted back inside the facility—to how operators keep these increasingly complex systems running. In “Designing for AI-Enabled Maintenance Strategy,” Nancy Novak, Chief Innovation Officer at Compass Datacenters, and Wendi Runyon, Vice President of Global Services Incubation at Schneider Electric, explored how artificial intelligence, digital commissioning, and prefabrication are converging to reinvent the service model for hyperscale operations.
Their thesis was simple but urgent: the next reliability revolution won’t come from adding redundancy—it will come from designing for serviceability and embedding AI into the maintenance lifecycle from day zero.
The Perfect Storm: Growth, Labor, and the Limits of Old Habits
Both speakers framed the challenge in stark numbers. Global data-center capacity is expected to double within five years, driven by AI, electrification, and digital demand. Yet the supporting workforce—especially in mechanical, electrical, and commissioning trades—has fallen dangerously behind. Nearly one million skilled positions remain unfilled worldwide.
At the same time, climate volatility, grid instability, and accelerated construction schedules are creating what Novak called “a perfect storm of growth and fragility.” The traditional break-fix and warranty-expiry approach to maintenance, Runyon added, “simply can’t scale to the tempo of AI.”
The answer, they argued, is to bake proactive asset management into design, not bolt it on after turnover.
From Reactive to Predictive: AI as the New Maintenance Layer
Runyon outlined the pivot from calendar-based maintenance to AI-driven, condition-based care. Instead of scheduled truck rolls and fixed intervals, sensors feed continuous telemetry into predictive models that recommend interventions only when performance metrics deviate.
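The speakers did not share implementation details, but the core idea—intervene only when telemetry drifts from a learned baseline—can be shown in a deliberately simple form. The sketch below flags a single sensor stream when it moves well outside its recent statistical range; the sensor name, window, and threshold are hypothetical, and production systems would rely on the richer, physics-informed models the session described.

```python
# Simplified condition-based trigger: recommend intervention only when a
# reading drifts far outside its recent baseline. Names, window sizes, and
# thresholds are hypothetical placeholders.
from collections import deque
from statistics import mean, stdev

class DriftMonitor:
    def __init__(self, window: int = 288, z_threshold: float = 4.0):
        self.history = deque(maxlen=window)   # e.g., 24 hours of 5-minute samples
        self.z_threshold = z_threshold

    def update(self, reading: float) -> bool:
        """Return True when the reading deviates enough to warrant a look."""
        flagged = False
        if len(self.history) >= 30:           # wait for a minimal baseline
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(reading - mu) / sigma > self.z_threshold:
                flagged = True
        self.history.append(reading)
        return flagged

# Usage: one monitor per telemetry stream, e.g., a CDU supply temperature.
cdu_supply_temp = DriftMonitor()
if cdu_supply_temp.update(31.7):              # hypothetical reading, degrees C
    print("Recommend inspection: reading outside learned baseline")
```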
The catch is data. Today, 96 percent of construction and facility data is unstructured or unused. AI can change that—but only if the industry federates its data and develops physics-based models trained on reliable sensor history.
Runyon described AI not as disruptive, but as a “bolt-on collaborator” that learns from the existing environment without rewriting operational rules—akin to a car that studies its driver’s habits to anticipate maintenance.
Thinking in Systems, Not Assets
Novak pressed for a system-level mindset, arguing that maintenance strategies must shift from monitoring isolated components to understanding cause-and-effect across subsystems—the way aviation and automotive industries already do.
“The data center is a living system,” she said. “You don’t maintain a chiller or a UPS in isolation—you maintain how they interact.” Developing analytics at the system level enables true predictive capability and moves operators toward a “built-to-last, not built-to-replace” philosophy.
Designing for Circularity and Serviceability
A key theme was designing for the full lifecycle, from serviceability to sustainability. Embedding sensors and network connectivity during construction costs less than 1 percent of total equipment cost, Novak noted, yet dramatically improves total cost of ownership by enabling continuous monitoring and remote diagnostics.
Designing for circularity—the ability to reuse or repurpose components—also mitigates supply-chain volatility and supports sustainability goals. “Future-proofing is no longer an upgrade strategy—it’s a design mandate,” Runyon said.
The Prefab and Digital Commissioning Advantage
Compass’s “Fastest to Ready” model illustrates what this shift looks like in practice. By manufacturing more than 75 percent of components off-site, the company brings precision, safety, and repeatability to projects. Off-site assembly of data-hall modules and equipment yards reduces on-site labor exposure, improves quality, and allows facilities to reach dry-in within six weeks.
Runyon explained how factory-based digital commissioning now extends this model: sensors, control systems, and building-management integrations are all tested before shipment. The result is “risk retired before ribbon cutting”—and data flowing from day one for AI training, simulation, and remote troubleshooting.
The results are quantifiable:
- 20 percent faster time-to-ready for new facilities.
- 20 percent lower service-contract cost versus calendar-based models.
- 40 percent fewer unnecessary interventions over five years.
Workforce, Well-Being, and the Human-AI Partnership
Both speakers emphasized that predictive maintenance doesn’t eliminate people—it elevates them. Reducing reactive site visits means fewer overnight trips and hazardous conditions for field technicians. That translates to lower attrition and a more sustainable career path for the skilled trades that anchor the industry.
To build that workforce, Compass and Schneider are co-authoring curricula with Southern Methodist University, Jason Learning, and vocational programs like the new Red Oak Technical School in Texas, introducing data-center engineering concepts to students as early as middle school.
They’re also deploying extended-reality (XR) tools for training and support—allowing technicians to practice high-voltage procedures virtually, and enabling remote experts to guide on-site crews in real time. Robots, meanwhile, are beginning to handle simple, repetitive tasks that free humans for higher-value diagnostics.
Partnering for Power Quality and Grid Integration
On the power side, Schneider is advancing behind-the-meter analytics to predict power-quality issues for AI workloads. By collecting data from every node—from grid to plug—its systems can smooth fluctuations that might otherwise trip cooling loops or degrade GPU performance. Runyon noted that well-instrumented campuses could even support the grid, curtailing or exporting power during peaks to improve regional stability.
Collaboration as the Only Path Forward
Novak closed on a cultural challenge: “We can’t treat partners as vendors anymore.” Building an AI-ready maintenance ecosystem requires long-term collaboration between designers, builders, operators, and service firms. True transformation, she said, demands a five-year runway, not quarterly procurement cycles.
Runyon’s final reminder encapsulated the tone of the session:
“The slowest you’re ever going to go is today.”
AI, she stressed, is not replacing the data-center workforce—it’s extending its reach. Those who learn to harness it now will define the standard for uptime, safety, and sustainability in the decade ahead.
Case Study from the Front Lines: How Provident Is Rewriting Site Selection Around Power, Water, and AI Density
If Kozyrkov set the leadership frame, Young mapped the digital twin toolchain, and Fairfax stress-tested nuclear timelines, Provident Data Centers brought it all back to dirt-level reality: where do you actually put an AI campus, how do you power it, and what does that decision do to your site plan?
In “Case Study for the Future: How Provident Data Centers Strategizes Alternative Power to Meet AI Demand,” Denitza Arguirova, PhD (Provident), Chris Hastings, PE (Vanderweil), and Dutch Wickes, AIA (CI Mission Critical) laid out a playbook for energy-first site development in a world where transmission queues are jammed, densities have tripled, and water is the second gate after megawatts.
Energy-First is the New Site Selection
The talk’s headline message: traditional “find-the-substation” diligence is obsolete. Power scarcity and lead times now force an energy-first posture that evaluates all plausible sources in parallel—utility upgrades and on-site generation (gas turbines, recips, fuel cells, batteries, synchronous condensers), plus what each implies for land, water, and community acceptance.
That validation stack runs in three lanes:
- Component capacity: transmission and gas line sizing, pressures, and interconnect topology;
- Power flexibility: the ability to swing among sources and maintain system balance from rack to grid; and
- Grid reliability: TSO frameworks, backup paths, and emergency schemes.
This is also now a tooling problem: providers are leaning on data-driven site screens (e.g., LandGate-style datasets for ERCOT/MISO) that estimate upgrade costs, available capacity, and scenario outcomes to compress decisions.
Finally, a practical twist: Land strategy itself is changing. Heavy deposits and tenant-signed power contracts mean leases are often favored over fee purchase to preserve flexibility and de-risk long queues. Teams are planning decommissioning from day one.
Demand Has Gone Multi-Gigawatt—And the Grid Feels It
The panel sized the moment bluntly: ~12,000 active interconnection requests, with utilities simultaneously wrestling with renewables integration, electrification (EVs, heat pumps), and aging T&D. Filtering the queue is getting expensive—$6M+ deposits at Oncor and $7M+ at Georgia Power just to keep a ticket in line, plus financial-credibility vetting—yet demand still blows through the sieve.
AI’s bite of U.S. load is projected to rise from ~2.5% to ~7.5% by 2035. Inside the fence, density has more than tripled in a couple of years; the “mega-campus” expectation is shifting from phased buildings to simultaneous turn-up.
On-Site Generation: Bridge, Backbone, or Both?
According to the panel, teams are increasingly “chasing the fuel”—siting near high-pressure gas corridors and designing on-site power as either bridge (to utility upgrades) or permanent. Interconnection choices split into parallel (no export) vs. export (sell surplus to grid). Technology menus include gas turbines, recips, fuel cells—often dual-fuel with liquid backup.
The costs are real: long-term service agreements (LTSAs) can rival engine CAPEX over a plant’s life. But staged development can pencil—simple-cycle for speed, then a step up to combined-cycle (adding a heat-recovery steam generator and steam turbine) to push, say, 120 MW to 200 MW. Note the trade: combined cycle boosts output and efficiency but nearly doubles water use versus simple cycle. Mitigations include air-cooled condensers, and the team spotlighted thermally driven cooling (absorption chillers on turbine/recip exhaust) to claw back PUE.
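Taking the panel’s round numbers at face value, and treating “nearly doubles water use” as total water flow, a quick back-of-envelope shows why the step-up can still pencil: total water roughly doubles, but because output also climbs from 120 MW to about 200 MW, water intensity per megawatt-hour rises only about 20 percent. The figures below are normalized placeholders, not design values.

```python
# Back-of-envelope on the simple-cycle vs. combined-cycle trade, using the
# panel's round numbers (120 MW -> ~200 MW, total water use ~2x).
# Water is normalized to the simple-cycle baseline; these are not design values.
simple_mw, combined_mw = 120, 200
simple_water, combined_water = 1.0, 2.0        # normalized total water use

output_gain = combined_mw / simple_mw                            # ~1.67x
water_intensity_change = (combined_water / combined_mw) / (simple_water / simple_mw)

print(f"Output gain: {output_gain:.2f}x")
print(f"Water per MWh: {water_intensity_change:.2f}x")           # ~1.20x
```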
Finally, a forecast to watch: By 2030, roughly 40% of data centers could rely on primary on-site generation. That elevates water to the co-gate with power—sometimes driving desalination, treatment, or large storage on campus.
The New Campus Geometry: Support Space Swallows the Site
The panel noted how ten years ago, a balanced plan was ~40% data hall / 60% support. Today, support can hit ~70%—a function of higher voltage yards, switchgear, gas handling, water plants, batteries, and bigger mechanical systems. For today's data center builds, expect:
- Massive duct banks stitching 10–18+ buildings,
- PV fields (rule-of-thumb 4–5 acres/MW) typically powering admin/aux loads,
- Battery blocks, carbon capture pilots, LNG storage, fire pumps, logistics and security compounds.
Also, thermal is now mixed-mode by default: with 20–30 kW per cabinet (and rows hitting 100–200 kW or more), designs skew ~80% liquid / 20% air. Equipment placement is changing (UPS out, other gear in), and pre-engineered metal buildings (PEMBs) push more heavy kit to grade, expanding the yard.
The bottom line? Parcels get larger; if not, planning must get surgical. Early, integrated MEP and civil involvement is no longer best practice—it’s survival.
Community, Regulators, and the Social License to Scale
The panel stressed education as entitlement: Before a shovel hits the ground, align city officials, regulators, utilities, and communities on what an AI campus actually needs—aesthetics, noise/vibration, water, land use, energy flows—and how mitigations will be enforced. Partnerships with utilities to co-plan upgrades reduce surprises and can unlock shared infrastructure.
Not Every Future Is a Gigawatt: The Infill Opportunity
While 600-MW-and-up gets the headlines, the panel saw sustained demand for 30–75 MW infill projects near load. Different utility processes (distribution vs. large-load) can shorten timelines. With ~25% of demand skewing to extreme AI density, there’s still a large market for 50–75 MW high-quality, nearer-term sites—if power, water, and cooling are credibly addressed.
Operator Takeaways: A Playbook for 2026 Builds
Per the Provident Data Centers panel, a checklist for such a playbook might include the following:
- Lead with energy. Vet utility, gas, and on-site options together; structure for flex among sources.
- Model water early. Power choices ripple into water plants, storage, and discharge; design mitigations up front.
- Design the yard first. Substations, gas, HRSGs, batteries, treatment, and logistics now set the footprint; halls follow.
- Stage wisely. Start simple-cycle for schedule; convert to combined-cycle for efficiency and output when ready.
- Engineer vibration & noise in schematic. Retrofits are costly; server sensitivity and neighbors won’t wait.
- Use data-driven screens. Capacity, upgrade-cost, and scenario tools compress diligence and clarify tradeoffs.
- Structure land & contracts for uncertainty. Leases + power contracts preserve optionality amid queue risk; plan decommissioning on day one.
- Invest in the social license. Early, transparent community engagement reduces later friction and timeline drift.
Closing the Loop: A Cohesive AI-Era Build Order
Taken together, the 7x24 2025 Fall Conference highlights recounted above depict a coherent story for the AI era:
- Lead with clarity (Kozyrkov): The question—and guardrails—come first.
- Design-in simulation (NVIDIA/Young): Twin it before you build it.
- Treat power like an SLO (Fairfax): Physics, fuel, and policy set the pace.
- Engineer serviceability (Compass/Schneider): AI-enabled maintenance from day zero.
- Plan campuses around energy (Provident/Vanderweil/CI): Power, water, and support space now shape the site—and the schedule.
The result isn’t just bigger data centers: it’s a new industrial stack where leadership discipline, physics-informed design, and energy-first planning decide who delivers AI capacity on time—and who waits in the queue.
At Data Center Frontier, we talk the industry talk and walk the industry walk. In that spirit, DCF Staff members may occasionally use AI tools to assist with content. Elements of this article were created with help from OpenAI's GPT-5.
Keep pace with the fast-moving world of data centers and cloud computing by connecting with Data Center Frontier on LinkedIn, following us on X/Twitter and Facebook, as well as on BlueSky, and signing up for our weekly newsletters using the form below.
About the Author
Matt Vincent
A B2B technology journalist and editor with more than two decades of experience, Matt Vincent is Editor in Chief of Data Center Frontier.