DCF Trends Summit 2025 - AI Is the New Normal: Building the AI Factory for Power, Profit, and Scale

The sixth in a series of articles recapping key sessions from the Data Center Frontier Trends Summit 2025, held Aug. 26–28, 2025, in Reston, Va.
Dec. 19, 2025
11 min read

Key Highlights

  • AI workloads are now routinely pushing rack densities into the 44–150 kW range, requiring new infrastructure approaches that support multi-density configurations within the same facility.
  • Location strategies now favor regions with abundant, stranded, or renewable energy sources, often outside traditional cloud hubs, to meet the power demands of AI training and inference.
  • Liquid cooling is becoming a foundational technology for AI data centers, enabling energy recovery and dynamic adaptation to fluctuating workloads, unlike traditional air-cooled systems.
  • Facilities must be designed for rapid reconfiguration and overbuilding, with flexible shells and infrastructure that can evolve alongside hardware cycles and AI innovation.
  • Consumer AI adoption, exemplified by ChatGPT, is accelerating infrastructure needs, making speculative, large-scale AI capacity buildouts more common and financially viable.

By 2025, the data center industry’s long-running evolution toward higher density and greater efficiency has crossed a threshold. Artificial intelligence is no longer an emerging workload to be accommodated at the margins of traditional facilities. It is reshaping the core assumptions of how data centers are designed, financed, sited, and operated.

That reality framed the discussion in “AI Is the New Normal: Building the AI Factory for Power, Profit, and Scale,” a 2025 Data Center Frontier Trends Summit session moderated by long-time DCF Contributing Editor Bill Kleyman, co-founder and CEO of Apolo. Bringing together executives actively building and operating AI-focused infrastructure, the panel explored what it means to move from incremental adaptation to wholesale transformation—and why yesterday’s data center playbook no longer applies.

Across the discussion, one point was unmistakable: the AI factory is not a metaphor. It is a distinct infrastructure archetype, defined by extreme power density, liquid cooling, rapid deployment timelines, and a business model tightly coupled to GPU utilization and AI service economics.

From Incremental Innovation to Structural Break

Ken Patchett, VP of Data Center Infrastructure at Lambda, opened the conversation by putting the industry’s current moment into historical context. For decades, data center “innovation” largely meant squeezing more compute into the same air-cooled paradigm. From the late 1980s through the early 2020s, rack densities climbed from roughly 1 kilowatt to the mid-teens, with most facilities topping out around 16 kilowatts per rack.

That gradual progression masked a deeper fragility. By 2024, AI-driven deployments were pushing sustained rack densities into the 44–50 kilowatt range and beyond—levels that fundamentally strain conventional mechanical and electrical systems. According to Patchett, this is not a matter of upgrading components at the margin. It represents a structural break.
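
A rough sensible-heat calculation illustrates the strain. The sketch below applies the common airflow rule of thumb (CFM ≈ 3.16 × watts ÷ ΔT°F) with an assumed 20°F supply-to-return delta; the rack sizes are chosen for illustration rather than quoted from the session.

# Back-of-envelope airflow needed to remove rack heat with air alone.
# Rule of thumb: CFM ~= 3.16 * watts / delta_T_F (sensible heat, near sea level).
# The 20 F air-side delta and the rack sizes are illustrative assumptions.

def required_cfm(rack_kw: float, delta_t_f: float = 20.0) -> float:
    """Approximate airflow (cubic feet per minute) to absorb rack_kw of heat."""
    return 3.16 * rack_kw * 1000 / delta_t_f

for rack_kw in (16, 50, 200):
    print(f"{rack_kw:>4} kW rack -> ~{required_cfm(rack_kw):,.0f} CFM")

# A 16 kW rack needs roughly 2,500 CFM, which conventional hot-aisle designs
# handle comfortably; at 50 kW the figure triples, and at 200 kW (~31,600 CFM)
# air delivery alone becomes impractical, pushing designs toward liquid cooling.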

An AI factory, as Patchett described it, is inherently multi-density. It must support traditional enterprise racks alongside 200-kilowatt, 800-kilowatt, and even megawatt-class AI configurations within the same campus or portfolio. That requirement alone breaks the logic of single-purpose facilities designed around narrow operating envelopes.

Crucially, the rise of AI factories does not eliminate the need for conventional data centers. Legacy workloads, cloud services, and enterprise IT will persist for decades. What is changing is that AI infrastructure is additive—and it demands its own rules.

Power, Scale, and the Geography of AI

Wes Cummins, chairman and CEO of Applied Digital, extended that argument from the building level to the campus and regional scale. As AI training and inference workloads grow, he argued, the industry’s definition of a “primary” data center market is being rewritten by power availability.

Applied Digital has focused on large-scale campuses in tertiary markets—locations once considered peripheral, now elevated by access to stranded or underutilized power resources. In some cases, that power comes from wind generation that cannot be efficiently transmitted to population centers. For AI infrastructure, proximity to abundant energy increasingly matters more than proximity to traditional cloud regions.

The pace of change has been dramatic. Where a 100-megawatt build once unfolded over 24 months, Cummins noted that customers now expect delivery in 12 to 14 months. At the same time, demand has leapt from tens of megawatts to 500 or 600 megawatts per campus, with gigawatt-scale discussions becoming commonplace.

This acceleration places enormous pressure on cost structure. Applied Digital’s strategy emphasizes ultra-low PUE, near-zero water consumption, and large land parcels that allow for overbuilt shells and future flexibility. In Cummins’ view, the cost of the building itself is no longer the dominant variable. Energy efficiency and operating cost over time are.
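
A simple cost sketch shows why. The 100-megawatt IT load and $0.05-per-kilowatt-hour rate below are illustrative assumptions rather than Applied Digital figures, but the sensitivity of the annual power bill to PUE is the point.

# Annual energy cost for a fixed IT load at different PUE levels.
# The IT load and electricity price are illustrative assumptions.
HOURS_PER_YEAR = 8760

def annual_energy_cost(it_load_mw: float, pue: float, usd_per_kwh: float) -> float:
    total_kwh = it_load_mw * 1000 * HOURS_PER_YEAR * pue
    return total_kwh * usd_per_kwh

for pue in (1.2, 1.35, 1.5):
    cost = annual_energy_cost(100, pue, 0.05)
    print(f"PUE {pue:.2f}: ~${cost / 1e6:,.1f}M per year")

# A swing of 0.3 in PUE moves the annual power bill by roughly $13 million at
# this scale, dwarfing differences in shell construction cost over the asset's life.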

That calculus also shapes risk allocation. Long-term leases, wholesale cost pass-throughs for major infrastructure changes, and aggressive reductions in traditional backup generation costs—particularly diesel—are becoming central to protecting returns in an environment where hardware cycles move far faster than real estate depreciation schedules.

Retrofitting AI Into the Colocation Model

While some AI factories are purpose-built on greenfield campuses, others are emerging inside existing colocation footprints. Kenneth Moreano, president and CEO of Scott Data, described how his company has adapted traditional facilities to support enterprise-grade AI deployments without abandoning the colocation model entirely.

Scott Data recently deployed a full-stack AI environment centered on 800 NVIDIA H100 GPUs, integrating compute, storage, networking, and service delivery into a unified offering. Rather than positioning AI as an isolated product, Moreano emphasized abstraction—shielding enterprise customers from infrastructure complexity while enabling them to move from pilot projects to production-scale AI.

In practical terms, that has meant achieving 70-kilowatt cabinet densities using rear-door heat exchangers, with 48 GPUs per rack, inside a 50,000-square-foot facility supported by a 20-megawatt plant. Moreano framed this as an outcome-driven approach: customers engage at the CIO or CTO level, focused on results, not rack-level engineering decisions.
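
A quick sanity check, assuming a PUE of 1.3 for the liquid-assisted environment, suggests how that cluster sits within the 20-megawatt plant; only the PUE is an assumption here, while the other figures come from the deployment described above.

import math

# Figures from the deployment above; the PUE is an illustrative assumption.
GPUS_TOTAL    = 800
GPUS_PER_RACK = 48
RACK_KW       = 70
PLANT_MW      = 20
ASSUMED_PUE   = 1.3

racks       = math.ceil(GPUS_TOTAL / GPUS_PER_RACK)    # 17 racks
it_load_mw  = racks * RACK_KW / 1000                    # ~1.2 MW of IT load
facility_mw = it_load_mw * ASSUMED_PUE                  # ~1.5 MW with cooling overhead

print(f"{racks} racks, ~{it_load_mw:.1f} MW IT, ~{facility_mw:.1f} MW total draw")
print(f"Remaining plant capacity: ~{PLANT_MW - facility_mw:.1f} MW")

# The H100 cluster consumes well under a tenth of the 20 MW plant, leaving
# substantial headroom for conventional colocation load and future AI expansion.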

A defining element of Scott Data’s strategy is vertical integration. AI infrastructure, mechanical and electrical systems, orchestration software, and services are all managed in-house. For regulated industries—government, healthcare, finance, and energy—this full-stack control enables multi-tenant deployments with strong auditability, security, and compliance.

Cooling as the First Design Constraint

If power availability sets the outer boundary of AI factory design, cooling defines its internal architecture. Patrick Pedroso, VP of Solutions Engineering at Equus Compute Solutions, argued that liquid cooling is no longer optional for heavy AI workloads—it is foundational.

Pedroso likened the industry’s transition to liquid cooling to the automotive shift away from air-cooled engines. Rear-door heat exchangers, direct-to-chip systems, and full immersion tanks are not competing concepts so much as points along a continuum, each suited to different workloads and operational models.

Equus is extending these approaches beyond hyperscale data centers into edge environments, including telecom towers traditionally limited to 4 to 4.5 kilowatts of load—more than half of which is often consumed by cooling alone. Liquid cooling, Pedroso noted, opens the door to energy recovery, with as much as 30 to 40 percent of waste heat potentially reused for edge inferencing or adjacent applications.
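
The arithmetic behind that edge scenario is straightforward. In the sketch below, the 4.5-kilowatt site budget, the roughly 50 percent cooling share, and the 30 to 40 percent heat-recovery range come from the discussion; the assumption that liquid cooling cuts cooling overhead by 80 percent is illustrative.

# Edge-site power budget with air cooling versus liquid cooling.
SITE_BUDGET_KW = 4.5
COOLING_SHARE  = 0.50   # share of budget consumed by air cooling today (from the panel)
COOLING_CUT    = 0.80   # assumed reduction in cooling overhead with liquid cooling

it_load_air    = SITE_BUDGET_KW * (1 - COOLING_SHARE)              # ~2.25 kW for compute
cooling_liquid = SITE_BUDGET_KW * COOLING_SHARE * (1 - COOLING_CUT)
it_load_liquid = SITE_BUDGET_KW - cooling_liquid                   # ~4.05 kW for compute

low, high = 0.30 * it_load_liquid, 0.40 * it_load_liquid
print(f"Compute headroom: {it_load_air:.2f} kW (air) -> {it_load_liquid:.2f} kW (liquid)")
print(f"Recoverable heat: ~{low:.1f}-{high:.1f} kW")

# Nearly doubling usable compute within the same power envelope, plus more than
# a kilowatt of reusable heat, is what makes inferencing at the tower plausible.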

The implication is broader than efficiency. AI workloads fluctuate, unlike the steady-state loads of traditional enterprise IT. Designing cooling systems that can adapt dynamically to these patterns is becoming as important as peak capacity.
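
A minimal load-following sketch, assuming a direct-to-chip water loop with a 10°C coolant temperature rise, shows what that adaptation might look like; the workload trace and flow targets are illustrative and not drawn from any vendor's control logic.

# Set coolant flow in proportion to measured rack power (load-following cooling).
# Physics: Q = m_dot * c_p * dT, with water at ~4.186 kJ/(kg*K) and ~1 kg per liter.

def pump_flow_lpm(rack_kw: float, delta_t_c: float = 10.0) -> float:
    """Coolant flow (liters per minute) needed to absorb rack_kw at a given delta-T."""
    kg_per_s = rack_kw / (4.186 * delta_t_c)
    return kg_per_s * 60.0

# AI training loads swing quickly between full-power steps and checkpoint pauses.
workload_kw = [120, 45, 130, 60, 125]
for kw in workload_kw:
    print(f"rack at {kw:>3} kW -> target pump flow ~{pump_flow_lpm(kw):.0f} L/min")

# An air plant sized for peak flow cannot throttle this cleanly; a liquid loop can
# track the load and bank the difference as recovered heat or reduced pump energy.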

Designing for Change, Not Longevity

Throughout the session, panelists returned to a shared critique of traditional data center finance and design assumptions. Facilities built as 25-year depreciable assets struggle to keep pace with hardware cycles measured in months. NVIDIA’s roadmap alone can introduce major architectural changes every six to nine months.
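
The arithmetic of that mismatch is stark, as the short sketch below illustrates using the figures cited in the session.

# Hardware generations within one building depreciation cycle.
# The 25-year window and 6-to-9-month cadence are the figures cited above.
DEPRECIATION_YEARS = 25
for cadence_months in (6, 9):
    generations = DEPRECIATION_YEARS * 12 / cadence_months
    print(f"{cadence_months}-month cadence: ~{generations:.0f} hardware generations per building life")

# Roughly 33 to 50 silicon turns against a single real estate asset is the gap
# that adaptable shells and reconfigurable infrastructure are meant to absorb.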

Patchett argued that AI factories must instead be conceived as assemblies of adaptable elements—structures capable of rapid reconfiguration as air volumes, pressures, temperatures, and power densities evolve. Construction timelines that stretch five years from land entitlement through commissioning are fundamentally misaligned with the pace of AI innovation.

Cummins echoed this point from an operator’s perspective. By overbuilding shells and deploying flexible “tech water loops” capable of supporting air-cooled or direct-to-chip systems, Applied Digital aims to decouple building longevity from infrastructure specificity. In markets with abundant land and low energy costs, that flexibility can be achieved without prohibitive capital premiums.

Consumer AI as the Demand Engine

The session’s strategic outlook widened further with a discussion of AI demand itself. Several panelists pointed to consumer adoption as the true driver of infrastructure urgency. ChatGPT’s rise to a billion daily queries in just two and a half years—a milestone that took Google more than a decade—was cited as evidence that AI has already crossed into mainstream usage.

Historically, transformative technologies—from the internet to the smartphone—gained traction with consumers before enterprises followed. The same pattern is playing out with AI, suggesting that today’s infrastructure buildout is only the opening phase of a much larger cycle.

This dynamic reinforces the case for speculative building. Where once data center developers hesitated to commit capital without pre-leasing, megawatt-scale AI capacity is now often sold out before completion. The risk, in the panelists’ view, lies less in overbuilding than in building the wrong thing.

Building the AI Factory Playbook

By the session’s conclusion, a new set of operating principles had emerged. AI factories must be flexible, multi-density, and power-centric. Liquid cooling must be designed in from day one. Site selection must prioritize energy availability over legacy market hierarchies. And business models must align infrastructure investment with AI service revenue, not just square footage.

Cummins emphasized the importance of assembling teams capable of thinking beyond traditional data center silos. Moreano underscored the value of abstracting complexity for customers while retaining full-stack operational control. Pedroso urged builders to treat cooling as the primary design constraint. Patchett offered the bluntest summary: the old playbook no longer works.

Taken together, the discussion framed AI not as a disruptive workload to be accommodated, but as a forcing function—one that is compelling the data center industry to rethink its assumptions about time, scale, and purpose. In that sense, the session’s title was less a declaration than a diagnosis.

AI is no longer coming. It is already here, and the factories required to support it are redefining the infrastructure landscape in real time.

 

At Data Center Frontier, we talk the industry talk and walk the industry walk. In that spirit, DCF Staff members may occasionally use AI tools to assist with content. Elements of this article were created with help from OpenAI's GPT-5.

 
Keep pace with the fast-moving world of data centers and cloud computing by connecting with Data Center Frontier on LinkedIn, following us on X/Twitter and Facebook, as well as on BlueSky, and signing up for our weekly newsletters.

About the Author

Matt Vincent

A B2B technology journalist and editor with more than two decades of experience, Matt Vincent is Editor in Chief of Data Center Frontier.
