DCF Trends Summit 2025: The Distributed Data Frontier - Edge, Interconnection, and the Future of Digital Infrastructure

The fifth in a series recapping key sessions from the Data Center Frontier Trends Summit 2025, held Aug. 26–28, 2025, in Reston, Va.
Dec. 17, 2025

Key Highlights

  • AI inference workloads are driving demand for higher power densities and more distributed data center architectures, especially at the edge and regional hubs.
  • Connectivity remains a key constraint, with fiber, conduit, and right-of-way challenges impacting deployment timelines and capacity upgrades.
  • Emerging models for connectivity include hyperscalers controlling end-to-end paths, data centers offering connectivity as a service, and open models enabling independent sourcing.
  • Legacy infrastructure and permitting delays complicate development, prompting a shift toward greenfield sites with ample power and low-latency networks.
  • Early collaboration among utilities, fiber providers, and data center developers is essential to streamline deployment and support future growth.

As AI workloads push data center architecture in opposite directions at once—toward massive centralized campuses on one end and latency-sensitive distributed infrastructure on the other—the industry is being forced to rethink where data lives, how it moves, and who owns the connective tissue in between.

That tension was at the center of “The Distributed Data Frontier: Edge, Interconnection, and the Future of Digital Infrastructure,” a wide-ranging discussion at the 2025 Data Center Frontier Trends Summit, moderated by Scott Bergs, CEO of Dark Fiber & Infrastructure, LLC (DF&I). The panel brought together executives whose businesses sit at different—but increasingly interdependent—points along the distributed infrastructure continuum.

Joining Bergs were Scott Willis, CEO of DartPoints; Bill Severn, President and CEO of 1623 Farnam; Doug Recker, Founder and President of Duos Edge AI; and Jim Buie, CEO of ValorC3 Data Centers. Together, they offered a grounded look at how edge facilities, interconnection hubs, and regional data centers are adapting to higher power densities, AI inference workloads, and mounting connectivity constraints.

From Oversubscription to Density Reality

Several panelists began by grounding the conversation in how dramatically power assumptions have shifted, even outside hyperscale AI campuses.

Severn noted that when 1623 Farnam retrofitted its Omaha facility seven years ago, average cabinet density was still closer to 5 kW, and many customers consumed far less than what they had contracted. “Clients were buying 3 to 5 kW and using 1 or 2,” he said, reflecting a long era of oversubscription models that masked real infrastructure limits.
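The arithmetic behind that oversubscription era is easy to sketch. Here is a minimal illustration in Python, using hypothetical per-cabinet figures within the ranges Severn described:

```python
# Hypothetical row of cabinets from the oversubscription era Severn described:
# contracted at 3-5 kW each, actually drawing 1-2 kW.
contracted_kw = [3, 5, 4, 5, 3]    # assumed contracted capacity per cabinet
actual_kw = [1, 2, 1.5, 2, 1]      # assumed measured draw per cabinet

total_contracted = sum(contracted_kw)   # 20 kW
total_actual = sum(actual_kw)           # 7.5 kW
utilization = total_actual / total_contracted

print(f"Contracted: {total_contracted} kW, drawn: {total_actual} kW")
print(f"Utilization: {utilization:.0%}")   # ~38% of contracted capacity
```

At utilization like that, operators could safely sell the same electrical capacity several times over. The densities described next erase that cushion.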

That era is ending quickly.

At DartPoints, Willis explained that while much of the company’s legacy footprint supports 5–6 kW per rack, new expansions are being designed for a very different future. Enterprise workloads are now commonly landing in the 12–20 kW range, private cloud deployments push closer to 30 kW, and early AI inference customers are already asking for 35 kW per rack. New DartPoints builds are targeting 50–80 kW, with some designs stretching toward 120 kW to stay ahead of demand.

Buie said ValorC3’s current designs center on 20 kW per cabinet, but HPC and inference workloads are already driving customer conversations toward 40–60 kW. Even so, he emphasized that most enterprise environments are not approaching the extreme densities seen in hyperscale AI training facilities.
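Those figures translate directly into floor space and electrical design. A back-of-envelope sketch, assuming a notional 1 MW block of critical IT load and the per-rack densities cited on the panel, shows how quickly rack counts compress as density climbs:

```python
# Racks supportable per 1 MW of critical IT load at panel-cited densities.
# Illustrative only: ignores redundancy, cooling limits, and stranded capacity.
CRITICAL_LOAD_KW = 1000   # assumed 1 MW planning block

densities_kw_per_rack = {
    "legacy footprint": 5,
    "enterprise workloads": 15,    # midpoint of the 12-20 kW range
    "private cloud": 30,
    "AI inference requests": 35,
    "new-build target": 65,        # midpoint of the 50-80 kW range
    "stretch design": 120,
}

for workload, kw in densities_kw_per_rack.items():
    racks = CRITICAL_LOAD_KW // kw
    print(f"{workload:>22}: {kw:>3} kW/rack -> {racks:>3} racks per MW")
```

The same megawatt that once fed 200 legacy racks feeds roughly eight at the stretch densities DartPoints is designing toward.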

Recker echoed that view from the edge. Duos Edge AI focuses on modular micro data centers in tier 3 and tier 4 markets, typically designed for 5–10 kW per rack. Certain use cases—particularly healthcare—are pushing higher, with some hospitals requiring 20–30 kW. But Recker was clear that Duos Edge is not chasing 100 kW cabinets. “Our value is connectivity and locality,” he said, not extreme density.

For Severn, legacy building constraints further sharpen that distinction. Housed in a 1970s-era bank building, 1623 Farnam faces physical limits that make hosting ultra-dense AI cabinets impractical. Today, even interconnection nodes that once required 5 kW now demand 8–14 kW due to AI-driven networking gear. Rather than fighting the building, Severn said the facility is doubling down on its role as an interconnection hub, particularly for inference nodes that generate significant cross-connect demand.

AI Inference Pushes Compute Outward

While training remains largely centralized, the panel agreed that AI inference is becoming a powerful driver of distributed infrastructure.

Willis described early-stage inference customers deliberately deploying closer to users to reduce latency and improve network efficiency. Recker pointed to real-world applications—AI-enabled drone monitoring for agriculture, computer vision for regional police departments, and healthcare platforms supporting telemedicine—that benefit more from proximity than raw compute scale.
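The proximity argument is, at bottom, physics. A rough sketch of round-trip fiber latency, assuming propagation of about 200 km per millisecond in glass and a hypothetical 1.4x route-inflation factor (real fiber paths are rarely straight lines), makes the case:

```python
# Back-of-envelope round-trip propagation delay over fiber.
# Assumes ~200 km/ms in glass and a hypothetical 1.4x route-inflation
# factor; ignores switching, queuing, and serialization delay.
FIBER_KM_PER_MS = 200
ROUTE_FACTOR = 1.4

def rtt_ms(straight_line_km: float) -> float:
    """Estimated round-trip propagation time, in milliseconds."""
    return 2 * straight_line_km * ROUTE_FACTOR / FIBER_KM_PER_MS

for km in (10, 100, 500, 1500):   # in-market edge vs. regional hub vs. distant core
    print(f"{km:>5} km -> ~{rtt_ms(km):.1f} ms RTT")
```

Against interactive inference budgets measured in tens of milliseconds, the gap between an in-market edge site and a distant core campus is material before a single packet is queued.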

Hospitals, in particular, emerged as a recurring example. In tier 3 and tier 4 markets, healthcare systems are adopting AI platforms that require higher-density racks than traditional enterprise IT, but still demand local processing for reliability, latency, and regulatory reasons.

Importantly, several panelists pushed back on the idea that inference automatically requires extreme power densities. Many AI workloads, they argued, can be effectively deployed at moderate densities when paired with the right connectivity.

That dynamic plays directly to the strengths of interconnection-focused facilities. Severn noted that a single inference node deployment at 1623 Farnam can generate dozens of cross-connects—35 in one recent case—underscoring how AI is reshaping revenue models even in legacy buildings.
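The revenue math is straightforward to illustrate. Pricing was not discussed on the panel, so the monthly recurring charge below is a hypothetical placeholder:

```python
# Illustrative cross-connect revenue from a single inference deployment.
# The count is the one Severn cited; the $300 MRC is an assumed placeholder.
cross_connects = 35
hypothetical_mrc_usd = 300   # assumed monthly recurring charge per cross-connect

monthly = cross_connects * hypothetical_mrc_usd
print(f"Monthly: ${monthly:,}; annual: ${monthly * 12:,}")
# Monthly: $10,500; annual: $126,000
```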

Connectivity as the Defining Constraint

If power density is the visible pressure point, connectivity emerged as the underlying constraint tying the conversation together.

“Data centers are just big hot warehouses without connectivity,” Bergs observed, setting the tone for a discussion that repeatedly returned to fiber, conduit, and right-of-way challenges.

The panel agreed that middle-mile connectivity is one of the industry’s most underappreciated bottlenecks. Hyperscalers increasingly require dedicated conduit and cable systems between campuses—assets that often don’t exist and can take 24 months or more to build. Permitting delays, congested rights-of-way, and understaffed agencies routinely stretch timelines.

Coordination with power utilities adds another layer of complexity. As Bergs noted, transmission and communications infrastructure often fall under different land rights and regulatory frameworks, even when they need to occupy the same corridors.

Severn said 1623 Farnam is actively working with its more than 60 carrier partners to encourage upgrades to 400 Gbps capacity. Carriers unable to meet those requirements risk being sidelined as customers demand higher bandwidth and lower latency paths.

Competing Connectivity Models Take Shape

As inference workloads proliferate, the panel outlined several emerging models for how connectivity is delivered and monetized.

Severn described four primary approaches: hyperscalers that control end-to-end network paths; data centers that offer connectivity as a value-added service; application providers that bundle connectivity with platforms; and open models where customers source connectivity independently.

DartPoints, Willis said, most often sees inference customers bring their own connectivity, particularly when integrating with broader enterprise or cloud networks. Duos Edge, by contrast, aims to own the long-haul connectivity back to core data centers and sell access to customers within its modular facilities.

Buie said ValorC3 sees opportunity for data center operators to own fiber assets outright, including metro Ethernet rings, while still maintaining network neutrality. Severn emphasized that neutrality remains central to 1623 Farnam’s value proposition, with fiber ownership limited to site-to-site connectivity rather than broader carrier competition.

Despite their differences, all agreed that early carrier engagement is now essential. Failing to provision enough conduit, vaults, and meet-me room capacity at the outset can delay go-live dates by months—or strand future growth entirely.

Time to Market and the Legacy Balancing Act

Across business models, time to market surfaced as the most immediate operational challenge.

Permitting delays, long lead times for equipment like switchgear, and unexpected complications with utilities and landowners are stretching development cycles. Several panelists said pre-ordering long-lead equipment has become a necessity rather than a hedge.

Legacy infrastructure complicates matters further. Operators must continue serving existing customers while building for higher-density future workloads, often within the same footprint. Severn likened managing space in a multi-story, multi-hall legacy building to playing Tetris, as clearance requirements and cage layouts create stranded capacity.

Site selection increasingly favors greenfield development, with ample power, campus-style acreage, and low-latency network access. Brownfield sites, while sometimes attractive for speed or location, often impose limits that become liabilities as requirements evolve.

Investor scrutiny adds another layer of pressure. As Buie noted, data centers are often conceived as 100-year buildings, but their electrical and cooling systems may need replacement every 20 years—or sooner. Designing for upgradability has become a prerequisite, not a nice-to-have.

Planning Earlier, Together

In closing, the panel returned to a theme that cut across every topic: the need for earlier, deeper collaboration.

Communication infrastructure, several panelists argued, must be planned alongside power and zoning—not bolted on after the fact. Better coordination between utilities, fiber providers, and data center developers could ease right-of-way conflicts and shorten deployment timelines.

Recker said Duos Edge plans to act on these lessons directly, aggressively pursuing acquisitions of 1–2 MW facilities in tier 2 markets to build out a hub-and-spoke edge network.

The broader takeaway was clear. As AI pushes compute both inward and outward, the future of digital infrastructure will be defined less by individual facilities than by the networks that bind them together. In that environment, edge sites, interconnection hubs, and regional data centers are no longer peripheral—they are foundational.

And the distributed data frontier, the panel made clear, is already taking shape.


At Data Center Frontier, we talk the industry talk and walk the industry walk. In that spirit, DCF Staff members may occasionally use AI tools to assist with content. Elements of this article were created with help from OpenAI's GPT-5.

Keep pace with the fast-moving world of data centers and cloud computing by connecting with Data Center Frontier on LinkedIn, following us on X/Twitter and Facebook, as well as on BlueSky, and signing up for our weekly newsletters.

About the Author

Matt Vincent

A B2B technology journalist and editor with more than two decades of experience, Matt Vincent is Editor in Chief of Data Center Frontier.
