At the Crossroads of AI and the Edge: Inside 1623 Farnam’s Rising Role as a Midwest Interconnection Powerhouse

As AI workloads surge and interconnection becomes the new battleground for digital infrastructure, 1623 Farnam is emerging as a pivotal Midwest aggregation hub. In this week’s DCF Show podcast, CEO Bill Severn explains how the Omaha facility is reshaping edge economics, scaling fiber capacity, and re-engineering for the early wave of AI deployments.
Nov. 25, 2025
9 min read

Key Highlights

  • Farnam's definition of the edge centers on content, networks, and user engagement, moving beyond traditional cell tower concepts.
  • The facility hosts over 40 broadband providers and 60 carriers, making it a critical aggregation point for regional and hyperscale traffic.
  • Significant fiber expansions driven by AI and multi-cloud demands are enhancing Farnam's capacity and regional importance.
  • The building's retrofit pairs sustainable cooling systems with water reuse, achieving a rolling 12-month PUE under 1.5 and exemplifying operational efficiency.
  • Partnerships with content providers and cloud operators are fueling growth, with plans to expand capacity and services into 2026 and beyond.

For years, the data center industry has wrestled with how to define the “edge.” But when Bill Severn, CEO of 1623 Farnam, took the stage at the 2025 Data Center Frontier Trends Summit in Reston, VA, for the panel The Distributed Data Frontier: Edge, Interconnection, and the Future of Digital Infrastructure, he offered a definition grounded in operational reality rather than abstraction: the edge is where eyeballs, networks, and content merge.

It’s a definition that both simplifies and elevates the conversation. The edge isn’t the cell tower. It isn’t a closet-level compute node. It’s the aggregation layer where content routes form, where latency-sensitive workloads anchor themselves, and where network density decides the efficiency of delivery. And increasingly, 1623 Farnam—in Omaha, Nebraska—is proving to be one of the places where this kind of edge actually materializes.

That was the thread that carried through our recent conversation for the DCF Show podcast, where Severn walked through the role Farnam now plays in AI-driven networking, multi-cloud connectivity, and the resurgence of regional interconnection as a core part of U.S. digital infrastructure.

Aggregation, Not Proximity: The Practical Edge

Severn is clear-eyed about what makes the edge work and what doesn’t. The idea that real content delivery could aggregate at the base of cell towers, he noted, has never been realistic. The traffic simply isn’t there. Content goes where the network already concentrates, and the network concentrates where carriers, broadband providers, cloud onramps, and CDNs have reached critical mass.

In Farnam’s case, that density has grown steadily since the building changed hands in 2018. At the time an “underappreciated asset,” the facility has since become a meeting point for more than 40 broadband providers and over 60 carriers, with major content operators and hyperscale platforms routing traffic directly through its meet-me rooms (MMRs). That aggregation effect feeds on itself; as more carrier and content traffic converges, more participants anchor themselves to the hub, increasing its gravitational pull.

Geography only reinforces that position. Located on the 41st parallel, the building sits along the historically shortest-distance path followed by early transcontinental fiber routes. It also lies at the crossroads of major east–west and north–south paths that have made Omaha a natural meeting point for backhaul routes and hyperscale expansions across the Midwest.

AI and the New Interconnection Economy

Perhaps the clearest sign of Farnam’s changing role is the sheer volume of fiber entering the building. More than 5,000 new strands are being brought into the property, with another 5,000 strands being added internally within the Meet-Me Rooms in 2025 alone. These are not incremental upgrades—they are hyperscale-grade expansions driven by the demands of AI traffic, multi-cloud distribution, and increased east-west data movement.

Severn remains bullish on interconnection for the next three to five years. Hyperscalers are already planning deployments several years out and, in some cases, accelerating projects originally slated for 2029 due to emerging regional power constraints. The focus, he explains, is not on CAPEX; companies fighting to win the AI race are not particularly constrained there.

Instead, the friction point is OPEX, where higher interconnection costs relative to retail colo or wholesale environments can elongate sales cycles. Even so, those delays tend to be temporary rather than structural. The projects proceed; they simply take longer to move to revenue.

Multi-Cloud Connectivity in the Midwest

Another area experiencing strong activity is multi-cloud interconnection. Local enterprises increasingly want flexibility and redundancy, often leveraging partners like Megaport to quickly connect into Google’s Central region or Microsoft ExpressRoute in Iowa.

Meanwhile, global enterprises bring a different dimension: their requirements hinge not just on cloud provider but on application-specific zone proximity. This leads them to seek dedicated routes into regions such as Google Cloud’s us-central1 or Microsoft’s West regions to balance latency, resilience, or compliance.

The defining trend is a growing sophistication in how regional and global organizations approach cloud adjacency—and the recognition that dense interconnection hubs like 1623 Farnam enable them to optimize routes without unnecessary backhaul or network hairpins.
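As a purely illustrative sketch of that route-optimization logic (not a description of any tool Farnam or its tenants actually use), the snippet below picks the best path to a target cloud region from a set of hypothetical candidates, preferring compliant, direct cross-connects over backhauled routes; the path names and latency figures are invented.

```python
# Hypothetical illustration of the route-selection logic described above:
# given candidate paths from an Omaha interconnection hub to a cloud region,
# prefer compliant, direct cross-connects over backhauled routes.
# Path names and latency figures are invented for illustration only.

from dataclasses import dataclass

@dataclass
class CloudPath:
    name: str
    region: str
    rtt_ms: float        # measured round-trip time
    backhauled: bool     # True if traffic hairpins through a distant hub
    compliant: bool      # meets the workload's data-residency requirements

def pick_path(paths: list[CloudPath], region: str) -> CloudPath:
    candidates = [p for p in paths if p.region == region and p.compliant]
    # Direct routes win first; among equals, lowest latency wins.
    return min(candidates, key=lambda p: (p.backhauled, p.rtt_ms))

paths = [
    CloudPath("direct-xconnect-us-central1", "us-central1", 4.2, False, True),
    CloudPath("backhaul-via-chicago", "us-central1", 11.8, True, True),
]
print(pick_path(paths, "us-central1").name)  # direct-xconnect-us-central1
```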

A 1974 Office Building Turned Efficient Interconnection Engine

It’s easy to forget that the building powering all this activity was constructed in 1974 as an office tower with a bank on the ground floor. Today, it is an 80,000-square-foot interconnection hub with a rolling 12-month PUE of under 1.5—an impressive figure for a retrofit.

That efficiency story rests on several pillars. The facility sources its chilled water from a provider running on 100 percent renewable energy. Its closed-loop cooling system allows the property to reuse water with less than one percent loss, effectively operating as a near-zero water-consumption site.

Rooftop fluid coolers enable free cooling whenever Omaha temperatures dip below 50 degrees Fahrenheit, allowing operators to shut down commercial chilled water systems entirely during cool-weather cycles. Inside the building, engineers continually fine-tune flow settings, heat rejection patterns, and mechanical operating points.

The cumulative effect of these “little things,” Severn emphasized, has a meaningful impact on overall efficiency and operational cost.
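For readers who want to see the arithmetic behind those figures, here is a minimal sketch of how a rolling PUE and free-cooling hours might be estimated from metered data. The PUE formula (total facility energy divided by IT energy) is standard practice and the 50 degrees Fahrenheit threshold comes from the article; every other number below is invented for illustration.

```python
# Illustrative only: estimating a rolling PUE and free-cooling hours from
# metered data. The 50 degF free-cooling threshold comes from the article;
# the energy totals and temperature series below are invented.

def pue(total_facility_kwh: float, it_kwh: float) -> float:
    """PUE = total facility energy / IT equipment energy."""
    return total_facility_kwh / it_kwh

# Hypothetical 12-month energy totals (kWh)
it_load_kwh = 8_760_000      # roughly 1 MW of IT load running for a year
facility_kwh = 12_600_000    # IT load plus cooling, electrical losses, lighting
print(f"Rolling 12-month PUE: {pue(facility_kwh, it_load_kwh):.2f}")   # ~1.44

# Hypothetical hourly dry-bulb temperatures (degF) for one week in Omaha
hourly_temps_f = [28, 31, 35, 42, 48, 51, 55, 49, 44, 38, 33, 30] * 14
free_cooling_hours = sum(1 for t in hourly_temps_f if t < 50)
print(f"Free-cooling hours this week: {free_cooling_hours} of {len(hourly_temps_f)}")
```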

Partnerships as a Growth Engine for 2026 and Beyond

The conversation turned naturally toward the future, where Severn expects partnership-driven growth to play an outsized role.

CDNs in particular face difficulty justifying new deployments based solely on aggregated eyeballs in certain regional markets. Farnam’s approach has been to co-invest with the right partners and help them bootstrap capacity that quickly benefits both parties.

One example stemmed from a strategic investment in a content company, which was initially provided a 100 Gbps port on the facility’s internet exchange (IX). Within a short time, that presence expanded into several cabinets and more than 600 Gbps of traffic, with plans for additional capacity underway.

The success of that collaboration is shaping Farnam’s approach for 2026, where similar partnerships are expected to add more strategic content to the facility.

AI Arrives—And the Physical Constraints Get Real

AI workloads are no longer future-facing at Farnam—they are arriving in volume. But AI also exposes the limitations of a non–purpose-built structure. Cabinet weight limits, for example, range from 2,500 to 2,800 pounds, while some AI deployments require 5,000-pound racks. The service elevator tops out at 47 cabinets; certain deployments request 48. And cabinet power, traditionally provisioned at around 40 kW, must now adapt to requests for 50 kW.

This has turned AI onboarding into an exercise in collaborative engineering. Farnam’s team works with each AI client to re-engineer footprints, adjust load paths, evaluate structural tolerances, and redesign deployments around the building’s capabilities. The industry may still be in the early innings of AI adoption, but the activity inside Farnam already surpasses early expectations.
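As a loose illustration of that kind of constraint checking (not Farnam's actual engineering process), the sketch below flags where a requested AI rack exceeds hypothetical per-cabinet limits drawn from the figures above, then estimates how many lighter cabinets the same load would need to be spread across.

```python
import math

# Rough illustration of onboarding checks for a dense AI deployment in a
# retrofit building. Limits mirror figures cited in the article but are
# treated as hypothetical placeholders; this is not Farnam's actual process.

CABINET_WEIGHT_LIMIT_LBS = 2_800   # upper end of the cited per-cabinet range
CABINET_POWER_LIMIT_KW = 40        # traditional per-cabinet provisioning

def violations(rack_weight_lbs: float, rack_power_kw: float) -> list[str]:
    issues = []
    if rack_weight_lbs > CABINET_WEIGHT_LIMIT_LBS:
        issues.append(f"weight {rack_weight_lbs:,.0f} lbs exceeds {CABINET_WEIGHT_LIMIT_LBS:,} lbs")
    if rack_power_kw > CABINET_POWER_LIMIT_KW:
        issues.append(f"power {rack_power_kw} kW exceeds {CABINET_POWER_LIMIT_KW} kW")
    return issues

# A requested 5,000 lb, 50 kW AI rack trips both limits...
print(violations(5_000, 50))

# ...so one re-engineering option is to spread the same IT load across more,
# lighter cabinets that the floor and power distribution can actually support.
racks_needed = max(math.ceil(5_000 / CABINET_WEIGHT_LIMIT_LBS),
                   math.ceil(50 / CABINET_POWER_LIMIT_KW))
print(f"Spread across at least {racks_needed} cabinets within building limits")
```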

The Road Ahead: Fiber, Partnerships, and the Midwest’s Rising Profile

Looking ahead to 2026, Severn sees continued investment in fiber expansion, deeper collaboration with AI and content partners, and an increasingly strategic role for Omaha within the nationwide interconnection fabric. The Midwest is becoming a vital region for linking hyperscale development with distributed AI inference, and 1623 Farnam sits uniquely positioned to serve both.

As AI reshapes the digital landscape and interconnection once again takes center stage, Farnam’s story offers a clear example of how geography, engineering, and ecosystem partnerships are collectively writing the next chapter in America’s regional edge.


At Data Center Frontier, we talk the industry talk and walk the industry walk. In that spirit, DCF Staff members may occasionally use AI tools to assist with content. Elements of this article were created with help from OpenAI's GPT-5.

Keep pace with the fast-moving world of data centers and cloud computing by connecting with Data Center Frontier on LinkedIn, following us on X/Twitter and Facebook, as well as on BlueSky, and signing up for our weekly newsletters using the form below.

About the Author

Matt Vincent

A B2B technology journalist and editor with more than two decades of experience, Matt Vincent is Editor in Chief of Data Center Frontier.
