Data Center Fiber at Gigawatt Scale: A Talk With Quantum Loophole CEO Josh Snowhorn

April 21, 2023
The Cabling Podcast sits down with Josh Snowhorn, founder and CEO at Quantum Loophole, to discuss the massive QLoop fiber ring project under the Potomac.

Data centers are more than the sum of their parts, and there are far more pieces in play than most people outside the industry realize. Connectivity is one of the key functions external to the data center that enables effective operation, and as such we devote significant coverage to both the technology and the industry that supports it.

Josh Snowhorn is founder and CEO at Quantum Loophole, an operator of data center campuses at gigawatt scale. At DCF we've been providing regular updates on Quantum Loophole and its vision for "data center cities" since the company came out of stealth in early 2021. The company says it has already leased a massive 240 megawatts of capacity at its campus in Adamstown, Maryland.

Central to the entire undertaking is QLoop, the 43-mile hyperscale fiber ring connecting Quantum Loophole's 2,100+ acre data center development site in Frederick County, Maryland to the Data Center Alley connectivity ecosystem around Ashburn, Virginia.

In a recent edition of The Cabling Podcast, our sister publication Cabling Installation & Maintenance checked in with Snowhorn for an update on the construction of data center and fiber conduit infrastructure.

Snowhorn said the project is the largest medium-haul fiber backbone ever created. The QLoop network ring offers capacity for more than 200,000 strands of fiber connecting to the Ashburn ecosystem in under half a millisecond round-trip time (RTT). "And we are bolstering that with some pretty amazing cross-connect capabilities," he added. "Each property will have access to conduits and thousands of strands of fiber directly into the QLoop system to enable seamless, private and secure connectivity for all of our campus-wide customers."
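
For a rough sense of what a sub-half-millisecond RTT implies, the sketch below estimates round-trip time from route length using a typical single-mode fiber propagation delay of roughly 4.9 microseconds per kilometer. The figures are assumptions for illustration, not QLoop measurements; actual latency depends on the exact route, splices, and equipment. At that rate, a 0.5 ms round trip corresponds to a one-way fiber path of about 50 km, comfortably covering a run from Frederick County to Ashburn.

# Back-of-envelope RTT estimate for a fiber route (assumed values, not QLoop measurements).
PROPAGATION_US_PER_KM = 4.9  # roughly c divided by the ~1.47 refractive index of single-mode fiber

def rtt_ms(route_km: float) -> float:
    """Round-trip time in milliseconds over a fiber path of route_km kilometers."""
    return 2 * route_km * PROPAGATION_US_PER_KM / 1000

# Example: an assumed ~35 km fiber path, roughly Adamstown, MD to Ashburn, VA as a conduit might run
print(f"RTT over 35 km: {rtt_ms(35):.2f} ms")  # ~0.34 ms
# Longest one-way path that still fits a 0.5 ms RTT budget:
print(f"Max path for 0.5 ms RTT: {0.5 * 1000 / (2 * PROPAGATION_US_PER_KM):.0f} km")  # ~51 km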

Here's the podcast with CIM host Matt Vincent, followed by a summary of the key discussion points.

Gigawatt-Scale Data Center Fiber

A frequent speaker at industry conferences, Snowhorn's key founding and executive positions include time at Terremark, Verizon, Cincinnati Bell, and CyrusOne. Snowhorn founded the Global Peering Forum, the annual meeting for the Internet interconnection and peering community, where he serves on the board of directors. He also serves on the advisory board of Telescent, a maker of automated data center interconnection machines.

As the discussion begins, Snowhorn provides an update on (1:46) the underground fiber infrastructure linking the Network Center 1 and 2 sites at Quantum Loophole's Frederick data center campus, "which are kind of like giant, horizontal meet-me rooms, multiple acres themselves. We have up to 60 conduits in some cases interconnecting the two network centers."

Of the QLoop (2:27) fiber ring project, Snowhorn adds:

"We have completed our south Potomac river boring, which was over 3000 ft., and it goes 91 feet below the bedrock of the Potomac. That was a huge, 26-in. HDPE sleeve that was pulled through, and then 34 two-inch ducts inside of that. That's been completed and approved and vaults have been put in place. Terrestrial construction has started with multiple crews working laying in the 34 ducts, buried deeply to accomodate the most extreme security standards."

Of his company's partnership with Aligned Data Centers, Snowhorn noted that (3:43) "at Quantum Loophole, we don't build data centers ourselves. We supply land, energy, water, and fiber, or conduit. We call those 'the elements of the data center business.' Aligned is our first customer, they have closed on the acquisition of 75 acres of land, and I believe they're well underway with their permitting process and preparing to construct, hopefully, their first building by the end of the year."

Snowhorn also discussed (4:24) the origin and founding of his company.

"The impetus for creating Quantum Loophole is really that we saw a gap in the industry -- there wasn't anybody providing just those baseline services, and entitlement preparation of site so others can go and just have an 'easy button' to build their data centers. We felt like we had a unique offering and it's proven to be true, we have lots of company demand, and lots of big-scale entities going in. We've sold several conduits already, so that's already well underway."

At the root of Quantum Loophole's stated "dig once" approach, Snowhorn said (5:20):

"You're really trying to just prevent cuts, prevent what happens in the famous backhoe pictures you see, where they tear up a bunch of duct. In our case, we can hold over 235,000 strands of fiber in our primary system; the whole thing is designed for 6912-fiber trunks -- so imagine a backhole pulling that up and the splicing hell that would ensue afterwards to repair it. So dig once means, put it all in place and hopefully we don't overbuild, and hopefully we don't underbuild."

Splicing Skills and Automated Connections

Later in the interview (6:36), Matt asks how working with a high-fiber-count cable like 3456 differs from working with 144-fiber cable (which the industry used to consider high-fiber-count), and whether installers had to "level up" their splicing or other installation skills to get comfortable working with such high-fiber-count cable.

QLoop's press coverage mentions (7:51) that its network infrastructure is built with radii that will accommodate 6912-fiber cable. In response, we asked Snowhorn whether his teams have worked with that fiber count yet and, if so, whether there's been a noticeable difference between it and 3456-fiber cable.

Snowhorn also shared (9:07) his insights on the value proposition of automated, robotic cross-connects powered by Telescent, as installed at Quantum Loophole's Maryland data center campus.

"It really comes out of a need to change how people do interconnection. Your classic way of doing it is either truck rolls to a remote hut, or having 24x7 staff with tickets open, and hopefully no RX/TX reversals in play. But when you start thinking about the scale of what we're building, with more fiber strand count coming into a single location than anyone's ever seen, that really starts to bring in a need for change. The Telescent machines do something quite unique in that we can give the control of the interconnection to the client, so they can use a portal via an API, log in, and enable a cross-connect in 2 minutes without ever touching it as long as the machine has been pre-patched.

"The bigger picture is that they have the same machine on their campus and in their building, and then maybe they have the same machine located in Ashburn and Manassas, let's say 20 buildings, and they want to enable an interconnection. The machine has a built-in OTDR; they can punch light out, verify the connection, and do 20 connections at the same time, all in 2 minutes, and batch a job. They can do 1,000 connections in 2,000 minutes theoretically, across 20 locations, without a human being touching it. That's game-changing. That really creates flexibility for outages, hopefully a reduction in truck rolls and labor costs, and more rapid delivery of interconnection."
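
To make that workflow concrete, here is a purely illustrative sketch of what an API-driven, batched cross-connect request might look like from a customer's side. The endpoint, fields, and helper functions are hypothetical examples invented for this sketch; they are not Telescent's actual API or Quantum Loophole's portal.

import concurrent.futures
import requests  # any HTTP client would do; requests is assumed to be installed

API_BASE = "https://interconnect.example.com/v1"  # hypothetical endpoint, not a real service

def request_cross_connect(location: str, a_port: str, z_port: str, token: str) -> dict:
    """Ask the robotic patch panel at one location to connect two pre-patched ports."""
    resp = requests.post(
        f"{API_BASE}/locations/{location}/cross-connects",
        json={"a_port": a_port, "z_port": z_port, "verify_with_otdr": True},
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

def batch_cross_connects(jobs: list[tuple[str, str, str]], token: str) -> list[dict]:
    """Submit one cross-connect per location in parallel, mirroring the 'batch a job' idea."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=20) as pool:
        futures = [pool.submit(request_cross_connect, loc, a, z, token) for loc, a, z in jobs]
        return [f.result() for f in futures]

# Example: enable the same A-to-Z connection across 20 buildings in one batch.
# jobs = [("ashburn-07", "panel3/port12", "panel9/port4"), ("manassas-02", "panel1/port8", "panel4/port2"), ...]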

Because QLoop is such a notably large outside- and inside-plant undertaking for data center construction, to close out the podcast (11:30) we asked Snowhorn to recap the project's specifications and preview what's in store for the rest of the year. In response, Snowhorn said:

"It's literally hundreds of sites, hundreds of data centers, but we don't actually touch those other data centers once we get into Loudon County. We cross the Potomac, which is insanely hard to do: I now know why nobody tries to do it, becuase it's that hard. It's just been nothing but a struggle to get it done, but we're doing it. I don't think anybody's going to try and do it again for a long time. Machines blowing up, costs, the approvals -- going 9 stories below the bedrock of the Potomac is insane. The costs are through the roof -- I cannot think of a single thing that was easy about what we've done.

"We drop down [and] have over 500 vaults on the 43-mile ring, and those vaults are designed to create a massive intersection of splice points, so that people can tie into the system. We're a wholesaler to the wholesalers, so our goal was not to go build throughout the entire Ashburn corridor and interconnect every building and be another competitive carrier. We wanted to be a support mechanism to create an expansion of that ecosystem."
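
Taken at face value, 500-plus vaults on a 43-mile ring works out to an average of roughly 450 feet between splice-access points. This is a rough average for illustration only, since the exact vault count and placement along the route are not stated.

RING_MILES = 43
NUM_VAULTS = 500  # "over 500" per the interview; exact count not stated

avg_spacing_ft = RING_MILES * 5280 / NUM_VAULTS
print(f"Average vault spacing: ~{avg_spacing_ft:.0f} ft")  # ~454 ft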

This article originally appeared on Cabling Installation & Maintenance, an Endeavor Business Media partner site. Additional reporting was provided by Matthew Vincent, Senior Editor, Cabling Installation & Maintenance.

About the Author

DCF Staff

Data Center Frontier charts the future of data centers and cloud computing. We write about what’s next for the Internet, and the innovations that will take us there.
