The Potential of Dynamic Fiber Cross Connects in a Data Center Campus

July 6, 2020
Bob Shine, VP of Marketing and Product Management at Telescent, explores how dynamic fiber cross connects can handle and scale the growing number of cross connects in a data center campus.

Bob Shine, VP of Marketing and Product Management, Telescent

Data traffic has grown exponentially due to the proliferation of mobile phones and new data-intensive applications.  While often invisible to the end user, multi-tenant data centers are a key component in providing access to this range of data and applications.  Each icon on your smartphone represents access to data from a different enterprise, likely stored in data centers spread across the country, with connectivity provided by a range of cloud or carrier service providers.

As demand for connectivity has grown with the proliferation of new applications, what might have started as a single multi-tenant data center building in a city has grown into a campus of multiple buildings.  In some of the most popular data center locations, such as Ashburn, Va., or Frankfurt, Germany, a campus can include more than 10,000 cross connects spread over 10 or more buildings.  And while a cross connect nominally refers to a connection between two companies, if company A is in Building 2 on the campus while company Z is in Building 8, implementing the cross connect between A and Z will require multiple individual connections.  It is not uncommon to require six hops or more to complete the cross connect between the two customers.

Implementing these cross connections is still often done manually, involving many steps and individuals: processing the work order, obtaining a cable, identifying the physical location(s) on the patch panel, testing the connection and finally documenting the changes in an offline database.  These steps are why a typical service level agreement (SLA) for implementing a cross connect can range from three to seven days.  Dynamic fiber cross connect systems are now available that can remotely and automatically perform a cross connect in minutes, allowing significant OpEx savings for the data center operator.

Figure 2: With its high-port-count, pay-as-you-grow design, only a single Telescent NTM is required to offer 1,000 connections. (Image: Telescent)

To address this need, several companies have developed semi-automatic or fully automatic cross connect solutions.  The number of cross connects per system ranges from 144 or 360 ports up to the largest system, which offers 1,008 ports.  As would be expected, scaling to 10,000 cross connects in a campus environment is much easier with the largest-port-count, fully reconfigurable systems.

To understand the scaling of dynamic fiber cross connects (DFCC) across a campus, the first step is to consider a single meet-me-room (MMR) in the data center.  For discussion purposes, let's assume an MMR with 1,000 connections within the building.  With a low-port-count system, multiple units will of course be required, and the complexity grows significantly because any-to-any connectivity among customers is a firm requirement; connectivity, after all, is the key value the data center offers.  Using a leaf-spine network architecture to maintain any-to-any connectivity, up to half the ports in every system must be devoted to trunking between systems and are not available for customer cross connects.  As a specific example, providing any-to-any connectivity using a 200-port DFCC requires 15 systems in a leaf-spine architecture, as shown in Figure 1.

Figure 1: The complexity of serving an MMR with just 1,000 connections using a smaller-port-count DFCC shows the need for a >1,000-port system: allowing any-to-any connectivity with a 200-port DFCC requires 15 systems in a leaf-spine architecture. (Image: Telescent)
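To make the arithmetic concrete, the short Python sketch below estimates how many DFCC systems a two-tier leaf-spine fabric needs for a given number of customer connections.  It assumes, as in the example above, that each leaf dedicates half of its ports to trunking toward the spine; it is an illustration of the sizing logic, not a Telescent planning tool.

```python
import math

def leaf_spine_system_count(connections: int, ports_per_system: int) -> dict:
    """Estimate leaf and spine DFCC counts for any-to-any connectivity.

    Assumes a two-tier leaf-spine fabric in which each leaf dedicates
    half its ports to customers and half to trunks toward the spine.
    """
    customer_ports_per_leaf = ports_per_system // 2       # half reserved for trunking
    leaves = math.ceil(connections / customer_ports_per_leaf)
    trunk_ports = leaves * (ports_per_system - customer_ports_per_leaf)
    spines = math.ceil(trunk_ports / ports_per_system)     # spine ports carry only trunks
    return {"leaf": leaves, "spine": spines, "total": leaves + spines}

# A 1,000-connection meet-me-room built from 200-port DFCCs:
print(leaf_spine_system_count(1_000, 200))   # {'leaf': 10, 'spine': 5, 'total': 15}
```

A single 1,008-port system serving the same room needs no spine tier at all, since every connection terminates on one fully non-blocking switch.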

In contrast, a system like the Telescent NTM with 1,008 connections can handle the cross connects in a single meet-me-room very easily, needing just one system for the example above.  The Telescent NTM also allows a pay-as-you-grow design in cases where the MMR is not yet fully populated.  The system can be configured with a smaller number of ports and then expanded when the need for more cross connects arises.  This expansion can be done without affecting traffic on the existing cross connects and preserves full any-to-any connectivity for all connections in the upgraded system.

As mentioned at the beginning of this article, data center campuses now span several buildings, each of which may have several floors.  Let's now consider scaling to 10,000 connections using low- and high-port-count cross connects, again using the leaf-spine approach.  For the 200-port DFCC, 150 systems are required: 100 leaf systems and 50 spine systems.  With this approach, however, only two trunk lines connect each leaf to each Tier 2 spine DFCC, which will create contention for connections across the network.  In contrast, only 30 of the 1,008-port Telescent NTMs are required to provide any-to-any connectivity across a campus with 10,000 connections.  With 50 trunk lines from each leaf to each Tier 2 spine system, the chance of contention across the network is minimal.
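As a rough sanity check on those numbers, the calculation below compares the two fabrics and the number of trunk lines each leaf can offer to each spine, which is what drives contention.  It uses the same half-the-ports-for-trunking assumption as before and is illustrative only.

```python
import math

def campus_fabric(connections: int, ports: int) -> None:
    """Print leaf/spine counts and trunk lines per leaf-spine pair."""
    cust_per_leaf = ports // 2                        # half of each leaf trunks upward
    leaves = math.ceil(connections / cust_per_leaf)
    spines = math.ceil(leaves * cust_per_leaf / ports)
    trunks_per_pair = cust_per_leaf // spines          # trunks from one leaf to one spine
    print(f"{ports}-port DFCC: {leaves} leaf + {spines} spine = "
          f"{leaves + spines} systems, {trunks_per_pair} trunk(s) per leaf-spine pair")

campus_fabric(10_000, 200)     # 100 leaf + 50 spine = 150 systems, 2 trunks per pair
campus_fabric(10_000, 1_008)   # 20 leaf + 10 spine = 30 systems, 50 trunks per pair
```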

Of course, more efficient topologies requiring fewer Telescent NTMs can be considered if the cross connect requirements can be segmented by customer type or service provider.  Another consideration that reduces the number of NTMs required is the percentage of connections that stay within a single building versus traversing multiple buildings on the campus.  But managing the requested cross connects will always be easier with a large-scale system such as the 1,008-port Telescent Network Topology Manager than with lower-port-count systems.

Recently, companies have developed products to help coordinate multiple systems across a campus environment.  Telescent has developed its Orchestrator software, which controls a network of Telescent systems by tracking the connectivity between systems and offering a best-path option for connections across the campus.  The best path can be static, based on the lowest loss through multiple systems, or dynamic, based on the capacity utilization of the different DFCCs.
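Telescent has not published the internals of its Orchestrator, but the general idea of choosing a best path across a mesh of DFCCs can be sketched as a shortest-path search over a graph whose edges are inter-system trunks.  In the hypothetical Python example below, the system names, insertion-loss values and utilization figures are all invented for illustration; edge weights are either static loss or a utilization penalty.

```python
import heapq

# Hypothetical campus graph: nodes are DFCC systems, edges are trunk bundles.
# Each edge carries (insertion_loss_db, utilization) -- illustrative values only.
TRUNKS = {
    "NTM-A": {"NTM-B": (0.6, 0.20), "NTM-C": (0.5, 0.80)},
    "NTM-B": {"NTM-A": (0.6, 0.20), "NTM-D": (0.6, 0.30)},
    "NTM-C": {"NTM-A": (0.5, 0.80), "NTM-D": (0.4, 0.90)},
    "NTM-D": {"NTM-B": (0.6, 0.30), "NTM-C": (0.4, 0.90)},
}

def best_path(src: str, dst: str, dynamic: bool = False):
    """Dijkstra over the trunk graph.

    Static mode minimizes total insertion loss; dynamic mode adds a penalty
    that grows with trunk utilization, steering new cross connects away
    from heavily loaded systems.
    """
    queue = [(0.0, src, [src])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, (loss_db, util) in TRUNKS[node].items():
            weight = loss_db + (util * 5.0 if dynamic else 0.0)
            heapq.heappush(queue, (cost + weight, nxt, path + [nxt]))
    return None

print(best_path("NTM-A", "NTM-D"))                # lowest-loss route via NTM-C
print(best_path("NTM-A", "NTM-D", dynamic=True))  # congestion-aware route via NTM-B
```

In this toy graph the static choice runs through the lowest-loss system even though it is heavily utilized, while the dynamic weighting shifts new connections onto the lightly loaded path.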

While not directly related to scaling, another benefit of a reconfigurable cross connect system is that it does not leave any stranded capacity.  Since a system like this marks each port as in use or available, when a cross connect is removed the port is immediately marked as available for future connections with that enterprise or carrier.  In contrast, a manual process likely involves recording the state of the cross connect in several different offline databases.  The database will only be as good as the operators entering the data, and errors will creep into the database over time.  Any error in the database represents stranded capacity and lost revenue opportunity for the data center operator.
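A minimal sketch of that bookkeeping, assuming nothing about Telescent's actual software: each port is marked in use or available, and tearing down a cross connect immediately returns both ports to the available pool instead of stranding them in a stale offline record.  The class and names below are hypothetical.

```python
class PortInventory:
    """Track which ports on a cross connect system are in use or available."""

    def __init__(self, port_count: int):
        self.available = set(range(1, port_count + 1))
        self.connections = {}            # connection id -> (port_a, port_b)

    def connect(self, conn_id: str, port_a: int, port_b: int) -> None:
        if not {port_a, port_b} <= self.available:
            raise ValueError("port already in use")
        self.available -= {port_a, port_b}
        self.connections[conn_id] = (port_a, port_b)

    def disconnect(self, conn_id: str) -> None:
        # Removing a cross connect returns both ports to the available pool,
        # so no capacity is stranded by stale records.
        port_a, port_b = self.connections.pop(conn_id)
        self.available |= {port_a, port_b}

inv = PortInventory(1_008)
inv.connect("cust-A-to-cust-Z", 17, 912)
inv.disconnect("cust-A-to-cust-Z")
print(len(inv.available))   # 1008 -- the freed ports are immediately reusable
```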

The Telescent G4 Network Topology Manager (NTM) is an example of an innovative DFCC that allows software control of the physical layer while scaling to address the needs of large data center campuses.  With its pay-as-you-grow design and large port count per system, the Telescent NTM can easily scale from a few hundred to 10,000 connections.  Once made, the connections are equivalent to existing fiber patch panel connections, with low loss, and are fully latched, allowing traffic to continue uninterrupted as the system is upgraded.  The Telescent NTM has passed NEBS Level 3 reliability testing as well as multiple vendor-specific qualification tests that have demonstrated a greater than 10-year lifetime.  Multiple NTM systems can be managed through software control, scaling to 10,000 cross connects and beyond with machine-accurate record keeping and minimal stranded capacity.

About the Author

Bob Shine is the VP of Marketing and Product Management at Telescent.

Voices of the Industry

Our Voice of the Industry feature showcases guest articles on thought leadership from sponsors of Data Center Frontier. For more information, see our Voices of the Industry description and guidelines.
