The Switch of the Future? Silicon Photonics in Action

March 21, 2017
Intel and Barefoot Networks show off an Open Compute switch with 65 silicon photonics optical modules, creating a programmable switch with a top end of 6.5 terabits a second.

SANTA CLARA, Calif. – The rising tide of data moving through cloud data centers is boosting interest in next-generation networking technologies like silicon photonics. One of the leading advocates of the potential of silicon photonics has been Intel, which showed off a live traffic demo of its technology running in its booth at the recent Open Compute Summit.

Rather than traditional network cables, the device has Intel optical transceivers plugged into the front of the Open Compute Project Wedge 100B switch, with the 65 optical modules creating a programmable switch with a top end of 6.5 terabits a second. By comparison, commercially available switches currently top out at 3.2 Tb/s.
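The headline capacity figure follows directly from the port count. A back-of-the-envelope check, as a minimal Python sketch:

```python
# Aggregate switch capacity: 65 QSFP28 ports, each carrying 100 Gb/s.
ports = 65
gbps_per_port = 100

total_gbps = ports * gbps_per_port
total_tbps = total_gbps / 1000

print(f"{ports} ports x {gbps_per_port} Gb/s = {total_tbps} Tb/s")  # 6.5 Tb/s
```

The same arithmetic explains the 3.2 Tb/s ceiling of the switches then on the market: 32 ports of 100Gb Ethernet.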

The floor unit at the Open Compute Summit illustrates the potential for new technologies to disrupt data center networking, accelerating the transition to 100Gbps and eventually 400Gbps networks.

Silicon photonics uses light (photons) to move data at very high speeds over thin optical fiber rather than electrical signals over copper cable. The optical components are built directly into semiconductor chips, creating “computing at the speed of light.”

It’s not exactly a new technology, as Intel has been developing silicon photonics for more than 16 years. Intel first showcased the technology at the Open Compute Summit in 2013, seeking to generate interest from the hyperscale players that will be the earliest adopters. It wasn’t until June 2016 that Intel launched volume production of commercial products.

Making Switches Programmable

The demo at the Intel booth was powered by chips from Barefoot Networks, a startup founded by Nick McKeown, a pioneer in software-defined networking (SDN) and co-founder of Nicira and the Open Networking Foundation. Barefoot’s Tofino networking ASIC chips can be customized using the P4 programming language, which Barefoot developed and has open sourced. An ASIC (Application-Specific Integrated Circuit) is a chip that can be tailored for tasks like network management.
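P4 programs the switch’s forwarding behavior as match-action tables: packet header fields are matched against table entries, and a matching entry selects an action such as forwarding or dropping. As a rough illustration of that abstraction (a hypothetical Python sketch, not real P4 code):

```python
# Hypothetical sketch of the match-action model that P4 programs express.
# A table maps a header field (here, destination IP) to an action.

def drop(packet):
    packet["egress_port"] = None  # packet is discarded

def forward(packet, port):
    packet["egress_port"] = port

# Match-action table: destination IP -> (action, action arguments).
ipv4_table = {
    "10.0.0.1": (forward, {"port": 1}),
    "10.0.0.2": (forward, {"port": 2}),
}

def ingress(packet):
    # Look up the packet's destination; fall back to drop on a miss.
    action, args = ipv4_table.get(packet["dst_ip"], (drop, {}))
    action(packet, **args)
    return packet

pkt = ingress({"dst_ip": "10.0.0.1"})
print(pkt["egress_port"])  # 1
```

On a programmable ASIC like Tofino, the operator defines which headers are parsed and which tables exist, rather than being limited to the fixed pipeline of a conventional switch chip.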

Barefoot, whose backers include Andreessen Horowitz and Goldman Sachs, made plenty of waves when it came out of stealth last June with the industry’s first programmable switch, which Light Reading called “an innovation that may have profound ramifications for networking in general and data centers in particular.” Wired predicted Barefoot’s Tofino chip “will alter the inner workings of Google, Facebook and Microsoft.”

Those companies are all major players in the Open Compute Project, the open hardware non-profit creating technology for hyperscale computing. The rapid growth of cloud computing reinforces the need for faster networks with more capacity, which was a key discussion topic at a recent Infrastructure Masons summit of the largest data center operators. While the enterprise world is preparing to graduate from 40Gbps to 100Gbps switches, the cloud builders are already focused on scaling to 400Gbps networks.

Wedge Meets Silicon Photonics

Intel and Barefoot are two of the companies hoping to play a major role in this transition.

A closer look at the live traffic demo at Intel’s booth at the Open Compute Summit in Santa Clara, featuring 65 Intel silicon photonics transceivers and a programmable Tofino ASIC from Barefoot Networks. The result is an OCP Wedge 100B switch with capacity of 6.5 terabits per second. (Photo: Rich Miller)

At the Open Compute Summit, Barefoot brought its new chips together with Intel’s silicon photonics in a live traffic demo of a 2U switch with 65 ports for QSFP28 (Quad Small Form-factor Pluggable) 100Gb optical transceivers. The system featured Intel 100Gb transceivers for both PSM4 (Parallel Single Mode Fiber 4-lane) and CWDM4 (Coarse Wavelength Division Multiplexing 4-lane).
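Both optics reach 100Gb by combining four 25 Gb/s lanes; they differ in how those lanes map onto the fiber plant. A hedged summary, with the common spec values stated as assumptions:

```python
# Hedged comparison of the two 100Gb optics in the demo. Both use four
# 25 Gb/s lanes; reach figures are the usual MSA spec values, assumed here.
optics = {
    "PSM4": {
        "lanes": 4,
        "gbps_per_lane": 25,
        "fiber": "parallel single-mode fibers (4 transmit + 4 receive)",
        "nominal_reach_m": 500,
    },
    "CWDM4": {
        "lanes": 4,
        "gbps_per_lane": 25,
        "fiber": "duplex single-mode fiber, 4 CWDM wavelengths",
        "nominal_reach_m": 2000,
    },
}

for name, spec in optics.items():
    total = spec["lanes"] * spec["gbps_per_lane"]
    print(f"{name}: {total} Gb/s over {spec['fiber']}")
```

In broad terms, PSM4 trades cheaper optics for more fiber strands, while CWDM4 multiplexes the four lanes onto one duplex fiber pair for longer runs.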

Barefoot has contributed two of its Wedge 100B designs to the OCP: a 1U 3.2Tb/s model and the 2U 6.5Tb/s unit being demonstrated with Intel, with future models expected to use 400Gbps silicon photonics technology.

“The Open Compute Networking Project is excited to see Barefoot Networks share two Wedge 100B hardware designs with the community,” said Omar Baldonado, OCP Networking Project Co-Lead. “We look forward to seeing the new innovations enabled by these Wedge 100B designs and the flexibility that their programmable switching silicon brings to the industry.”

“With Wedge 100B platforms, the OCP ecosystem, network owners and architects have unprecedented access to a fully disaggregated networking stack down to the forwarding plane, enabling them to build networks that best suit their needs,” said Martin Izzard, Co-Founder & CEO of Barefoot Networks.

The Road Ahead

As hyperscale providers yearn for new and better network technology, there is no shortage of vendors seeking to meet their needs. Barefoot is taking on Broadcom, the leading incumbent in the market for high-speed networking chips, which recently introduced a new switch based on its Tomahawk ASIC that offers 64 ports of 100Gb Ethernet. Cavium’s XPliant chips also offer programmability, as does the Teralynx chip from startup Innovium.

Meanwhile, some of the largest cloud players are customizing their own networking silicon. Amazon Web Services recently revealed its new Annapurna networking chip, an ASIC that will enable it to move data faster across its huge data center network. Google reportedly has also developed an in-house chip to optimize its vast networking infrastructure.

Intel also has challengers in silicon photonics, including Inphi, Mellanox, Oclaro, Luxtera, Ciena and Infinera. Intel says its technology has advantages over its rivals, and Microsoft has said publicly that it is test-driving Intel’s silicon photonics in its cloud infrastructure.

So the game is on. Open Compute may have offered a tantalizing glimpse of some of the network technology that will matter in the hyperscale realm.

About the Author

Rich Miller

I write about the places where the Internet lives, telling the story of data centers and the people who build them. I founded Data Center Knowledge, the data center industry's leading news site. Now I'm exploring the future of cloud computing at Data Center Frontier.
