Optimizing Ethernet For Speed, Power, Reach, and Latency

Aug. 10, 2022

In a new white paper, Anritsu discusses Ethernet usage trends in data center networks. They also explore the technologies helping operators to meet growing bandwidth demands and verify network speed, power, latency, and performance.


“Growing demand for information has created an explosion in data center traffic,” according to a new white paper from Anritsu. They say this demand is increasing the need for data center architectures to support ever higher Ethernet transfer rates. As operators seek to “optimize Ethernet media types for speed, power, reach, and latency,” they’re being forced to reevaluate some long-held assumptions in these areas, according to the paper.

The authors explain that the need to reduce latency is increasingly important as data centers transform into edge computing networks. They say, “as computing resources move closer to the edge, the latency key performance indicator (KPI) tightens. This KPI is application-service dependent. Latency affects the user experience for applications and must be considered when deploying Ethernet connects.”
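The tightening latency KPI can be framed as a simple budget check. The short Python sketch below is illustrative only and is not drawn from the white paper: it assumes a typical fiber propagation delay of roughly 5 µs per kilometer and uses hypothetical per-application KPI targets to show why shorter distances to edge compute sites matter.

```python
# Illustrative sketch (not from the Anritsu paper): rough one-way latency
# budget check for edge placement. The per-application KPI targets below
# are assumed example values, not figures from the white paper.

FIBER_DELAY_US_PER_KM = 5.0  # ~5 microseconds per km in single-mode fiber

# Hypothetical per-application latency KPIs, in milliseconds (one-way)
APP_LATENCY_KPI_MS = {
    "industrial-control": 5.0,
    "cloud-gaming": 20.0,
    "video-streaming": 100.0,
}

def propagation_latency_ms(distance_km: float) -> float:
    """One-way fiber propagation delay, ignoring switching and queuing."""
    return distance_km * FIBER_DELAY_US_PER_KM / 1000.0

def meets_kpi(app: str, distance_km: float, processing_ms: float = 1.0) -> bool:
    """Check whether propagation plus an assumed processing budget fits the KPI."""
    total_ms = propagation_latency_ms(distance_km) + processing_ms
    return total_ms <= APP_LATENCY_KPI_MS[app]

if __name__ == "__main__":
    for app in APP_LATENCY_KPI_MS:
        for km in (10, 100, 1000):
            print(f"{app:18s} {km:5d} km -> meets KPI: {meets_kpi(app, km)}")
```

Under these assumptions, a 1,000 km path alone consumes the entire budget of a low-latency application, which is the motivation for moving compute closer to the edge.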

As data center network operators move to 400 Gigabit Ethernet and beyond, they will face new challenges such as signal integrity, network interoperability, and maintaining service level agreements (SLAs) for different applications. – Anritsu, “Ethernet in Data Center Networks”

To address concerns around power and speed, operators are turning to optical transceivers and high-speed breakout cables, but according to the paper, these technologies are not without their challenges. The authors note that “not all 400G Ethernet optics are created equal and their performance on forward error correction (FEC) KPI thresholds varies.” High-speed breakout cables, for their part, are less expensive but come with performance and reach limitations.
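To make the FEC point concrete, the sketch below shows a hypothetical per-lane pre-FEC bit error ratio (BER) check. It assumes the commonly cited correction limit of roughly 2.4e-4 for the RS(544,514) “KP4” FEC used with 400GbE; the limit, margin, and lane readings are illustrative assumptions, not figures from the Anritsu paper.

```python
# Minimal sketch of a pre-FEC BER pass/fail check. Assumes the commonly
# cited RS(544,514) "KP4" FEC correction limit of roughly 2.4e-4 for
# 400GbE; the margin and per-lane readings are illustrative examples.

PRE_FEC_BER_LIMIT = 2.4e-4   # approximate KP4 FEC correction threshold (assumed)
MARGIN_FACTOR = 0.5          # require measured BER to sit at half the limit

def lane_passes(pre_fec_ber: float) -> bool:
    """A lane passes if its pre-FEC BER leaves the chosen margin to the limit."""
    return pre_fec_ber <= PRE_FEC_BER_LIMIT * MARGIN_FACTOR

# Hypothetical per-lane measurements from a 400G transceiver (8 electrical lanes)
measured = [3.1e-6, 8.7e-6, 1.2e-5, 5.4e-5, 2.0e-4, 9.9e-7, 4.3e-6, 7.5e-5]

for lane, ber in enumerate(measured):
    status = "PASS" if lane_passes(ber) else "MARGINAL/FAIL"
    print(f"lane {lane}: pre-FEC BER {ber:.1e} -> {status}")
```

In this example, one lane sits close enough to the correction limit to be flagged, illustrating how two nominally identical 400G optics can behave quite differently against the same FEC KPI.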

The paper goes on to explain how networking equipment manufacturers are turning to testing solutions to measure the signal integrity of new high-speed optical interfaces.

Anritsu also explores how “with multi-access edge computing and network virtualization, data center providers can maintain different SLAs for different applications.”
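As a rough illustration of the per-application SLA idea, the following sketch compares measured service KPIs against assumed SLA targets, in the spirit of a Y.1564-style service activation check. The application names, targets, and measurements are examples, not values from the paper.

```python
# Illustrative sketch only: comparing measured service KPIs against
# per-application SLA targets. The SLA figures and measurements are
# assumed examples, not data from the Anritsu white paper.

SLA_TARGETS = {
    # application: (max latency ms, max frame loss ratio, min throughput Gb/s)
    "storage-replication": (2.0, 1e-6, 100.0),
    "video-distribution": (50.0, 1e-4, 25.0),
}

def sla_met(app: str, latency_ms: float, loss_ratio: float, throughput_gbps: float) -> bool:
    """True only if all three measured KPIs satisfy the application's SLA targets."""
    max_lat, max_loss, min_tput = SLA_TARGETS[app]
    return latency_ms <= max_lat and loss_ratio <= max_loss and throughput_gbps >= min_tput

print(sla_met("storage-replication", 1.4, 5e-7, 112.0))  # True: all targets met
print(sla_met("video-distribution", 63.0, 2e-5, 30.0))   # False: latency target missed
```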

Download the full report for more information on technologies that can verify network performance at high speeds.
