Optimizing Ethernet For Speed, Power, Reach, and Latency

Aug. 10, 2022
Anritsu discusses Ethernet usage trends in data center networks. They also explore the technologies helping operators meet growing bandwidth demands and verify network speed, power, latency, and performance.

“Growing demand for information has created an explosion in data center traffic,” according to a new white paper from Anritsu. They say this demand is increasing the need for data center architectures to support ever higher Ethernet transfer rates. As operators seek to “optimize Ethernet media types for speed, power, reach, and latency,” they’re being forced to reevaluate some long-held assumptions in these areas, according to the paper.

The authors explain that reducing latency becomes increasingly important as data centers transform into edge computing networks: "As computing resources move closer to the edge, the latency key performance indicator (KPI) tightens. This KPI is application-service dependent. Latency affects the user experience for applications and must be considered when deploying Ethernet connects."
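The scale of that KPI tightening is easy to see from fiber propagation delay alone. As a rough illustration (not from the Anritsu paper), the sketch below assumes light travels through fiber at about 200,000 km/s (roughly 5 µs per km); the site names and distances are hypothetical:

```python
# Hypothetical illustration: one-way fiber propagation delay for a few
# data center placements, assuming light travels through fiber at
# roughly 200,000 km/s (~5 us per km).

FIBER_SPEED_KM_PER_S = 200_000  # approximate speed of light in fiber

def propagation_delay_ms(distance_km: float) -> float:
    """One-way propagation delay in milliseconds for a fiber span."""
    return distance_km / FIBER_SPEED_KM_PER_S * 1000

# Illustrative placements (distances are assumptions):
for site, km in [("metro edge site", 20),
                 ("regional data center", 400),
                 ("distant cloud region", 2000)]:
    print(f"{site}: {km} km ≈ {propagation_delay_ms(km):.2f} ms one-way")
```

A 2,000 km span alone consumes about 10 ms each way before any switching or processing, which is why moving compute to the edge is often the only way to meet single-digit-millisecond latency targets.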

As data center network operators move to 400 Gigabit Ethernet and beyond, they will face new challenges such as signal integrity, network interoperability, and maintaining service level agreements (SLAs) for different applications. – Anritsu, "Ethernet in Data Center Networks"

To address concerns around power and speed, operators are turning to optical transceivers and high-speed breakout cables, but, according to the paper, these technologies are not without challenges. The authors note that "not all 400G Ethernet optics are created equal and their performance on forward error correction (FEC) KPI thresholds varies." High-speed breakout cables, meanwhile, are less expensive but come with performance and reach limitations.
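To make the FEC KPI idea concrete, here is a hypothetical sketch (not from the Anritsu paper) of how a test set might grade an optic by comparing its measured pre-FEC bit error ratio against a pass/fail threshold. 400GBASE-R links use RS(544,514) FEC; the ~2.4e-4 pre-FEC BER limit below is a commonly quoted figure and is used here purely as an illustrative assumption, as are the optic names and measurements:

```python
# Hypothetical sketch: grading 400G optics on a pre-FEC BER threshold.
# The 2.4e-4 limit is an assumed, commonly quoted figure for RS(544,514)
# FEC links, not a value taken from the white paper.
import math

ASSUMED_PRE_FEC_BER_LIMIT = 2.4e-4

def fec_margin_db(measured_ber: float,
                  limit: float = ASSUMED_PRE_FEC_BER_LIMIT) -> float:
    """Margin, in dB, between the limit and the measured pre-FEC BER.

    A positive margin means the optic operates below the threshold and
    the FEC should deliver an effectively error-free corrected output.
    """
    return 10 * math.log10(limit / measured_ber)

# Two illustrative optics with different pre-FEC performance:
for name, ber in [("optic A", 1.0e-5), ("optic B", 3.0e-4)]:
    margin = fec_margin_db(ber)
    verdict = "PASS" if margin > 0 else "FAIL"
    print(f"{name}: pre-FEC BER {ber:.1e}, margin {margin:+.1f} dB -> {verdict}")
```

This is the sense in which "not all 400G optics are created equal": two modules that both link up can sit at very different distances from the FEC threshold, leaving very different margin for aging, temperature, and connector loss.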

The paper goes on to explain how networking equipment manufacturers are turning to testing solutions to measure the signal integrity of new high-speed optical interfaces.

Anritsu also explores how “with multi-access edge computing and network virtualization, data center providers can maintain different SLAs for different applications.”

Download the full report for more information on technologies that can verify network performance at high speeds.

About the Author

Kathy Hitchens

Kathy Hitchens has been writing professionally for more than 30 years. She focuses on the renewable energy, electric vehicle, utility, data center, and financial services sectors. Kathy has a BFA from the University of Arizona and an MBA from the University of Denver.
