Your Applications Are All Grown Up and Moving Out

Sept. 27, 2021
Applications like cloud services, gaming, ADAS, VR, and social media are requiring more than what a centralized DC can provide them – lower latency, faster connectivity, higher bandwidth, better performance. Danny Gonzalez of Anritsu shares insights on how to test your edge network to ensure it’s delivering what your customers’ applications need.

In this edition of Voices of the Industry, Danny Gonzalez, Business Development Manager at Anritsu, shares insights on how to test the limits of your data center’s edge network to ensure it’s delivering the services your customers’ applications need.


It was bound to happen. Applications that started out running in our centralized data centers (DCs) have grown up and moved out. Data centers, generally speaking, mimic our own evolution if you think about it. For many of us, youth and its activities largely took place around the nucleus of home and family. We were able to test limits and learn new things under the protection of a good support system. Then came the day when we took that next big step and moved out of that controlled, centralized environment, building a new life on the foundation of our youth. Just like us, while data center applications still carry the connections to the core they'll always have, they're now operating in a decentralized data center and living at the edge – with different rules and processes, and where their needs are better met.

Applications like cloud services, gaming, ADAS, VR, and social media require more than a centralized DC can provide – lower latency, faster connectivity, higher bandwidth, better performance. To fulfill their promise, they need to move to a distributed DC where they can benefit from edge computing and reach their true potential. Yet much like our own evolution, life outside the comforts of home tends to be a lot more work. Deployments must maintain the legacy 10G/25G infrastructure and equipment while supporting new 100G/400G high-speed networks. With demand increasing, building and verifying these new 100G and 400G networks must happen quickly. While new 400G optical modules are smaller, consume less power, and offer higher density, network administrators and field technicians must optimize them at every level (equipment, transceiver, and network) to ensure they meet error-tolerance and forward error correction (FEC) parameters. Moving to the edge has given DCs more capability – and more to manage. So how do you get it all to happen cohesively? Once again (like life), you test the limits of your edge network to make sure it's delivering the services your end users require.
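As a rough illustration of the FEC check described above, here is a minimal sketch of comparing a measured pre-FEC bit error ratio against the correctable limit. The 2.4e-4 threshold is an assumption based on the RS(544,514) "KP4" FEC commonly used on 400GbE links; consult your equipment's specifications for the exact figure.

```python
import math

# Assumed pre-FEC BER limit for RS(544,514) "KP4" FEC on 400GbE
# (illustrative value, not from any single vendor's datasheet).
KP4_PRE_FEC_BER_LIMIT = 2.4e-4

def fec_margin(measured_ber: float, limit: float = KP4_PRE_FEC_BER_LIMIT) -> float:
    """Return the margin, in decades (orders of magnitude), between the
    measured pre-FEC BER and the correctable limit. Positive means the
    link has headroom; negative means errors may survive correction."""
    return math.log10(limit) - math.log10(measured_ber)

# A link measuring a pre-FEC BER of 1e-5 has about 1.38 decades of margin.
print(round(fec_margin(1e-5), 2))
```

A test set reports the measured pre-FEC BER; a check like this simply turns that number into a pass/fail margin a technician can act on.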

  • Verify each application’s performance as its own service with specific QoS – each service has an SLA that dictates how the network responds to it. Some services are latency sensitive while others are bandwidth dependent. While they all travel across the same physical media, they are treated very differently depending on where the edge computing resource is located and how the virtualized network is created to support it. Once the applications are categorized by service level, you can validate the performance of each one.
  • Leverage field-deployed benchmark tests to ensure interoperability and that SLA performance and QoS benchmarks are met – benchmark tests are like favorite family recipes: they provide a step-by-step process to ensure each service meets its minimum industry-accepted criteria as it traverses equipment from multiple network vendors, and they help you troubleshoot any issues that arise.
  • For 5G applications, perform latency-sensitive measurements – timing is everything, especially with 5G. After all, latency-sensitive 5G applications are one of the driving factors behind edge computing resources. Characterizing that latency performance is a key indicator of a successful deployment. However you make these measurements – end-to-end, standalone, and/or benchmark – they are critical to guaranteeing that performance.
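The per-service checks above boil down to comparing measured results against each service's SLA targets. A minimal sketch, with hypothetical service names and threshold values (not from any standard, vendor specification, or test-set API):

```python
# Hypothetical SLA targets per service class; the names and numbers
# below are illustrative only.
SLA_TARGETS = {
    "gaming": {"max_latency_ms": 20.0, "min_throughput_mbps": 50.0},
    "video":  {"max_latency_ms": 100.0, "min_throughput_mbps": 500.0},
}

def validate_service(service: str, latency_ms: float, throughput_mbps: float) -> list[str]:
    """Compare one service's measured results against its SLA targets
    and return a list of violations (an empty list means it passed)."""
    targets = SLA_TARGETS[service]
    violations = []
    if latency_ms > targets["max_latency_ms"]:
        violations.append(
            f"latency {latency_ms} ms exceeds {targets['max_latency_ms']} ms")
    if throughput_mbps < targets["min_throughput_mbps"]:
        violations.append(
            f"throughput {throughput_mbps} Mb/s below {targets['min_throughput_mbps']} Mb/s")
    return violations

# A gaming flow measured at 35 ms latency fails its 20 ms SLA target.
print(validate_service("gaming", latency_ms=35.0, throughput_mbps=80.0))
```

In practice the measured values come from the benchmark tests described above, run per service class so that each SLA is verified independently even though the services share the same physical media.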

Yes, change can be difficult, but it doesn’t have to be an insurmountable challenge. The good news is that just as most of us have successfully moved onward from the safe confines of our youth, so will our data centers. And much as we’ve found the tools to navigate life’s evolution, there are tools available to support you as your applications move to edge computing.

Danny Gonzalez is a Business Development Manager at Anritsu with over 19 years’ experience in digital and optical transport testing, development, training, and execution. Anritsu’s MT1040A Network Master™ Pro 400G is a versatile, portable solution that will help you seamlessly implement a variety of network deployment testing techniques as well as assist with hardware equipment verification testing. This network testing solution is built to evaluate transport operating at speeds ranging from 10 Mbps to 400 Gbps – making sure that you can support your legacy foundation while moving on to implement new edge computing resources.

About the Author

Voices of the Industry

Our Voice of the Industry feature showcases guest articles on thought leadership from sponsors of Data Center Frontier. For more information, see our Voices of the Industry description and guidelines.
