Facilitating the Transition from Core-to-Edge to Edge-to-Core Computing

June 4, 2018
In this edition of Voices of the Industry, Martin Olsen, vice president, global edge and integrated solutions at Vertiv, explores what is needed to transition from core-to-edge to edge-to-core computing. 

While there is evidence in the industry that the network edge is growing, we are just beginning to deal with the impact of that growth on network architectures and the infrastructure that supports them.

A number of analysts and industry stakeholders have projected the number of connected devices the edge will need to support. Cisco puts the number at 23 billion by 2021, while Gartner and IDC predict 20.8 billion and 28.1 billion by 2020 respectively. Also gaining attention are high-profile and interesting applications these connected devices will enable, such as smart factories and autonomous vehicles.

But these are just isolated snapshots of the impact of edge computing. Step back and look at the edge holistically, and it becomes clear that supporting future edge applications will require major changes in the critical infrastructure outside the core data center. In many cases, this means shifting from the current computing model, in which most data flows from core to edge, to a model with more interaction and more data moving from edge to core.

Despite the magnitude of its impact, the term edge computing, and all that it encompasses, still lacks clarity.

Consider the example of a similarly broad term: cloud computing. When IT managers make decisions about where their workloads will reside, they need to be more precise than “in the cloud.” They need to decide whether they will use an on-premises private cloud, hosted private cloud, infrastructure-as-a-service, platform-as-a-service or software-as-a-service.

The clarity that has evolved around cloud computing does more than facilitate communication; it facilitates decision making.

Is the same possible for edge computing? To answer that question, Vertiv, in collaboration with an independent, third-party research organization, conducted an extensive audit of existing and emerging edge use cases. We identified more than 100 use cases during our initial analysis, then narrowed that list to the 24 that were growing fastest, were most critical, or had the broadest impact, for more in-depth analysis.

These use cases ranged from content distribution to autonomous vehicles to augmented reality. The question we then had to address was whether these were totally distinct applications, or whether they shared common characteristics that would allow them to be classified in a way that would be meaningful to an IT decision maker.

Answering that question involved analyzing the performance requirements of each use case in terms of latency, bandwidth, availability, and security. We also evaluated the need to integrate with existing or legacy applications and other data sources as well as other factors that impact the ability to support the use case.

What emerged was the recognition of a unifying factor that edge use cases could be organized around.

Edge applications, by their nature, have a data-centric set of workload requirements. This data-centric approach, filtered through requirements for availability, security and the nature of the application, proved to be central to understanding and categorizing the various use cases.

The result was the identification of four archetypes that can help guide decisions regarding the infrastructure required to support edge applications. These four archetypes are:

  • Data Intensive, which encompasses use cases where the amount of data is so large that layers of storage and computing are required between the endpoint and the cloud to reduce bandwidth costs or latency.
  • Human-Latency Sensitive, which includes applications where latency negatively impacts the experience of humans using a technology or service, requiring compute and storage close to the user.
  • Machine-to-Machine Latency Sensitive, which is similar to the Human-Latency Sensitive archetype except that the tolerance for latency in machines is even less than it is for humans because of the speed at which machines process data.
  • Life Critical, which encompasses applications that impact human health or safety and therefore require very low latency and very high availability.
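The archetypes above amount to a simple decision procedure: check safety impact first, then latency sensitivity (distinguishing machine from human consumers), then data volume. The sketch below illustrates that ordering. The field names and the data-volume threshold are hypothetical, chosen only for illustration; the report does not publish numeric cutoffs for any archetype.

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    data_gb_per_day: float      # hypothetical measure of data volume
    latency_sensitive: bool
    machine_to_machine: bool
    life_critical: bool

def classify(uc: UseCase) -> str:
    """Map a use case to one of the four edge archetypes.

    Checks run from the most to the least restrictive requirement:
    safety impact, then latency sensitivity, then data volume.
    """
    if uc.life_critical:
        return "Life Critical"
    if uc.latency_sensitive:
        if uc.machine_to_machine:
            return "Machine-to-Machine Latency Sensitive"
        return "Human-Latency Sensitive"
    if uc.data_gb_per_day > 1000:  # assumed threshold, for illustration only
        return "Data Intensive"
    return "Unclassified"

print(classify(UseCase("autonomous vehicle", 500, True, True, True)))
print(classify(UseCase("augmented reality", 10, True, False, False)))
print(classify(UseCase("content distribution", 5000, False, False, False)))
```

The ordering matters: an autonomous vehicle is both latency sensitive and data intensive, but its safety impact dominates, so it lands in Life Critical.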

These four archetypes represent just the first step in defining the infrastructure needed to support the future of edge computing, but it is a step whose importance should not be underestimated. When we shared the archetypes with industry analyst Lucas Beran of IHS Markit, he commented that, “The Vertiv archetype classification for the edge is critical. This will help the industry define edge applications by characteristics and challenges and move toward identifying common infrastructure solutions.”

Edge computing has the potential to reshape the network architectures we’ve lived with for the last twenty years. Working together, we can ensure that process happens as efficiently and intelligently as possible.

Martin Olsen is vice president, global edge and integrated solutions at Vertiv. For a more detailed discussion of edge archetypes, read the report, Four Edge Archetypes and their Technology Requirements.
