In this edition of Voices of the Industry, Raul Martynek, CEO at DataBank, refines the definition of edge computing. This is the first of a three-part series looking at the edge from the multiple perspectives of enterprises using and driving edge development: hyperscalers, SaaS application/content developers, and network providers. In this article, we examine how hyperscale cloud providers see the edge and the role they are playing in its development.
The IT Pendulum Swings Back…Again
Since the dawn of the computing age, IT infrastructures have swung back and forth between centralized and decentralized formats. The industry first went from centralized mainframes to distributed client/server networks. The cloud then brought us back to centralized computing, and now we are decentralizing once again as edge computing takes over to put compute resources closer to end users.
Just as early computing pioneers such as IBM and Digital Equipment (DEC) drove the initial shift from centralized mainframes to decentralized PCs, so, too, are today’s hyperscale providers driving the shift from centralized clouds to decentralized edge computing. This creates a very different view of the edge.
No Longer Just One Edge: It’s Now Ubiquitous
Driven by the arrival of 5G networks and the promise of IoT and virtual reality applications, the popular conception of the edge that has taken hold is one of modular micro data centers sitting at the base of cell towers, speeding the delivery of content and application data by being "one hop away" from end-user devices (i.e., cell phones). The truth, however, is that the edge is developing in a much more ubiquitous and multi-modal manner.
In one sense, the edge is geography-specific and will exist everywhere from large tier 1 cities to rural markets. Most video, SaaS, and e-commerce applications will continue to be adequately served by data centers, cloud platforms, and CDNs in tier 1 and 2 metros that deliver 25-75 ms of latency to users hundreds of miles away. The edge is also application-specific, with infrastructure configurations that vary from business to business. A hyperscale or technology provider may need several thousand square feet of data center space and several MW of power, whereas a network or CDN operator may require only a few cabinets or a small cage of capacity. Finally, there is an element of the edge that is performance- and latency-specific. Emerging IoT applications may need sub-10 ms latency, achievable only by locating infrastructure within a few miles of end users, and that infrastructure may also need to sit in highly specific geographic locations, for example, industrial or logistics hubs.
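To make those latency figures concrete, here is a rough back-of-the-envelope sketch, using assumed overhead figures rather than measured ones, of how a 10 ms round-trip budget gets consumed. Fiber propagation sets only the floor; radio access, routing hops, and processing typically eat most of the budget, and real fiber paths are rarely straight lines.

```python
# Rough latency-budget sketch; all figures here are assumptions, not measurements.
# Light propagates through optical fiber at roughly 200 km per millisecond, so
# distance sets only the floor of a round trip; access and routing overheads
# consume most of a sub-10 ms budget.

FIBER_KM_PER_MS = 200.0  # approximate propagation speed in fiber


def propagation_rtt_ms(distance_km: float) -> float:
    """Round-trip fiber propagation delay alone, ignoring every other overhead."""
    return 2.0 * distance_km / FIBER_KM_PER_MS


def distance_budget_km(target_rtt_ms: float, overhead_ms: float) -> float:
    """One-way fiber distance left in the budget after fixed overheads are paid."""
    remaining_ms = max(target_rtt_ms - overhead_ms, 0.0)
    return remaining_ms * FIBER_KM_PER_MS / 2.0


for miles in (5, 50, 250):
    km = miles * 1.609
    print(f"{miles:>4} miles: fiber propagation alone adds ~{propagation_rtt_ms(km):.2f} ms round trip")

# Assumed fixed overheads (radio access + routing hops + processing), purely illustrative.
for overhead_ms in (5.0, 8.0, 9.5):
    print(f"{overhead_ms:4.1f} ms of overhead leaves ~{distance_budget_km(10.0, overhead_ms):6.0f} km "
          f"of fiber for a 10 ms round trip")
```

Even at 50 miles, propagation adds well under a millisecond; it is the accumulated access, routing, and processing delay, often incurred over several round trips per transaction, that pushes latency-sensitive infrastructure into the metro.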
The Role of Hyperscalers in Developing the Edge
While it will take time for the edge to fully develop in all these varying forms, today it's being driven not by next-generation IoT or virtual reality applications but by hyperscale cloud providers and the customers that fuel their growth. When we speak of AWS, Google, or Microsoft, we sometimes forget that they are not just three companies; in reality, they are platforms supporting the requirements of millions of discrete customers with a wide range of problems they look to hyperscalers to solve. For SaaS companies, content companies, and creators of other digital assets, that means delivering bits to as many smartphones, tablets, desktops, and network appliances as possible. Since the number of those devices correlates directly with where populations are concentrated, it's no wonder hyperscale providers are building availability zones in more population centers. AWS has announced "Local Zones" in four markets, while Microsoft has announced plans for Azure Edge Zones in three initial markets, all major metros. The hyperscale edge is happening TODAY – in tier 1 and tier 2 markets, not at the base of a remote cell tower. That's because those cities and their surrounding areas are where most of the consumers of digital content are physically located. It is reminiscent of the famous quote attributed to the notorious bank robber Willie Sutton, who, when asked why he robbed banks, replied, "That's where the money is."
At DataBank, we call this environment the near edge or the middle edge. It forms a layer distinct from the far edge of micro data centers beneath 5G towers and in rural settings, and it is where the edge is developing first.
The Hyperscaler Edge Strategy
It is clear that hyperscale providers and the customers they support are moving away from just a handful of regional availability zones toward a much larger number of locations for delivering their digital services, perhaps as many as 25-30 over the next five to ten years. What is not yet clear is which applications and use cases will require such a highly distributed geographic footprint, and how application developers and content producers will manage, scale, and make geographic resource allocation decisions in such a fragmented construct.
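One plausible answer to that allocation question, sketched below with hypothetical zone names and latency figures, is simple latency-based placement: probe each candidate local zone, then route a user or workload to the closest healthy zone with spare capacity, falling back to a regional deployment when none qualifies. Real platforms typically fold this logic into DNS-based or anycast routing rather than application code; this is only an illustration of the decision, not any provider's API.

```python
from dataclasses import dataclass


@dataclass
class Zone:
    """One candidate local zone; names and numbers below are hypothetical."""
    name: str
    measured_rtt_ms: float      # e.g. refreshed by periodic probes to the zone endpoint
    healthy: bool = True
    has_capacity: bool = True


def pick_zone(zones: list[Zone]) -> Zone:
    """Choose the lowest-latency zone that is healthy and has spare capacity."""
    candidates = [z for z in zones if z.healthy and z.has_capacity]
    if not candidates:
        raise RuntimeError("no eligible local zone; fall back to a regional deployment")
    return min(candidates, key=lambda z: z.measured_rtt_ms)


zones = [
    Zone("metro-dallas-1", measured_rtt_ms=6.2),
    Zone("metro-dallas-2", measured_rtt_ms=7.0, has_capacity=False),
    Zone("metro-kansascity-1", measured_rtt_ms=14.5),
    Zone("region-us-central", measured_rtt_ms=32.0),
]
print(pick_zone(zones).name)  # -> metro-dallas-1
```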
Regardless, as the cloud and technology providers transition to a local-zone edge strategy, they will look to solve three logistical challenges:
- Metro Diversity – Just as hyperscalers built their initial regional availability zones in groups of three for resiliency, they will look to do the same in the metro or local availability zones into which they expand. That calls for a minimum of three data centers in each market, and for a different kind of data center partner: one with multiple facilities in a metro, rather than the single-location wholesale data centers hyperscalers have traditionally turned to. (A back-of-the-envelope illustration of why three facilities matter follows this list.)
- Network Connectivity – Local availability zones also need access to ample fiber networks and carrier-neutral interconnection hubs, both to connect the nodes within a metro to one another and to link the local zone to zones in other regions. Hyperscalers will seek to locate their local zones in metros with a diversity of neutral interconnect hubs and fiber paths in and out of the metro, and you can expect them to gravitate toward data center providers that hold the beach-front property next to these key pieces of infrastructure.
- Speed to Market – To meet demand, hyperscalers have traditionally relied on a mix of building their own facilities and using wholesale data center partners. However, that approach isn't quick or cost-effective for local zone deployments needing only 1-5 MW of power. Therefore, existing enterprise data centers with meaningful capacity in tier 1 and tier 2 metros will provide an attractive third option.
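Here is the back-of-the-envelope resiliency math behind the three-facility metro model, using illustrative availability figures and assuming facility failures are independent and that any single surviving facility can carry the metro's load. Real failure modes (shared grid, fiber cuts, regional weather) are rarely fully independent, so treat this as an upper bound on what diversity alone buys.

```python
# Illustrative resiliency math; availability figures are assumptions, not vendor data.
SECONDS_PER_YEAR = 365.25 * 24 * 3600


def joint_downtime_seconds(per_facility_availability: float, facilities: int) -> float:
    """Expected seconds per year during which *all* facilities are down at once,
    assuming independent failures and that one surviving facility is sufficient."""
    p_all_down = (1.0 - per_facility_availability) ** facilities
    return p_all_down * SECONDS_PER_YEAR


for n in (1, 2, 3):
    secs = joint_downtime_seconds(0.999, n)
    print(f"{n} facility(ies) at an assumed 99.9% each: ~{secs:,.2f} s/year of joint downtime")
```

Under these assumptions, a second facility in the metro takes the worst case from hours to seconds per year, and a third takes it to a fraction of a second, which is why the three-node pattern keeps reappearing at every scale.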
The Smart Approach to Designing the Hyperscale Edge
To solve these challenges, cloud providers can turn to enterprise multi-tenant data center (MTDC) providers, like DataBank, which offer the perfect solution for this hyperscale edge. The leading providers operate multiple facilities in each market, spanning downtown cores to suburban locations.
This allows hyperscalers to mimic their nationwide three-node availability zone model within a metro and provide redundancy while moving infrastructure far closer to end users. By deploying across a footprint of tier 1 and tier 2 market facilities like DataBank's, hyperscalers can deliver services to within 50 miles of half the US population.
MTDCs also own and operate secondary interconnect hubs in tier 2 markets. These facilities attract additional network providers and create new fiber routes in and out of these metros, giving them connectivity reach akin to that of tier 1 markets.
Hyperscale cloud providers will also find that MTDCs are efficient at building and scaling facilities using smart design templates. This makes it possible to bring new capacity online quickly by deploying data halls in existing facilities or entirely new facilities on adjacent property. MTDCs can also design for higher power densities where use cases call for 52U cabinets drawing up to 100kW, packing more compute into each square foot.
Speed Determined by the Slowest Component
Looking beyond the regional data centers, hyperscalers also need to consider the capabilities of their wireless and fiber networks, which are both essential to the edge fabric. As computing pioneer Gene Amdahl's famous law reminds us, overall performance is limited by the part of the system you cannot speed up; in practice, the slowest component sets the pace.
That’s why it’s critical for hyperscale edges to integrate with dark-fiber interconnects. This extends IT services through carrier internet exchange points and provides access to cloud on-ramps. With this access, enterprises can quickly tap into partner networks that cloud platform providers, carriers, and enterprise application vendors offer within each region.
We’ll explore that interconnect edge in the next article.
This article was written by Raul Martynek, CEO, DataBank. Mr. Martynek joined DataBank in June of 2017 as the CEO and is a 20+ year veteran in the telecom and Internet infrastructure sector, having held senior positions at several communications and networking companies, as well as asset management firms. Raul earned a BA in Political Science from Binghamton University and received a master’s degree in International Affairs from Columbia University.