Executive Roundtable: What’s Next for Hyperscale Computing?

Sept. 21, 2020
In our 20th Data Center Executive Roundtable, six data center authorities assess the accelerating market for hyperscale computing, and how developers, service providers and the supply chain can keep pace.

Welcome to our 20th Data Center Executive Roundtable, a quarterly feature showcasing the insights of thought leaders on the state of the data center industry, and where it is headed. Our Third Quarter 2020 roundtable offers insights on four topics: The evolution of hyperscale computing, the impact of new AI chips on rack power density, trends in interconnection, and how the COVID-19 pandemic is prompting innovation in data center management.

Here’s a look at our distinguished panel:

The conversation is moderated by Rich Miller, the founder and editor of Data Center Frontier. Each day this week we will present a Q&A with these executives on one of our key topics. We begin with a look at our panel’s outlook for hyperscale computing.

Data Center Frontier: More providers are targeting the hyperscale computing market, and more customers appear to be “graduating” to super-sized requirements. How is this market changing, and what are the keys to success in serving the hyperscale sector in 2020 and beyond?

TONY BISHOP, Digital Realty

Tony Bishop: Hyperscale computing models are growing at a rapid rate – and for good reason. We predicted 2020 would be an important year for hyperscale growth and so far, this trend is playing out across the U.S., Europe and APAC. As the demand for cloud and big data continues to accelerate, enterprises need resources that can easily scale with their capacity and network bandwidth requirements.

The COVID-19 pandemic accelerated this trend even more. With the sudden move to remote work, we’ve started to see enterprises increasingly rely on cloud computing to support distributed operations and sustain business continuity. To meet today’s networking demands, data center investment is growing in areas close to large tech and cloud hubs, and closer to where data is exchanged.

For example, the Toronto market is emerging as a key connectivity and tech hub in North America.

To meet this area’s growing needs, we just announced the expansion of our One Century Place facility to give customers the flexibility, capacity and performance needed to serve their own customers in this region, and globally. This is just one example of how the hyperscale growth trend is playing out. But to drive success in this sector in 2020 and beyond, providers will need to continue investing in capabilities located close to end-user demand.

Organizations today require fast, reliable access to critical applications and data. That means interconnectivity between hyperscale data centers must be resilient and readily available so businesses can deliver the reliability their customers demand. To successfully serve the hyperscale sector, providers should lean on an ecosystem of partners to deploy IT infrastructure close to the data, supporting the reliability, resilience and scalability today’s digital business requires.

KEVIN FACINELLI, Nortek Air Solutions

Kevin Facinelli: The data center movement toward 30 MW facilities – and more recently toward super hyperscale 100 MW facilities – was once associated with just large social media captive assets, but now we see colocation operators and other parts of the industry building to these sizes, too.

Consequently, these operators are becoming increasingly concerned with the operational costs they typically pass on to tenants. As colocation competitiveness increases, data center tenants are choosing operators that employ the most sustainable initiatives to keep costs down. So sustainability is as important as uptime and efficiency. Sustainability increasingly comes into play as tenants plan future expansions within a facility, especially when an expansion entails an entire floor or a majority of a building.

Over the last few years, colocation providers were focused on accommodating the industry’s exponential growth and demand with buildouts within their facilities. Now, PUE and WUE are serious sustainability metrics they want to improve during retrofits and new construction.
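Both metrics are simple ratios, which is what makes them easy to track across a portfolio. As a minimal illustration of how they are calculated (the facility figures below are invented, not drawn from any operator mentioned here):

```python
# PUE (Power Usage Effectiveness) = total facility energy / IT equipment energy
# WUE (Water Usage Effectiveness) = site water use (liters) / IT energy (kWh)
# The example numbers are hypothetical.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Ideal PUE is 1.0; every kWh spent outside the IT load raises it."""
    return total_facility_kwh / it_equipment_kwh

def wue(annual_water_liters: float, it_equipment_kwh: float) -> float:
    """Expressed in liters of water per kWh of IT energy."""
    return annual_water_liters / it_equipment_kwh

# Example: a facility drawing 120 GWh in total to support 100 GWh of IT load
print(round(pue(120_000_000, 100_000_000), 2))   # 1.2
print(round(wue(180_000_000, 100_000_000), 2))   # 1.8 L/kWh
```

Lower is better for both, and the two can pull against each other: evaporative cooling often improves PUE at the cost of WUE, which is why operators weigh them together.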

For new facilities, geographical positioning in terms of water and power availability, and climate is critical. Geographic positioning also affects choices for interconnection to intra-company data centers to minimize latency and enable the redistribution of IT loads to nearby locations during peak periods.

Geographical choices and operating costs are key to sustainability. Therefore, many data center operators are looking for liquid cooling equipment that can operate with the lowest PUE and WUE and, where possible, take advantage of the local climate. The range of viable geographic locations becomes significantly broader with liquid cooling equipment that offers a variety of operational modes, such as evaporative, adiabatic, super evaporative and others. New technological advances allow this equipment to automatically switch to the most sustainable operational mode as ambient temperature and humidity vary throughout the hour, day, week or year.
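The automatic mode switching described above amounts to a control loop that picks the most water- and energy-efficient mode the current ambient conditions allow. The following is a hypothetical sketch only: the mode categories echo the text, but the thresholds are invented and do not reflect any real product’s control logic.

```python
# Hypothetical cooling-mode selector. Thresholds are illustrative, not a
# real controller's setpoints.

def select_cooling_mode(dry_bulb_c: float, relative_humidity: float) -> str:
    if dry_bulb_c <= 18:
        return "dry (economizer)"      # ambient air alone; no water used
    if dry_bulb_c <= 27 and relative_humidity < 0.60:
        return "adiabatic"             # pre-cool intake air with light misting
    if dry_bulb_c <= 35:
        return "evaporative"           # full evaporative cooling
    return "mechanical assist"         # fall back to compressor-based trim

print(select_cooling_mode(12, 0.50))   # dry (economizer)
print(select_cooling_mode(24, 0.40))   # adiabatic
print(select_cooling_mode(33, 0.70))   # evaporative
```

In practice a controller would evaluate this continuously, which is how a facility rides out hourly and seasonal swings in temperature and humidity.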

JAIME LEVERTON, iMasons and eStruxture Data Centers

Jaime Leverton: As scalable and affordable cloud solutions are becoming critical for companies of all sizes and especially enterprise-level, there is intense demand for these services. Many enterprises seek to leverage the benefits of applications that rely on AI, Machine Learning, IoT – all of which come with large amounts of data that need to be processed in real-time. This, in turn, triggers hyperscale users and cloud providers to expand at an unprecedented pace in order to meet customer demand in a time- and cost-efficient manner. Hyperscalers and cloud providers are undeniably the major growth drivers in the wholesale data center segment.

As a wholesale data center provider, we believe the keys to success are:

  • Speed and scale. For hyperscalers and cloud providers to be able to accommodate the current onslaught of data, they need immediate access to scalable capacity and fast deployment. Modular data center facilities that can rapidly scale and adapt are the best option.
  • Edge locations. Being physically located close to the end-user is paramount. IoT in particular is changing the way data is transferred. Instead of data flowing from a central data center to the enterprise, massive amounts of smaller data packets are now sent to edge locations for processing, which brings us to our next point.
  • Connectivity and High-Availability. Inbound data center bandwidth requirements are growing exponentially. All the latest technologies come with ultra-low latency requirements and an insatiable demand for bandwidth.
  • Cost-efficiency. The economies of scale achieved by leveraging wholesale data center providers are simply unmatched.
  • Sustainability. Environmental sustainability has become a major competitive priority for all major cloud providers as they try to win the confidence of customers and governments alike. Facilities that are highly energy-efficient and are powered by clean, renewable energy are best suited to serve this market.
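The edge-location point above can be illustrated with a toy routing decision: send each workload to the reachable site with the lowest measured latency rather than to a distant central facility. The site names and latency figures below are invented for illustration.

```python
# Hypothetical edge-site selection by measured round-trip latency (ms).
# Site names and numbers are invented, not a real provider's footprint.

EDGE_SITES = {
    "toronto-edge": 8.0,
    "chicago-edge": 21.0,
    "ashburn-core": 43.0,   # central facility: farthest, highest latency
}

def pick_site(latencies_ms: dict[str, float]) -> str:
    """Choose the site with the lowest round-trip latency."""
    return min(latencies_ms, key=latencies_ms.get)

print(pick_site(EDGE_SITES))   # toronto-edge
```

Real traffic steering (anycast, DNS-based, or client-side probing) is far more involved, but the underlying preference for the nearest viable site is the same.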


JUAN FONT, CoreSite

Juan Font: Over the last couple of decades, we have seen the evolution towards larger and higher density requirements. That was the main driver for our strategy evolution from just owning and operating carrier hotels into developing large-scale campuses tethered with high-count fiber to our interconnection hubs. This concept was first executed with LA2 in Los Angeles (which is tethered to LA1/One Wilshire), and now you can see it throughout our portfolio with multi-data center campuses in places like Chicago, Santa Clara or Virginia.

Hyperscale, however, is a more recent phenomenon driven primarily by the massive infrastructure required to build Cloud Service Provider (CSP) availability zones, as well as the exponential growth of the digital economy as manifested by social media, SaaS providers and gaming applications, for example. This has significantly altered the landscape in markets like Northern Virginia, which has recorded unprecedented levels of absorption and the corresponding construction to satisfy demand.

With significant ground-up development over the last couple of years, CoreSite is well positioned with vast amounts of contiguous available and developable capacity in nearly all of our markets. Our focus is on hyperscale opportunities that would benefit from superior performance, latency or edge proximity characteristics.


PHILLIP MARANGELLA, EdgeConneX

Phillip Marangella: EdgeConneX could be considered one of those providers. As a pioneer of Edge data centers with a global platform, we have recently built several large hyperscale facilities in a number of core data center markets around the world. The reason for this evolution is that our web and hyperscale customers define their Edge in terms of locations, scale, and density. EdgeConneX simply enables their Edge by building what they want, where they want it and when they want it.

The market is evolving as both ends of the spectrum rapidly scale up and out: more investment in core infrastructure on one hand, and rapid growth in more distributed Edge deployments on the other. Therefore, the ability to support the whole spectrum of data center requirements, from hyperlocal to hyperscale, in an integrated and seamless fashion is a key to future success.


ANGIE MCMILLIN, Vertiv

Angie McMillin: We are definitely seeing growth in the hyperscale market, and this sector does have some distinct characteristics that suppliers need to be prepared to support. Like other sectors, they are very focused on cost and efficiency, but they are more willing to push the envelope through experimentation and innovation to achieve aggressive goals in these areas.

For the most part, they are innovation-driven organizations, and that is reflected in their approach to data center design and operation. To be successful, suppliers must exhibit a high degree of expertise and agility when supporting this market.

The other thing that is important is being able to support their goal of achieving consistency in how data centers are designed and operated globally. Globally standardized solutions and global service capabilities have become more important as these organizations have continued to expand.

NEXT: Trends in rack power density and the impact of AI hardware.


About the Author

Rich Miller

I write about the places where the Internet lives, telling the story of data centers and the people who build them. I founded Data Center Knowledge, the data center industry's leading news site. Now I'm exploring the future of cloud computing at Data Center Frontier.


