Most major cloud computing campuses have been closer to cows than cities, helping create data center building booms in rural areas of Oregon, Iowa and North Carolina. That’s starting to change, as cloud growth and new workloads push data storage closer to end users.
The result: More huge data centers are coming to the suburbs of major American cities, shifting servers closer to consumers in places like Dallas, Chicago, Atlanta and Phoenix.
The shift has been gradual thus far, but will gain momentum in coming years as cloud computing adapts its architecture to serve larger audiences and real-time applications.
This trend is driven by latency – the amount of time it takes data to move across the Internet to users, and a key factor in application performance. As cloud customers deploy new data-intensive applications, remote server farms are no longer enough.
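As a rough illustration of why proximity matters (the distances and speeds below are assumptions for the sketch, not figures from this article): signals in optical fiber travel at roughly two-thirds the speed of light, so distance alone puts a hard floor under round-trip time, before any routing or processing overhead.

```python
# Back-of-envelope propagation delay: illustrative only.
# Assumes signal speed in fiber ~ 2/3 the speed of light and
# ignores routing hops, queuing, and protocol overhead.

SPEED_OF_LIGHT_KM_S = 299_792          # km per second, in vacuum
FIBER_SPEED_KM_S = SPEED_OF_LIGHT_KM_S * 2 / 3

def min_rtt_ms(distance_km: float) -> float:
    """Lower bound on round-trip time over fiber, in milliseconds."""
    one_way_seconds = distance_km / FIBER_SPEED_KM_S
    return 2 * one_way_seconds * 1000

# A user ~1,000 km from a rural cloud campus pays about 10 ms of
# round-trip propagation delay before a server does any work at all;
# a suburban facility ~50 km away pays about half a millisecond.
for distance in (50, 1000):
    print(f"{distance} km -> {min_rtt_ms(distance):.2f} ms minimum RTT")
```

For interactive and real-time applications, those milliseconds compound across every request, which is why moving capacity from remote campuses to metro suburbs pays off.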
“Latency is becoming problematic,” said Phill Lawson-Shanks, the Chief Innovation Officer at EdgeConneX, which focuses on the edge computing market but also builds facilities for major cloud players. “It’s forcing a shift, with the hyperscalers now moving services closer to the users. The cloud guys built their data centers where power was cheap. Now latency is a huge issue.
“We’re now seeing regional hyperscale nodes of 20 to 60 megawatts in Ashburn and Chicago,” he continued. “Even if it’s a 60 megawatt cloud data center, if it’s serving that content locally, it’s an edge data center.”
Here’s a look at how this trend is impacting major data center markets, and the competitive landscape.
Latency Alters Cloud Architecture
Most hyperscale players have deployed data center capacity at cloud campuses, often in rural areas with easy access to cheap land and power.
These centralized data center hubs offer economies of scale, enabling companies to rapidly add server capacity and electric power as more workloads shift from in-house IT rooms into these massive server farms. It has become routine for companies like Apple, Google, Facebook and Microsoft to invest more than $1 billion in a single location where they place a cloud campus.
The growth of cloud campuses in the suburbs is part of a larger densification of America’s IT infrastructure, which will feature data centers in many new places. This digital transformation will create layers of distributed infrastructure, from the core to the edge of the network.
For cloud computing platforms, this means bigger facilities in historic data center hubs like Northern Virginia, Chicago and Silicon Valley. But it is also prompting historic data center building booms in markets like Dallas, Phoenix and Atlanta. In recent months, a flurry of hyperscale announcements suggests that developers bringing new projects to those markets will be rewarded.
The Build vs. Buy Equation Shifts
Major players like Google, Facebook and Microsoft have always stored some of their data near major cities, but usually in smaller volumes. As a result, they often elect to lease space from data center developers rather than build their own campuses. As cloud demand shifts workloads closer to population centers, larger data center footprints are needed in these markets.
That is a factor in recent land purchases by Microsoft in Silicon Valley and Google in Northern Virginia, markets where they have previously leased space. The message: The larger scale needed to provide low-latency services to these markets has altered the economics, making it more cost-effective for these tech titans to build rather than buy their data center capacity.
Microsoft also has a 30-megawatt build-to-suit project in suburban Chicago, according to Jim Kerrigan, Managing Director at North American Data Centers.
“They could build that in Iowa, but they didn’t,” said Kerrigan. “For the cloud guys, the edge is in Chicago.”
Building Booms Lift Key Markets
Lawson-Shanks says Phoenix and Dallas are two markets positioned to benefit from latency-driven cloud buildouts.
Developers’ plans for the Phoenix market reflect enormous optimism about future demand.
Phoenix is currently home to about 210 megawatts of commissioned data center space. Data center developers have 707 megawatts of capacity on the planning board, more than any market except Northern Virginia. Much of this capacity will be built over time, as demand evolves and buildings are pre-leased.
The surging demand in Phoenix has also attracted new players, and prompted expansions by existing providers. Iron Mountain, EdgeCore, CyrusOne, QTS Data Centers, Digital Realty, EdgeConneX and Aligned Data Centers have all announced new campuses or expansions of existing properties.
As more players look for land to serve Phoenix, a new sub-market has emerged in Mesa, which has seen an influx of new projects. (For more, see the DCF Phoenix Data Center Market Report, a free download).
The trend is also shaping up as good news for data centers in Dallas, where nearly all the major wholesale players are building large campuses in the city’s northern suburbs. The Dallas-Fort Worth metroplex hasn’t historically been a huge hyperscale market, with growth being driven by enterprise demand. Some insiders say that hyperscale companies are seeking larger requirements, a contention borne out in last week’s announcement that Google has purchased 375 acres in Midlothian, Texas for a new data center.
Another notable beneficiary of this trend is Atlanta, the largest population center in the Southeast U.S., which has seen an influx of huge data center projects. The Atlanta market has a modest absorption of colocation space – typically between 6 and 10 megawatts a year. But some of the industry’s largest players expect Atlanta to emerge as a large market for wholesale data center space.
Those hopes were confirmed when Facebook announced plans to build a huge cloud campus in Newton County, an area east of Atlanta.
Switch is planning The Keep, a 1 million square foot data center in Douglas County, where CyrusOne has unveiled plans for a 50 megawatt project.
“It’s not hard to imagine Atlanta following in the footsteps of Northern Virginia,” said Kevin Timmons, CTO of CyrusOne.
An Alternative Solution: Undersea Data Centers
To understand the importance of latency and proximity to users, you need only check out a livecam on the web site of Microsoft’s Project Natick, which shows fish swimming past the exterior of a data center submerged off the coast of Scotland, 117 feet below the ocean’s surface.
“Moving data centers to the ocean made a great amount of sense to be able to make the cable to our customers as short as possible,” said Microsoft Research Engineer Jeff Kramer. “Natick could have a lot of impact, both currently and into the future.”
Multiple submerged modules could be aggregated into pods to create a server farm of up to 20 megawatts or more, said Ben Cutler, Microsoft’s Project Manager with the Natick team, which could then provide low-latency access to large populations of cloud users living close to the shoreline.
“We are a coastal society,” said Cutler. “We like the ocean. By deploying these in the ocean, we can reach more people more easily than the land-based data centers of today.”
AI Brings Real-Time Data Crunching
What does this shift to edge campuses in the suburbs look like on dry land?
For Facebook, it involves a broader distribution of the company’s growing armada of servers using GPUs (graphical processing units) to crunch data for artificial intelligence. The company initially concentrated its GPU compute pools in a single data center.
That changed as Facebook began to infuse its products and services with AI, using its compute power to boost its ability to personalize its newsfeed and ads on the fly.
“Due to the increased adoption of Deep Learning across multiple products, including ranking, recommendation, and content understanding, locality between the GPU compute and big data increased in importance,” Facebook engineers wrote in a research paper.
This pattern will repeat itself in the data center networks of companies building AI into their operations. Many end-user benefits of AI involve decisions made in real-time, requiring compute power close at hand. Some of this data-crunching horsepower will reside on devices or edge data centers. A notable example is the autonomous vehicle, with significant on-board processing power. But other applications will require larger pools of storage and compute capacity, accessible over a low-latency connection.
New Opportunities for Familiar Names
As the cloud moves into the suburbs, it has expanded the opportunities for data center companies, including several providers who previously had not targeted the hyperscale sector.
One example is EdgeConneX, an edge computing specialist known for rapidly creating a national network of unmanned data centers in regional markets, helping solve the “Netflix problem” by caching content to reduce massive flows of network traffic.
The company’s construction prowess caught the attention of a hyperscale provider, and EdgeConneX has since built large data centers in the suburbs of Frankfurt, Amsterdam, Dublin and Chicago.
The trend has also been one of the factors in a change in strategy for Equinix, the leading player in colocation and interconnection in major Internet markets.
Equinix has created a hyperscale infrastructure division, which is building a tier of cloud-centric facilities in which hyperscale companies locate 3 to 5 megawatt deployments in interconnection-focused markets.
“We’re hearing those hyperscalers say ‘Look, we really would like you to step up and provide some of those elements of our architecture in key locations for us as in our partner of choice, as an infrastructure partner of choice for us,’ ” said Meyers. “The response from those customers has been extremely positive.”