Google: New Algorithm Will Make Our Cloud Platform Faster

July 20, 2017
Google Cloud Platform is implementing a cutting-edge algorithm to reduce network congestion, which it says will offer improved application performance for customers.

Google is wielding its expertise in network optimization in the cloud platform wars. The company is enhancing its Google Cloud Platform services with a cutting-edge algorithm to reduce network congestion, which should offer improved application and web site performance for customers.

Google developed the technology, known as BBR (short for “Bottleneck Bandwidth and Round-trip propagation time”), and says it has accelerated performance for its in-house properties. “Deploying BBR has resulted in higher throughput, lower latency, and better quality of experience across Google services, relative to the previous congestion control algorithm, CUBIC,” said Neal Cardwell, senior staff software engineer.

It’s the latest in a series of moves by Google to boost the competitive position of Google Cloud Platform (GCP) in the high-stakes battle for cloud dominance. Earlier this week it unveiled the Google Transfer Appliance, which can be used to physically transfer large volumes of data to GCP. Google’s chief rivals – Amazon Web Services, Microsoft and Oracle – continue to roll out new features as well.

Speed As a Competitive Metric

In seeking to differentiate its cloud, Google is leveraging its reputation for fast response time – which Google users experience first-hand every time they type a search query.

“At Google, our long-term goal is to make the Internet faster,” Cardwell writes in a blog post from the Google Cloud team announcing the BBR rollout. “Over the years, we’ve made changes to make TCP faster, and developed the Chrome web browser and the QUIC protocol. BBR is the next step.”

Google isn’t alone in this effort. All the major hyperscale players invest heavily in optimizing their networks. Amazon Web Services is developing custom semiconductors to accelerate its cloud network, fine-tuning chips to move data faster between its data centers. Facebook has built a dedicated network to manage the huge data flows of machine-to-machine (M2M) traffic between its facilities.

The major cloud builders are also interested in next-generation networking technologies like silicon photonics, which was among the demos at the Open Compute Project summit, an event showcasing new hardware for the hyperscale crowd.

Several Benefits for Customers

Early users of BBR on the Google Cloud say they have seen a difference in performance.

“BBR allows the 500,000 WordPress sites on our digital experience platform to load at lightning speed,” said Jason Cohen, Founder and CTO of WP Engine. “According to Google’s tests, BBR’s throughput can reach as much as 2,700x higher than today’s best loss-based congestion control; queueing delays can be 25x lower. Network innovations like BBR are just one of the many reasons we partner with GCP.”

Google says customers can automatically benefit from BBR in two ways:

  • Traffic Movement Within Google Cloud: First, when GCP customers talk to GCP services like Cloud Bigtable, Cloud Spanner, or Cloud Storage, the traffic from the GCP service to the application is sent using BBR. This means speedier access to your data.
  • Traffic Movement from Google Cloud to Internet users: When a GCP customer uses Google Cloud Load Balancing or Google Cloud CDN to serve and load balance traffic for their web site, the content is sent to users’ browsers using BBR. This means faster web page downloads for users of your site.

Google says implementing BBR improved YouTube network throughput by 4 percent on average globally, and by more than 14 percent in some countries. “These represent substantial improvements for all large user populations around the world, across both desktop and mobile users,” said Cardwell. “These results are particularly impressive because YouTube is already highly optimized; improving the experience for users watching video has long been an obsession here at Google.”

What Exactly is BBR?

Congestion control algorithms, which run inside every computer, phone, and tablet, determine how fast a device should send data. The Internet has largely used loss-based congestion control, relying on indications of lost packets as the signal to slow down.
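To make the loss-based approach concrete, here is a minimal, illustrative sketch of the Reno-style additive-increase/multiplicative-decrease idea – a toy example under simplified assumptions, not the CUBIC code actually running in production kernels:

```python
# Illustrative sketch of loss-based congestion control (Reno-style AIMD).
# A toy under simplified assumptions, not the production CUBIC algorithm.

def update_cwnd(cwnd: float, acked: bool, loss_detected: bool) -> float:
    """Return the new congestion window, in segments, after one event."""
    if loss_detected:
        # A lost packet is read as a congestion signal: back off sharply.
        return max(cwnd / 2.0, 1.0)
    if acked:
        # Otherwise grow slowly: roughly one extra segment per round trip.
        return cwnd + 1.0 / cwnd
    return cwnd
```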

“We need an algorithm that responds to actual congestion, rather than packet loss,” Cardwell writes. “BBR targets this with a ground-up rewrite of congestion control. We started from scratch, using a completely new paradigm: to decide how fast to send data over the network, BBR considers how fast the network is delivering data. For a given network connection, it uses recent measurements of the network’s delivery rate and round-trip time to build an explicit model that includes both the maximum recent bandwidth available to that connection, and its minimum recent round-trip delay. BBR then uses this model to control both how fast it sends data and the maximum amount of data it is willing to allow in the network at any time.”
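A toy sketch can make the model Cardwell describes concrete. The class, window length, and gain values below are illustrative assumptions rather than BBR's actual implementation: the sender records recent delivery-rate and round-trip-time samples, estimates the bottleneck bandwidth (maximum recent rate) and propagation delay (minimum recent RTT), and derives from them a pacing rate and a cap on data in flight.

```python
from collections import deque

class BBRModelSketch:
    """Toy model in the spirit of Cardwell's description -- not BBR's
    actual implementation. Tracks recent delivery-rate and RTT samples and
    derives a pacing rate plus an in-flight cap from their product
    (the bandwidth-delay product)."""

    def __init__(self, window: int = 10):
        self.rate_samples = deque(maxlen=window)  # delivery rates, bytes/sec
        self.rtt_samples = deque(maxlen=window)   # round-trip times, seconds

    def on_ack(self, delivery_rate: float, rtt: float) -> None:
        """Record one measurement taken when an acknowledgment arrives."""
        self.rate_samples.append(delivery_rate)
        self.rtt_samples.append(rtt)

    def bottleneck_bw(self) -> float:
        return max(self.rate_samples, default=0.0)  # max recent bandwidth

    def min_rtt(self) -> float:
        return min(self.rtt_samples, default=0.0)   # min recent round-trip delay

    def pacing_rate(self, gain: float = 1.0) -> float:
        # How fast to send: pace at (a gain times) the estimated bottleneck rate.
        return gain * self.bottleneck_bw()

    def inflight_cap(self, gain: float = 2.0) -> float:
        # How much data to allow in the network at once: a multiple of the
        # bandwidth-delay product (bottleneck bandwidth x minimum RTT).
        return gain * self.bottleneck_bw() * self.min_rtt()

# Example: a 10 Mbps delivery-rate sample with a 40 ms round trip yields a
# ~1.25 MB/s pacing rate and a ~100 KB in-flight cap.
model = BBRModelSketch()
model.on_ack(delivery_rate=1.25e6, rtt=0.040)
print(model.pacing_rate(), model.inflight_cap())
```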

While Google is leveraging its leadership in web performance, it is playing catch-up in other areas, seeking to match existing features offered by other cloud platforms. A case in point: the Google Transfer Appliance, which follows Amazon's earlier introduction of its Snowball appliance.

Shipping Data to the Cloud

Both services address an ancient problem in computing – how to move huge amounts of data without clogging the network pipes. This problem was famously described by computer scientist Andrew Tanenbaum, who counseled to “never underestimate the bandwidth of a station wagon full of tapes hurtling down the highway.”

Several form factors for the Google Transfer Appliance, which allows customers to ship large volumes of data to be ingested by the Google Cloud Platform. (Photo: Google)

“Working with customers, we’ve found that the typical enterprise has many petabytes of data, and available network bandwidth between 100 Mbps and 1 Gbps,” writes Ben Chong, Google’s product manager for Transfer Appliance. “Depending on the available bandwidth, transferring 10 PB of that data would take between three and 34 years — much too long. Sometimes the best way to move data is to ship it on physical media.”
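A quick back-of-the-envelope check, assuming decimal petabytes and a fully utilized link (which real transfers rarely achieve), lands in the same range Chong describes:

```python
# Rough transfer-time estimate for 10 PB over the bandwidths Chong cites.
# Assumes decimal units and a perfectly utilized link; protocol overhead and
# competing traffic push real-world times toward the higher end of the range.

TEN_PETABYTES_BITS = 10 * 1000**5 * 8

for label, bits_per_second in [("100 Mbps", 100e6), ("1 Gbps", 1e9)]:
    years = TEN_PETABYTES_BITS / bits_per_second / (365 * 24 * 3600)
    print(f"{label}: ~{years:.1f} years")
# Roughly 25 years at 100 Mbps and 2.5 years at 1 Gbps -- the same order of
# magnitude as the 3-to-34-year figure in the quote above.
```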

Transfer Appliance slides into a standard 19-inch rack. With a capacity of up to one petabyte of compressed data, Transfer Appliance helps migrate data faster than sending it over a typical network. The appliance encrypts customer data at capture, and the data isn't decrypted until it reaches its final cloud destination.

“Like many organizations we talk to, you probably have large amounts of data that you want to use to train machine learning models,” said Chong. “You have huge archives and backup libraries taking up expensive space in your data center. Or IoT devices flooding your storage arrays. There’s all this data waiting to get to the cloud, but it’s impeded by expensive, limited bandwidth. With Transfer Appliance, you can finally take advantage of all that GCP has to offer — machine learning, advanced analytics, content serving, archive and disaster recovery — without upgrading your network infrastructure or acquiring third-party data migration tools.”

About the Author

Rich Miller

I write about the places where the Internet lives, telling the story of data centers and the people who build them. I founded Data Center Knowledge, the data center industry's leading news site. Now I'm exploring the future of cloud computing at Data Center Frontier.
