SAN JOSE, Calif. – The hyperscale data center of the future will run on 48-volt DC power, according to Google, which unveiled the custom design powering its servers and joined the Open Compute Project to evangelize this vision to the world.
“To get the efficiency in cost and power, you have to feed 48 volts to the motherboard and convert only once,” said Urs Hölzle, VP of technical infrastructure at Google. “This is something we’ve deployed at scale. This isn’t an experimental system.”
Hölzle said Google is already working with Facebook on a rack design that it will contribute to the Open Compute Project, and which it hopes will establish a standard for the use of 48V power in large data centers. “We want the whole industry to be able to use it,” said Hölzle.
The Open Compute Project now boasts an extraordinary alignment of the largest players in the hyperscale data center industry, with Google, Facebook and Microsoft all on board. The only major holdout is Amazon Web Services, which has made only limited disclosures about the hardware technology powering its cloud computing platform.
Targeting the Power Chain
Google has a long history of innovation in data center power distribution, including the use of an on-board battery for its servers, which eliminates a centralized UPS system. By simplifying the path of electricity from the power grid to the server, Google has eliminated steps that can waste power, including conversions between AC and DC and stepping down to lower voltages. In bringing higher voltage to the motherboard, Google is taking an additional step to streamline this process.
Hölzle said the move can reduce energy losses by as much as 30 percent compared with the traditional approach of delivering 12-volt power to the motherboard. He said Google sometimes deploys 12V servers in the 48V racks, but then has to add a DC-to-DC conversion at the tray level.
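The intuition behind that figure is that losses compound across every conversion stage in the power chain, so removing a stage cuts total loss more than the stage's own inefficiency might suggest. The sketch below illustrates the arithmetic with hypothetical per-stage efficiency numbers chosen for illustration only; they are not Google's published figures.

```python
# Illustrative comparison of cumulative power-chain losses:
# a multi-stage 12V distribution path versus a 48V path with a
# single on-board conversion. All efficiency values below are
# hypothetical assumptions, not measured or published numbers.

def chain_efficiency(stage_efficiencies):
    """Overall efficiency of conversion stages in series."""
    eff = 1.0
    for e in stage_efficiencies:
        eff *= e
    return eff

# Hypothetical 12V path: AC-to-DC rectification, 12V bus
# distribution, then 12V point-of-load regulation.
path_12v = [0.95, 0.96, 0.93]

# Hypothetical 48V path: AC-to-DC rectification, then one
# 48V-to-point-of-load conversion on the motherboard.
path_48v = [0.95, 0.943]

loss_12v = 1 - chain_efficiency(path_12v)
loss_48v = 1 - chain_efficiency(path_48v)

print(f"12V path loss:  {loss_12v:.1%}")
print(f"48V path loss:  {loss_48v:.1%}")
print(f"Loss reduction: {1 - loss_48v / loss_12v:.0%}")
```

With these assumed numbers the shorter chain loses roughly a third less energy, which is the same order as the reduction Hölzle described.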
It’s not a revolutionary idea, as 48V power has been widely used in the telecom industry for years. The data center industry has been discussing the benefits of bringing higher voltages to the rack for many years, but the active debate about voltage options, deployment challenges and safety issues has prevented any consensus around a new approach.
Google did what it has done for more than a decade – tested the concept in its own infrastructure, built it itself, and deployed it at scale.
A Big Vote for Open Compute
Google is also contributing a separate design for a shallower rack. While servers have standard widths and heights, their depth has varied widely across different servers and vendors. Many Open Compute servers are deeper than those that Google uses.
“This is something we need,” said Hölzle. “We cannot currently deploy Open Compute racks in our data centers. It’s likely that server designs can be configured for both form factors.”
Hölzle rejects the idea that Google has been secretive with its in-house designs, noting that it has often shared designs through its data center efficiency summits and blog posts. He said that the decision to join Open Compute extends that commitment and creates additional scale for open hardware solutions.
“We think this will be something that saves users money,” said Hölzle.