Data centers continue to be energy-intensive hubs, driven by the explosive growth in big data, digital content, e-commerce, social media networks, mobile networks, and cloud computing.
Traditionally, mainstream data centers comprise hundreds, sometimes thousands, of racks of independent server machines, almost exclusively using 12 volt power distribution. The challenge with this approach is that a fixed physical architecture places inherent limits on what data centers can do with a finite set of resources. And while technologies have been layered on over time in an attempt to keep pace with rising levels of computing, the result is costly, complex systems with over-provisioned physical configurations and siloed management.
Open standards enable data center infrastructure to be reimagined to efficiently support the growing demands of next-generation computing.
The need for higher-performance data and digital services is expected to continue growing over the coming years: global internet traffic is predicted to reach 4.2 zettabytes per year (4.2 trillion gigabytes), the number of mobile internet users is projected to rise to five billion by 2025, and the number of Internet of Things (IoT) connections is expected to double from 12 billion to 25 billion over the same period.
By providing an open platform, organizations can overcome the limitations of fixed resources in existing infrastructures and achieve true interoperability between cloud and traditional models. Not only does this enable the integration of heterogeneous systems, it also removes vendor-imposed boundaries by improving data exchange, creating a scalable yet sustainable infrastructure that meets the continuous demand for more power alongside pressure to keep costs down.
To accommodate this move to an open compute platform, support the needs of the next generations of high performance processors and improve power efficiency, data centers are transitioning to 48 volt power distribution.
It takes a tremendous amount of energy to power data centers, and the sector's power consumption is only set to grow as demand for more data centers increases. Already, data centers use an estimated 200 terawatt hours (TWh) each year to power critical IT systems as well as supplementary equipment such as lights, cooling systems, monitors and humidifiers.
Moving rack power from 12 volt to 48 volt reduces the current draw for the same input power by a factor of four. Because conduction loss scales with the square of the current (I²R), this translates to 16 times lower distribution losses. In principle, this enables data centers to become more economical, more flexible, easier to manage and easier to scale out on demand, ultimately reducing the total cost of ownership through significantly better thermal performance, optimized efficiency and increased power density.
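The arithmetic above can be checked with a short sketch. The rack load and busbar resistance below are illustrative assumptions, not measured figures; the point is only that quadrupling the distribution voltage quarters the current and cuts I²R loss by a factor of 16.

```python
# Illustrative sketch (assumed values): compare I^2 * R distribution losses
# when delivering the same power over the same bus resistance at 12 V vs 48 V.

def distribution_loss(power_w: float, voltage_v: float, resistance_ohm: float) -> float:
    """Conduction loss P_loss = I^2 * R, with current I = P / V."""
    current_a = power_w / voltage_v
    return current_a ** 2 * resistance_ohm

POWER_W = 12_000     # hypothetical rack load (assumption)
BUS_R_OHM = 0.002    # hypothetical busbar resistance (assumption)

loss_12v = distribution_loss(POWER_W, 12.0, BUS_R_OHM)   # 1000 A -> 2000 W lost
loss_48v = distribution_loss(POWER_W, 48.0, BUS_R_OHM)   #  250 A ->  125 W lost

print(loss_12v / loss_48v)  # → 16.0
```

Whatever values are assumed for load and resistance, the ratio is always (48/12)² = 16, which is why the benefit holds across rack configurations.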
As organizations leverage big data to stay close to consumers, this shift is being seen not only in large data centers, but also in diverse markets where small and midscale enterprises, and even retailers, are migrating from the traditional approach to data management and storage toward highly scalable edge systems. Consider a retailer with an IT footprint equal to that of any large enterprise, with servers located in thousands of stores and distribution centers across the country. This widespread footprint means that each store acts as a mini data center, and the retailer can take advantage of open standards, such as the Open Compute Project (OCP), adapted for the needs of edge computing rather than the core data center, to support a growing number of retail applications.
Brian Korn is the Vice President of Data Center Computing at Advanced Energy and brings broad experience in embedded power solutions for data center computing, hyperscale, telecom and network products. Contact Advanced Energy to learn more about how its OCP-compliant platform brings interoperability, enhanced reliability and energy savings to compute and storage applications in hyperscale and enterprise data centers and edge computing deployments.