Intel IT conducted a technology evaluation of Intel® Power Node Manager and Intel® Data Center Manager (Intel® DCM). Our goals were to assess the potential of these Intel® power management technologies to increase data center energy efficiency, and to validate potential usage models.
We conducted our evaluation in a test environment representing a virtualized data center, using servers based on Intel® Xeon® processor 5500 series.
We successfully used Intel Power Node Manager and Intel DCM to monitor and cap power consumption across individual servers and groups of servers. For workloads that were not processor-intensive, we reduced server power consumption by up to approximately 20 percent without impacting performance, as shown in Figure 1.
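Group-level capping of the kind described above requires dividing a group power budget among individual servers. The sketch below is purely illustrative logic, not Intel DCM's actual allocation algorithm: it assigns each server a cap proportional to its measured power draw, with a per-server floor (the function name, numbers, and floor value are assumptions for the example).

```python
# Illustrative sketch only (not Intel DCM's algorithm): split a
# group-level power budget among servers in proportion to each
# server's measured power draw, respecting a per-server floor.

def allocate_group_budget(measured_watts, group_budget, floor_watts=100):
    """Return a per-server power cap (watts) for each measured reading.

    Caps are proportional to measured demand, but never fall below
    floor_watts so a lightly loaded server can still operate.
    """
    total = sum(measured_watts)
    caps = [max(floor_watts, group_budget * w / total) for w in measured_watts]
    # If the floors pushed the total over budget, scale the
    # above-floor portion of each cap back down.
    excess = sum(caps) - group_budget
    if excess > 0:
        headroom = [c - floor_watts for c in caps]
        total_headroom = sum(headroom)
        caps = [c - excess * h / total_headroom
                for c, h in zip(caps, headroom)]
    return caps

# Three servers drawing 250 W, 180 W, and 120 W share a 500 W budget.
caps = allocate_group_budget([250, 180, 120], group_budget=500)
```

In this example the three caps sum to the 500-watt group budget, with the busiest server receiving the largest share.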
Power monitoring is a critical capability that enables us to characterize workloads and identify opportunities to increase data center energy efficiency. Our evaluation showed that Intel power management technologies can address key data center power and cooling challenges, helping to increase computing capacity, reduce power consumption, and maintain business continuity.
Intel IT, like other organizations, faces significant data center power and cooling challenges. Rapid growth in demand drives a continual need for more computing resources. This is straining the limits of data center power and cooling capacity. At the same time, power and cooling costs are becoming an increasingly important component of total cost of ownership (TCO).
Ways to accommodate the increasing demand include building new data centers or adding power and cooling capacity to existing data centers. However, both options are extremely expensive and take a long time to complete.
Because of this, we are increasingly applying alternative approaches that focus on using existing data center power more efficiently in order to increase computing capacity, cut power costs, and reduce Intel’s carbon footprint.
Traditionally, because IT organizations have lacked detailed information about actual server power consumption in everyday use, data center computing capacity has been based on nameplate power, peak server power consumption, or derated power. In practice, server power consumption with real data center workloads is nearly always lower than these figures. The result is overprovisioned data center power, overcooling of IT equipment, and increased TCO.
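A small worked example shows how much capacity nameplate-based planning can leave stranded. The wattage figures below are illustrative assumptions, not Intel's measurements:

```python
# Worked example with illustrative numbers (not Intel's measurements):
# how many servers fit in a fixed rack power budget when capacity is
# planned from nameplate power versus measured peak power.

RACK_BUDGET_W = 8000   # assumed power budget for one rack
NAMEPLATE_W = 650      # label rating on the server power supply
MEASURED_PEAK_W = 320  # observed peak under real workloads

servers_by_nameplate = RACK_BUDGET_W // NAMEPLATE_W    # 12 servers
servers_by_measured = RACK_BUDGET_W // MEASURED_PEAK_W  # 25 servers
```

Under these assumptions, planning from measured peak power roughly doubles the number of servers the same rack budget can support, which is why the power monitoring capability discussed above matters for capacity planning.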