The Impact of Energy Star Ratings on Data Center IT Equipment

March 28, 2016
In 2016 the Energy Star program will expand beyond servers to include storage systems and large network equipment for data centers. While not all IT equipment will be Energy Star rated, the rating represents an energy cost reduction factor to consider when making IT purchasing decisions.

In the past, most IT equipment power loads did not decrease much even when idle, representing a huge direct waste of IT energy, as well as creating a heat load that required (and wasted) cooling system energy. In 2009 the EPA released the first Energy Star for Servers program, which defined a series of energy usage and efficiency requirements. These requirements focused on increasing overall server energy efficiency, lowering overall power, and especially the power drawn while at idle.

The Energy Star program now also includes storage systems, and the large network equipment specification is expected to be finalized in 2016. While not all IT equipment is Energy Star rated, the rating is an energy cost reduction factor for IT buyers to weigh when making purchasing decisions.


This article is the fourth in a series on data center cooling taken from the Data Center Frontier Special Report on Data Center Cooling Standards (Getting Ready for Revisions to the ASHRAE Standard).

Fan Power and Noise Levels versus Intake Temperature

While operating temperatures and energy efficiency have been the primary focus of the industry, noise in the data center is a long-standing and growing issue. In particular, high-density IT equipment such as high-power 1U servers and blade servers is the primary source of the increased noise, due to its higher airflow requirements.

While new IT equipment energy efficiency designs have minimized fan speeds at low intake temperatures and CPU loads, the internal thermal management systems will still increase fan speeds significantly as intake temperatures and CPU loads rise. This can significantly increase both the power the IT fans draw and the level of noise.

ASHRAE charts show that for A2 servers, increasing intake temperatures from 59°F to 95°F could increase airflow requirements by up to 250%, with a similar increase in fan noise. This could raise server power by up to 20% (note that these are maximum projections, based on an anonymized composite of vendor data ranging from 7-20%, so check with your IT equipment manufacturer for specific performance).

ASHRAE has been aware of the rising noise issue for many years, and cites fan laws that generally predict the sound power level of an air-moving device increases with the fifth power of rotational speed. Fan noise, energy, airflow and the related fan affinity laws can be the subject of quite complex engineering studies. However, without delving into too many technical calculations, the example provided in the 2012 guidelines postulates that a 3.6°F (2°C) increase in IT intake temperature (to save cooling system energy) would result in an estimated 20% increase in fan speed (e.g., 3,000 to 3,600 rpm). This would equate to a 4 dB increase in fan noise, and it would not be unreasonable to expect increases in the range of 3 to 5 dB if temperatures are raised from 68°F to 72°F – still within the "recommended" range.
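The ASHRAE figures above can be checked directly against the fifth-power fan law. As a rough sketch (the function name and exponent are taken from the rule of thumb cited here, not from any vendor tool), the change in sound power level in decibels works out to 50·log10 of the speed ratio:

```python
import math

def fan_noise_increase_db(rpm_old: float, rpm_new: float) -> float:
    """Estimate the change in sound power level (dB) for a fan speed change.

    Fan affinity laws suggest sound power scales with roughly the 5th power
    of rotational speed, so the level change in decibels is
    10 * log10((rpm_new / rpm_old) ** 5) = 50 * log10(rpm_new / rpm_old).
    """
    return 50 * math.log10(rpm_new / rpm_old)

# The ASHRAE example: a 20% speed increase, 3,000 -> 3,600 rpm
delta = fan_noise_increase_db(3000, 3600)
print(f"{delta:.1f} dB")  # ~4.0 dB, matching the guideline's estimate
```

A 20% speed increase yields just under 4 dB, consistent with the 3 to 5 dB range the guidelines describe.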

Optimizing Every Component

While the example cited by ASHRAE uses 3,600 rpm fans, high-density 1U servers (which have 500-1,000 watt power supplies) often use very small fans that can run at up to 15,000 rpm in order to provide enough airflow at full load and high intake temperatures. These small fans produce higher noise levels (at much higher frequencies) than the larger fans used in bigger server chassis and blade servers, which create less noise and use less energy to deliver the necessary airflow.

Lower IT fan speeds improve energy efficiency in several ways, as well as reducing fan noise. In addition to directly lowering the fan energy of the IT server, they indirectly allow the facility-side cooling units (CRAC/CRAH) to lower their CFM delivery requirements, thus also lowering facility fan energy.
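The reason lower fan speeds pay off disproportionately is another affinity law: airflow scales roughly linearly with fan speed, but fan power scales with roughly the cube of speed. A minimal illustrative sketch (the function is hypothetical, for illustration only):

```python
def fan_power_ratio(speed_ratio: float) -> float:
    """Fan affinity laws: airflow scales ~linearly with speed,
    while fan power scales with roughly the cube of speed."""
    return speed_ratio ** 3

# Slowing fans to 80% speed (roughly 80% of the airflow)
# cuts fan power to about half
print(f"{fan_power_ratio(0.8):.2f}")  # 0.51
```

This is why even a modest reduction in required airflow, on both the IT and facility side, can yield a much larger reduction in fan energy.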

In order to meet Energy Star for Data Center IT Equipment requirements, every component and energy management system is energy optimized. This allows the system to idle at very low power levels (CPU, memory, etc.). As a result, fans will idle down to minimum speed whenever possible, but will also ramp up quickly as intake temperature rises. This fan energy management function has now become fairly standard in most new servers, and its effect on power consumption and airflow requirements can be seen in the ASHRAE airflow curves for A2 servers.

Source: ASHRAE TC 9.9 whitepaper “Thermal Guidelines for Data Processing Environments – Expanded Data Center – Classes and Usage Guidance”

Next week we will explore the impact of the ASHRAE 90.1 and 90.4 standards. If you prefer, you can download the Data Center Frontier Special Report on Data Center Cooling Standards in PDF format from the Data Center Frontier White Paper Library, courtesy of Compass Data Centers. Click here for a copy of the report.

About the Author

Julius Neudorfer

Julius Neudorfer is the CTO and founder of North American Access Technologies, Inc. (NAAT). NAAT has been designing and implementing Data Center Infrastructure and related technology projects for over 25 years. He also developed and holds a patent for high-density cooling. Julius is a member of AFCOM, ASHRAE, IEEE and The Green Grid. Julius has written numerous articles and whitepapers for various IT and Data Center publications and has delivered seminars and webinars on data center power, cooling and energy efficiency.

