Running data centers at elevated air temperatures cuts cooling energy costs, but only recently has the industry grown comfortable with the practice of running data centers a bit warmer than in the past.
Much of the credit for this comfort level goes to the fairly recent effort by a committee of the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) to update its thermal guidelines for data centers. ASHRAE’s Technical Committee (TC) 9.9 laid out the new guidelines in a white paper, “2011 Thermal Guidelines for Data Processing Environments—Expanded Data Center Classes and Usage Guidelines,” and followed that up with a third edition of its thermal guidelines book. The committee’s website has further information, including where to purchase the book.
The guidelines include a breakdown of data centers into four classes and two tiers for temperature management, and plenty of detailed discussion and analysis, so they are well worth some study. Rather than trying to summarize all of that information, here are a few select points to keep in mind:
- The new guidance (updating previous guidelines from 2004 and 2008) generally allows for a broader recommended thermal operating envelope (temperature and humidity) than past guidelines. Running a data center a few degrees warmer reduces the energy consumed by the cooling infrastructure. The server fans may need to work a bit harder, but within the recommended ranges the cooling savings more than make up for the added fan power.
- The performance risk to the information technology (IT) assets is primarily from rapid temperature changes, not from a marginally higher set point. As long as the temperature is within the recommended range and stays consistent, performance should not be an issue.
- Newer server hardware can run reliably and efficiently at higher temperatures, as long as the temperature stays consistent. Server fans, for example, are far more efficient today than they were 10 or 15 years ago.
- Within the recommended ranges, there is a “sweet spot” to hit in finding the optimal balance between reduced energy consumption for cooling, and potentially making the IT equipment work harder. This sweet spot tends to vary depending on the configuration and assets in a particular data center. For a good analysis on this issue, refer to Schneider Electric white paper 138, “Energy Impact of Increased Server Inlet Temperature.”
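The shape of this trade-off can be sketched numerically: cooling energy falls as the inlet set point rises, while server fan power climbs (fan power scales roughly with the cube of fan speed), so total energy has an interior minimum. The model below is purely illustrative; every coefficient is a hypothetical placeholder, not a measured value, and a real facility would derive these curves from its own equipment and airflow data.

```python
# Illustrative "sweet spot" model: total facility energy vs. inlet set point.
# All coefficients are hypothetical; real values depend on the facility.

def cooling_energy_kw(setpoint_c):
    """Assumed cooling load: drops linearly as the set point rises."""
    return 120.0 - 4.0 * (setpoint_c - 18.0)

def fan_energy_kw(setpoint_c):
    """Assumed fan load: fans speed up above ~24 C; power ~ speed cubed."""
    speed_ratio = 1.0 + max(0.0, setpoint_c - 24.0) * 0.10
    return 10.0 * speed_ratio ** 3

def total_energy_kw(setpoint_c):
    return cooling_energy_kw(setpoint_c) + fan_energy_kw(setpoint_c)

# Sweep candidate set points across a recommended-style range (18-27 C)
candidates = [18 + 0.5 * i for i in range(19)]
best = min(candidates, key=total_energy_kw)
print(f"lowest modeled total energy at {best:.1f} C "
      f"({total_energy_kw(best):.1f} kW)")
```

With these made-up coefficients the minimum lands partway up the range rather than at either end, which is the point: raising the set point saves cooling energy only until rising fan power overtakes the savings, and where that crossover sits varies with the equipment installed.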
Overall, perhaps the biggest impact of the updated TC 9.9 thermal guidelines is that data center managers now have an authoritative set of recommendations to back up a decision to raise temperatures marginally. This raises the comfort level of data center managers who know that slightly higher temperatures make economic sense given today’s server hardware, but were hesitant to act without solid guidance.
For data centers considering elevated inlet air temperatures in accordance with the guidelines, it’s important to plan how to keep the new set point consistent. This typically means assessing several factors: airflow management, the hot-aisle/cold-aisle configuration (including the placement of cooling units), possible upgrades to containment to reduce air mixing, and the placement of temperature sensors within the data center.
Consistency is generally a very good thing when it comes to data centers, and it’s a vitally important goal when data center operators start thinking about raising the target temperature. An audit could help a facility prepare for such a change so that the new set point stays consistent and the facility is not left struggling with air mixing and temperature shifts that could threaten IT equipment performance or erode the expected energy savings.