A number of important trends are helping companies save lots of money on electric bills by making their data centers more efficient. While this is certainly a worthy and justified endeavor, it does not come without risk – namely, the risk of trouble should the power go out.
IT equipment is typically backed up by uninterruptible power supplies (UPSs), which supply power until generators come on-line following the loss of utility power. Cooling system components, however, are not typically connected to a UPS; some may not even be connected to the backup generators. The result is that the air temperature in the data center may rise quickly following a power failure.
The issue is compounded by a number of data center trends and best practices aimed at improving performance, efficiency, and manageability under normal operating conditions. These trends and practices include:
- Right-sizing cooling capacity
- Increasing power density and virtualization
- Increasing IT inlet and chiller set point temperatures
- Air containment of racks and rows
1. Right-sizing cooling capacity
Right-sizing is the idea of aligning cooling capacity to the actual IT load, a practice that provides several benefits including increased energy efficiency and lower capital costs. (See white paper 114, “Implementing Energy Efficient Data Centers” for more information.) Following a power outage, however, having excess cooling capacity is a good thing – it can help you ride through till the power comes back.
The extra capacity will also help you get back to the desired temperature more quickly once the power returns, just as multiple window air conditioners will cool a bedroom faster than a single unit. If the total cooling capacity exactly matches the heat load, with no excess capacity at all, in theory you’ll never get the facility back to its original state: after an outage the cooling system must remove not only the ongoing IT load but also the heat that accumulated while cooling was down.
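To get a feel for how fast things heat up, here’s a rough back-of-envelope sketch (all figures are illustrative assumptions, not measurements from any real facility). It assumes, pessimistically, that all IT heat goes straight into the room air and nothing else absorbs it:

```python
# Back-of-envelope estimate of room air temperature rise after a cooling
# outage. All figures below are illustrative assumptions.

AIR_DENSITY = 1.2         # kg/m^3, air at roughly room conditions
AIR_SPECIFIC_HEAT = 1005  # J/(kg*K)

def minutes_to_temp_limit(it_load_kw, room_volume_m3, headroom_c):
    """Minutes until the room air warms by `headroom_c` degrees C,
    assuming all IT heat goes into the air and nothing else absorbs it.
    (Real rooms warm more slowly: walls, racks, and floors soak up
    heat, and some cooling components may still be running.)"""
    thermal_mass_j_per_c = AIR_DENSITY * room_volume_m3 * AIR_SPECIFIC_HEAT
    seconds = thermal_mass_j_per_c * headroom_c / (it_load_kw * 1000.0)
    return seconds / 60.0

# Hypothetical example: 200 kW of IT load in a 1000 m^3 room,
# with 10 C of headroom before IT inlets hit their limit:
print(round(minutes_to_temp_limit(200, 1000, 10), 1))  # about 1 minute
```

Even with generous real-world thermal buffering, the point stands: the air alone buys you very little time, which is why excess cooling capacity (and keeping some cooling on backup power) matters.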
2. Increasing power density and virtualization
IT equipment is becoming more compact, thanks to the advent of equipment such as blade servers and multi-function communications equipment. These days, it’s not unusual to see rack power densities exceeding 40 kW/rack.
Virtualization exacerbates the issue by dramatically driving up the CPU utilization rate of servers – from around 10% to 50% or more compared with non-virtualized servers. As the CPU utilization rate goes up, so does the amount of power a server consumes and the heat it produces (a topic we’ve touched on previously).
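A commonly used first-order approximation (the idle and peak wattages here are hypothetical, not figures from this post) is that server power scales roughly linearly with CPU utilization between idle and peak:

```python
# First-order approximation of server power draw vs. CPU utilization.
# The idle/peak figures are hypothetical, for illustration only.

def server_power_w(utilization, idle_w=100.0, peak_w=300.0):
    """Linear power model: idle power plus a utilization-proportional
    share of the idle-to-peak range. `utilization` is 0.0 to 1.0."""
    return idle_w + utilization * (peak_w - idle_w)

# Virtualization pushing average utilization from 10% to 50%:
print(server_power_w(0.10))  # 120.0 W
print(server_power_w(0.50))  # 200.0 W
```

The heat output tracks the power draw, so a virtualized server at 50% utilization is dumping substantially more heat into the room than the same box idling at 10% – and the consolidation that virtualization enables concentrates that heat into fewer racks.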
Because both issues make it possible to generate more heat in a given space, they can also reduce the time available to data center operators before the IT inlet temperatures reach critical levels following a power outage.
3. Increasing IT inlet and chiller set point temperatures
As we’ve reported previously, ASHRAE not long ago came out with new guidance on acceptable operating temperatures for data centers, pushing the allowable temperature up a few degrees.
That’s great because it cuts cooling costs. It’s been estimated that for every 1.8°F (1°C) increase in chiller set point temperature, you can save about 3.5% of the chiller power. What’s more, increasing the IT inlet and chilled water set point temperatures increases the number of hours per year that cooling systems can operate in economizer mode.
The downside, of course, is that higher data center temperatures leave less wiggle room for data center operators in a power-failure scenario. It’s going to get real hot real fast.
4. Air containment of racks and rows
Containment systems likewise improve the efficiency of traditional data center cooling systems. But the very attribute that contributes to their efficiency – the idea that they prevent air streams surrounding racks from mixing with the rest of the data center air – doesn’t necessarily work so well after a power outage.
Consider a hot-aisle containment system with row-based chilled water cooling. If the coolers are not on UPS and containment doors remain shut during a loss of power, lots of hot air could get into the IT inlets through various leakage paths. As a result, IT inlet temperatures will quickly rise. If coolers are on UPS, but the chilled water pumps are not, then the coolers will pump hot air into the cold aisle without providing active cooling.
Or think about a cold-aisle containment system with row-based chilled-water coolers. If the coolers are not on UPS, the negative pressure in the containment system will draw in hot exhaust through the rack and containment structure leakage paths, thus raising IT inlet temperatures.
In my next post, I’ll look at solutions to each of these issues. Or you can take a deep dive on the topic by reading white paper no. 179, “Data Center Temperature Rise During a Cooling System Outage.”