Aside from IT consolidation, the biggest opportunity for energy savings comes from the cooling plant, which in many data centers consumes as much energy as, or even more than, the IT load. One key to reducing cooling plant energy is to operate in economizer mode whenever possible, so that high-energy-consuming mechanical cooling systems such as compressors and chillers can be turned off while outdoor air is used to cool the data center.
Using outside air indirectly is acceptable to data center operators today. But to reduce cooling energy consumption by 50% while still maintaining the flexibility and scalability needed for large data centers, a self-contained cooling system should incorporate the following five design principles:
- Economizer mode as primary mode of operation
Maximize economizer mode operation by reducing the number of heat exchange steps to one, with an air-to-air heat exchanger, and by incorporating evaporative cooling. Alternatively, this design principle can be achieved with a fresh air (direct air) system, which eliminates heat exchange steps altogether.
- Protect indoor data center air from outdoor pollutants and excessive humidity fluctuations
Because an indirect system cools the indoor air without mixing it with outdoor air, outdoor pollutants and rapid swings in temperature and humidity are isolated from the IT space. Alternatively, high-quality filters can be implemented in direct air systems to protect against outside contaminants, and the control system can switch the plant to backup cooling modes when weather conditions move beyond the data center’s limits. Other indirect cooling architectures can achieve this design principle, but not while maintaining economizer mode as their primary mode of operation.
- Minimize onsite construction time and programming
A cooling plant with integrated, pre-programmed controls in a standardized self-contained system allows onsite construction and programming of the plant to be reduced significantly. It also ensures reliable, repeatable, and efficient operation. As the data center industry continues to shift towards standardized modules (containers), this design principle will be achieved by many systems.
- Cooling capacity is scalable in live data centers
With many data centers characterized by dynamic IT loads, it is critical that the cooling infrastructure can scale as the load scales. The use of “hot swap” cooling modules, similar to a modular UPS, is one way to scale the plant without interrupting the IT load. This is referred to as “device modularity”. The cooling plant can also be scaled at the subsystem level (referred to as “subsystem modularity”) by partitioning the IT space into “zones” and adding new cooling plants as zones get populated.
- Maintenance does not interrupt IT operations
Reliability is commonly at the forefront of data center operators’ minds. This design principle can be achieved through redundancy in two primary ways: internally redundant cooling modules within a single system, or multiple independent systems. Both approaches can eliminate single points of failure and create a fault-tolerant design that enables concurrent maintainability.
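The device-modularity and redundancy principles above reduce to simple N+R sizing: enough modules to carry the load, plus spares for fault tolerance and concurrent maintenance. A minimal sketch, using hypothetical module capacities and an assumed N+1 target (real figures depend on the product and design):

```python
import math

def modules_required(it_load_kw, module_capacity_kw, redundancy=1):
    """Hot-swap cooling modules needed for an IT load, N+R style.

    Hypothetical illustration: module capacity and redundancy level
    are assumptions, not figures from any particular product.
    """
    if it_load_kw <= 0:
        return redundancy
    n = math.ceil(it_load_kw / module_capacity_kw)  # N modules to carry the load
    return n + redundancy                           # plus R redundant modules

# Scaling a live zone from 300 kW to 500 kW with 100 kW modules, N+1:
print(modules_required(300, 100))  # 4 modules (3 + 1 redundant)
print(modules_required(500, 100))  # 6 modules (5 + 1 redundant)
```

Because capacity is added one module at a time, the plant can grow with the zone while the redundant module keeps maintenance from interrupting IT operations.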
As a footnote, ASHRAE TC9.9’s “2011 Thermal Guidelines for Data Processing Environments” recommends a wider operating environment, and today IT vendors are specifying ever-widening operating windows. The wider the window, the greater the number of hours the cooling system can operate in economizer mode. The question for you is: how high are you prepared to go?
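The link between the allowable temperature window and economizer hours can be sketched with a toy mode-selection model. The approach and evaporative-credit figures below are illustrative assumptions, not properties of any real product or of the ASHRAE guidelines:

```python
def cooling_mode(outdoor_c, supply_limit_c, approach_c=5.0, evap_credit_c=8.0):
    """Pick a cooling mode for one hour of an indirect air-to-air system.

    Toy model: the heat exchanger delivers supply air at roughly outdoor
    temperature plus an 'approach'; evaporative assist buys a further
    temperature credit. All numeric figures are assumptions.
    """
    if outdoor_c + approach_c <= supply_limit_c:
        return "economizer"          # dry heat exchange alone suffices
    if outdoor_c + approach_c - evap_credit_c <= supply_limit_c:
        return "evaporative-assist"  # wet the exchanger, still no compressors
    return "mechanical"              # fall back to compressors/chillers

def economizer_hours(hourly_temps_c, supply_limit_c):
    """Count hours that avoid mechanical cooling entirely."""
    return sum(1 for t in hourly_temps_c
               if cooling_mode(t, supply_limit_c) != "mechanical")
```

Running `economizer_hours` over a year of hourly outdoor temperatures with a higher allowable supply limit yields more compressor-free hours, which is the trade-off the closing question asks you to weigh.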