We’ve written previously about how operating in economizer mode can help you save on data center cooling costs, and in a podcast we noted that the technology works even in hot climates like Las Vegas.
Since mechanical cooling systems consume so much energy when running, it makes sense to turn them off whenever possible and run in economizer mode instead. When economizer mode becomes a data center’s primary mode of operation, large PUE improvements result (cooling-related overhead reduced on the order of 50%).
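To make the PUE claim concrete, here is a minimal sketch of the arithmetic. The load figures below are hypothetical and chosen only for illustration; they are not measurements from any particular facility.

```python
def pue(it_kw, cooling_kw, other_kw):
    """Power Usage Effectiveness = total facility power / IT power."""
    return (it_kw + cooling_kw + other_kw) / it_kw

# Hypothetical 1 MW IT load with assumed (illustrative) overhead figures
mech = pue(1000, cooling_kw=450, other_kw=150)  # mechanical cooling dominant
econ = pue(1000, cooling_kw=120, other_kw=150)  # economizer mode dominant

# Compare the non-IT overhead (PUE - 1) in the two operating modes
reduction = 100 * ((mech - 1) - (econ - 1)) / (mech - 1)
print(f"PUE mechanical: {mech:.2f}, PUE economizer: {econ:.2f}")
print(f"Non-IT overhead reduced by about {reduction:.0f}%")
```

With these assumed numbers, PUE drops from 1.60 to 1.27, cutting the non-IT overhead roughly in half, which is the scale of improvement the white paper describes.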
In our recent white paper, we present five cooling design principles that, combined, not only reduce energy consumption, but also improve predictability and flexibility. Here are the highlights of these principles.
1. Economizer mode is the primary mode of operation. One approach to increasing economizer mode operation is an air-to-air heat exchanger with evaporative cooling. Evaporative cooling involves spraying water over the outside of the heat exchanger to lower the effective outdoor air temperature. Air-to-air exchangers remain effective at higher outdoor temperatures because only one heat exchange takes place, versus a chiller/cooling tower design with three heat exchanges (the cooling tower, the plate-and-frame heat exchanger, and the air handler). An example comparison shows that a system with an air-to-air heat exchanger can operate in economizer mode at outdoor temperatures 16 degrees higher than the traditional design with three heat exchanges. In a city such as St. Louis, Mo., that would mean economizer mode could be used an additional 23% of the year.
2. Indoor data center air is protected from outdoor pollutants and excessive humidity fluctuations. Indirect cooling of outdoor air, such as with the air-to-air heat exchanger mentioned above, keeps outdoor pollutants out of the data center and protects the indoor air from rapid swings in temperature and humidity.
3. Onsite construction time and programming are minimized. Customers today can get a cooling plant with integrated, pre-programmed controls in a self-contained system, which significantly reduces onsite construction requirements and programming of the cooling plant. It also ensures reliable, repeatable, and efficient operation. Schneider Electric’s EcoBreeze is an example of such a self-contained system.
4. Cooling capacity is scalable in a live data center. Given the dynamic nature of today’s data centers, the ability to scale is critical. As we’ve pointed out previously, there are huge TCO gains when this can be done. But you’ve also got to be able to do it with little to no interruption to the operating data center. The use of “hot swap” cooling modules to scale cooling capacity (similar to a modular UPS with a common backplane and plug-in modules) is one way to scale the plant without interruption to the IT load. You can also scale a facility by partitioning the IT space into zones and adding new cooling plants as zones get populated.
5. Maintenance does not interrupt IT operations. Reliability is crucial in a data center, and redundancy can ensure systems do not go down during maintenance. Achieving redundancy in cooling can be done in two primary ways: using internally redundant cooling modules within the system or the use of multiple systems. Both approaches can eliminate single points of failure, creating a fault tolerant design.
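The economizer-hours gain described in principle #1 can be sketched as a simple threshold calculation over hourly outdoor temperatures. Everything below is an assumption for illustration: the temperature series is randomly generated (not real St. Louis weather), the 55 °F and 71 °F switchover thresholds are hypothetical, and the 16-degree gap is assumed to be in °F.

```python
import random

random.seed(0)
# Fake year (8,760 hours) of outdoor dry-bulb temperatures in °F,
# uniformly spread for illustration only -- not real weather data
hourly_temps = [50 + 30 * random.uniform(-1, 1) for _ in range(8760)]

def economizer_fraction(temps, threshold_f):
    """Fraction of hours cool enough to run in economizer mode."""
    return sum(t <= threshold_f for t in temps) / len(temps)

# Hypothetical switchover thresholds: a traditional three-exchange
# design vs. an air-to-air design that tolerates air 16 °F warmer
base = economizer_fraction(hourly_temps, 55)
improved = economizer_fraction(hourly_temps, 71)
print(f"Additional economizer time: {100 * (improved - base):.0f}% of the year")
```

With real hourly weather data for a given city in place of the random series, the same threshold comparison yields figures like the additional 23% of the year cited for St. Louis.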
To learn more about a cooling system that meets these five design principles, read our Schneider Electric white paper, “High Efficiency Economizer-based Cooling Modules for Large Data Centers.” We also have a TradeOff Tool – Cooling Economizer Mode PUE Calculator – that compares common cooling architectures to show you how they perform (PUE, economizer mode hours) in different geographies and under different operating conditions.