We’ve written previously about how economizer mode can help you save on data center cooling costs by incorporating cool outside air, and we’ve featured a podcast noting that the technology works even in hotter climates such as Las Vegas.
The Las Vegas example is evidence that economizer technology keeps improving, so much so that experts now advise making economizer mode the primary mode of operation for data center cooling infrastructure. Schneider Electric argues in a recent white paper that, if you adhere to certain design principles, economizer mode cooling can reduce energy consumption by 50%.
Schneider Electric has identified five design principles that are key to delivering those savings.
1. Economizer mode is used as the primary mode of operation. Technologies such as evaporative cooling and air-to-air heat exchangers allow economizer mode to be used even at warmer outdoor temperatures. Evaporative cooling involves spraying water over the outside of a heat exchanger carrying ambient outside air, which cools that air further. Air-to-air heat exchangers, because they involve fewer heat-exchange steps, likewise remain effective at higher outdoor temperatures: they enable data centers to run in economizer mode at outdoor temperatures 16 degrees higher than systems that rely on more heat exchanges (e.g., cooling tower to plate-and-frame heat exchanger to air handler). In a city such as St. Louis, Mo., that would mean economizer mode could be used an additional 23% of the year.
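To see how a few degrees of extra headroom translates into extra hours, here is a rough back-of-the-envelope sketch. The temperature thresholds and the synthetic weather profile below are hypothetical illustrations, not figures from the Schneider Electric white paper; real analysis would use actual hourly weather data for the site.

```python
# Sketch: estimate what fraction of the year economizer mode is available,
# given a maximum usable outdoor temperature. Thresholds and weather data
# are invented for illustration only.
import math

def economizer_fraction(hourly_temps_f, max_outdoor_f):
    """Fraction of hours cool enough for economizer operation."""
    usable = sum(1 for t in hourly_temps_f if t <= max_outdoor_f)
    return usable / len(hourly_temps_f)

# Synthetic one-year hourly profile: a sinusoidal seasonal swing around a
# 55 F annual mean -- a stand-in for real weather data.
temps = [55 + 25 * math.sin(2 * math.pi * h / 8760) for h in range(8760)]

base = economizer_fraction(temps, max_outdoor_f=60)        # multi-exchange chain (hypothetical limit)
air_to_air = economizer_fraction(temps, max_outdoor_f=76)  # +16 F of headroom

print(f"baseline:   {base:.0%} of the year")
print(f"air-to-air: {air_to_air:.0%} of the year")
```

Even with this toy climate, raising the usable outdoor temperature by 16 degrees adds roughly a quarter of the year of economizer operation, which is in the same ballpark as the 23% figure cited for St. Louis.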
2. Protect indoor data center air from outdoor pollutants and excessive humidity fluctuations. The cooling methods described above involve indirect cooling of outdoor air, which isolates those outdoor pollutants and protects the data center air from rapid swings in temperature and humidity.
3. Minimize onsite construction time and programming. Customers today can get a cooling plant with integrated, pre-programmed controls delivered as a self-contained system, which significantly reduces both onsite construction requirements and programming of the cooling plant. Schneider Electric’s EcoBreeze is an example of such a self-contained system.
4. Cooling capacity is scalable. Traditionally, companies would build data centers and install all the cooling capacity they might ever need from the get-go. But, as we’ve pointed out previously, that approach winds up wasting an awful lot of power and cooling capacity. It’s far better to add cooling capacity as increasing IT loads warrant, and to do so with little to no interruption to data center availability. Using “hot swap” cooling modules to scale capacity is one way to expand the plant without interrupting the IT load, an approach known as device modularity. Another approach, known as subsystem modularity, is to partition the IT space into zones and add new cooling plants as zones get populated.
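The pay-as-you-grow math behind subsystem modularity can be sketched in a few lines. All the numbers here (zone loads, module capacity, redundancy margin) are invented for the example and are not from the white paper.

```python
# Sketch of subsystem modularity: cooling modules are deployed per zone as
# IT load grows, rather than built out in full on day one. All figures are
# hypothetical.
import math

MODULE_KW = 100  # cooling capacity per hot-swappable module (hypothetical)

def modules_needed(it_load_kw, spares=1):
    """Modules required for a zone's IT load, plus redundant spares (N+spares)."""
    return math.ceil(it_load_kw / MODULE_KW) + spares

# Year one: only the first of four planned zones is populated.
zone_loads = [250, 0, 0, 0]  # kW of IT load per zone
deployed = sum(modules_needed(load) for load in zone_loads if load > 0)

full_buildout = sum(modules_needed(250) for _ in zone_loads)
print(f"deployed now: {deployed} modules; day-one full buildout: {full_buildout}")
```

In this toy case the modular approach deploys 4 modules instead of 16, deferring the capital and energy cost of the other 12 until the remaining zones are actually populated.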
5. Maintenance does not interrupt IT operations. Reliability is crucial in a data center, and that goes for the cooling system as well as the servers. Achieving redundancy in cooling can be done in two primary ways: using internally redundant cooling modules within a single system, or deploying multiple systems. Both approaches can eliminate single points of failure, creating a fault-tolerant design.
To learn more about how to create a cooling system that relies primarily on economizer mode, read the Schneider Electric white paper, “High Efficiency Economizer-based Cooling Modules for Large Data Centers.”