More and more companies are installing blade servers, in part to support their server virtualization efforts but also simply because blades are more efficient than traditional servers. But the trend promises to have a dramatic effect on the power consumed by the average data center rack, and the corresponding cooling required.
The average power consumed by an enclosure in a data center is about 1.7 kW, but the maximum power drawn by a rack filled with high-density servers, such as blade servers, can exceed 20 kW – more than 10 times the current average per rack. Such loads greatly exceed the power and cooling design capabilities of the typical data center. Indeed, data center operators have very little experience with enclosures drawing over 10 kW.
While the simple answer would be to provision 20 kW of redundant power and cooling to every enclosure, in almost all cases this is neither technically feasible nor economically practical, due to the limitations of air delivery and return systems and the difficulty of providing redundant, uninterrupted cooling.
There are, however, a few ways to deploy high-density computing in a data center with adequate power and cooling. Here we offer five strategies – four of which will actually work in practice.
1. Load spreading. As the name implies, this strategy involves spreading out 1U servers and blade servers across multiple racks, as opposed to installing them closely spaced in the same enclosure. The idea is to keep any single rack from exceeding the maximum rack power density for which the room cooling system is designed. Keep in mind that spreading equipment among multiple racks will leave significant vertical space unused and that space must be filled with blanking panels to prevent degradation of cooling performance. This is the most popular solution for incorporating high-density equipment into today’s data centers.
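The sizing arithmetic behind load spreading is straightforward, and can be sketched as follows. This is an illustrative example, not a formula from the paper; the function name and the 6 kW design limit in the usage line are assumptions chosen for demonstration.

```python
import math

def racks_needed(total_load_kw: float, rack_design_limit_kw: float) -> int:
    """Minimum number of racks over which a load must be spread so that
    no single rack exceeds the density the room cooling is designed for."""
    return math.ceil(total_load_kw / rack_design_limit_kw)

# Example: a 24 kW blade deployment in a room designed for 6 kW per rack
# must be spread across at least 4 racks.
print(racks_needed(24.0, 6.0))  # 4
```

Remember that each of those partially filled racks must have its unused vertical space closed off with blanking panels, or the spreading gains are lost to recirculated hot air.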
2. Rules-based borrowed cooling. Provide the room with the capability to power and cool to an average value below the peak enclosure value, and use rules to allow high-density racks to borrow adjacent underutilized cooling capacity. This approach takes advantage of the fact that some racks draw less power than the average design value, allowing the peak enclosure power density to exceed the average room cooling power by up to a factor of 3 if the cooling capacity of the adjacent enclosures is not utilized.
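A borrowing rule of this kind could be expressed as a simple check, sketched below. The function, its parameters, and the example power figures are hypothetical illustrations of the approach described above, not an actual APC rule set; the factor-of-3 cap comes from the text.

```python
def can_borrow(rack_load_kw: float,
               design_avg_kw: float,
               neighbor_loads_kw: list[float],
               max_factor: float = 3.0) -> bool:
    """A high-density rack may exceed the room's design average only if:
    (a) it stays within max_factor times the design average, and
    (b) the unused capacity of adjacent racks covers the excess."""
    if rack_load_kw > max_factor * design_avg_kw:
        return False
    # Spare cooling capacity available from under-loaded neighbors
    spare = sum(max(design_avg_kw - load, 0.0) for load in neighbor_loads_kw)
    return rack_load_kw - design_avg_kw <= spare

# Example: room designed for 6 kW/rack; a 15 kW rack flanked by racks
# drawing 1 kW and 2 kW. Excess = 9 kW, neighbors' spare = 5 + 4 = 9 kW.
print(can_borrow(15.0, 6.0, [1.0, 2.0]))  # True
```

The check fails either when the rack exceeds three times the design average outright, or when the adjacent enclosures are too heavily loaded to lend capacity.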
3. Supplemental cooling. Use supplemental cooling equipment as needed to cool racks with a density greater than the design average value for which the room cooling is configured. (For more information on supplemental cooling options, see the APC by Schneider Electric white paper, “Rack Air Distribution Architecture for Mission Critical Facilities.”)
4. Dedicated high-density areas. Provide a special area within the room that has high cooling capacity, and limit the location of high-density enclosures to that area. This approach requires prior knowledge of which racks will contain high-density enclosures, along with the ability to segregate those enclosures into a special area. Given those constraints, this option is not available to many users. When it is feasible, a high-density power and cooling system such as the APC by Schneider Electric InfraStruXure HD can be used for the cluster of high-density racks.
5. Whole-room cooling. Provide the room with the capability to power and cool any and every rack to the peak expected enclosure density. This may seem like the simplest solution, but it is almost never implemented: per-rack power varies substantially in any real data center, so designing every rack for the worst case is wasteful and cost-prohibitive. This is the one strategy of the five that does not work in practice.
In practice, you’ll likely use some combination of these methods to address all your high-density cooling requirements. To learn more about how to address the issue, read the APC by Schneider Electric white paper, “Cooling Strategies for Ultra-High Density Racks and Blade Servers.”