As companies continue to embrace virtualization technology and consolidate their server farms, they may find the technology brings an unexpected, and unwelcome, guest to their data centers: hot spots.
This may come as a surprise given that companies are using virtualization to reduce the number of servers in their data centers by ratios of 10:1, 20:1, or even higher. If you’re reducing the number of servers in your data center, you should also be decreasing the overall IT load, and thus need less total cooling capacity, not more, right?
Right, but a host server tends to draw more power than a traditional non-virtualized server. As a server is loaded up with more and more virtual machines, its CPU utilization increases. Whereas CPU utilization for a typical non-virtual server is around 5% to 10%, for virtualized servers the figure could be 50% or higher. A server with a 50% utilization rate will draw about 20% more power than one at 5% utilization.
What’s more, hosts often require increased processor and memory resources, which can further raise their individual power requirements.
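As a rough sketch of the relationship above, here is a simple linear power model (idle draw plus a utilization-proportional component). The idle and peak wattages are illustrative assumptions, chosen so the model reproduces the roughly 20% increase cited above; real servers will differ.

```python
# Simple linear power model: P(u) = P_idle + (P_max - P_idle) * u
# The wattage figures are illustrative assumptions, not measurements.
P_IDLE_W = 250.0  # assumed draw of an idle host
P_MAX_W = 365.0   # assumed draw at 100% CPU utilization

def server_power(utilization: float) -> float:
    """Estimate draw in watts at a given CPU utilization (0.0 to 1.0)."""
    return P_IDLE_W + (P_MAX_W - P_IDLE_W) * utilization

low = server_power(0.05)   # typical non-virtualized server
high = server_power(0.50)  # heavily virtualized host
print(f"{low:.0f} W at 5% vs {high:.0f} W at 50% "
      f"(+{(high / low - 1) * 100:.0f}%)")
# → 256 W at 5% vs 308 W at 50% (+20%)
```

The key point the model captures is that power does not scale linearly from zero: a large idle draw means even a tenfold jump in utilization only raises draw by a fraction, but that fraction is concentrated in far fewer racks.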
These bulked-up servers tend to get installed and grouped together, creating localized high-density areas that can lead to hot spots. If the existing air distribution system can’t cope or respond sufficiently, the result can be unexpected shutdowns.
Several strategies exist for cooling high-density racks. The two main ones are either to simply spread out the high-density equipment or to create an isolated high-density pod within the data center with its own dedicated cooling, air distribution, and containment system.
The idea behind spreading out the high-density loads is to ensure no single rack exceeds the design power density, which will make cooling performance more predictable. The big benefit of this strategy is that you won’t need any new power or cooling infrastructure.
The strategy is not without potential disadvantages, however. They include increased floor space consumption and higher cabling costs. It’s also possible that you’ll see reduced electrical efficiency if air paths aren’t contained. And management typically doesn’t like to see half-filled racks either. Implementing this strategy also means you need to have complete control over where any individual server is placed.
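The floor space trade-off above can be made concrete with a quick calculation. The host count, per-host draw, and per-rack design limit below are hypothetical numbers for illustration only:

```python
import math

# Hypothetical example: 40 virtualized hosts at ~0.5 kW each, spread
# out so that no rack exceeds an assumed 6 kW design power density.
HOSTS = 40
KW_PER_HOST = 0.5
RACK_DESIGN_KW = 6.0  # assumed per-rack cooling design limit
RACK_CAPACITY = 20    # hosts that physically fit in one rack

total_kw = HOSTS * KW_PER_HOST
# Hosts per rack is capped by the design density, not physical space.
hosts_per_rack = min(RACK_CAPACITY, int(RACK_DESIGN_KW // KW_PER_HOST))
racks_needed = math.ceil(HOSTS / hosts_per_rack)
racks_if_full = math.ceil(HOSTS / RACK_CAPACITY)

print(f"{total_kw:.0f} kW total; {racks_needed} racks when spread "
      f"(vs {racks_if_full} fully packed)")
# → 20 kW total; 4 racks when spread (vs 2 fully packed)
```

Under these assumptions the design limit, not physical capacity, sets the rack count: the same load needs twice the racks, which is exactly the half-filled-rack floor space cost the strategy incurs.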
Creating a high-density pod is often a more efficient way to deal with virtualized servers. This involves consolidating all high-density systems down to a single rack or row(s) of racks. Then you bring in dedicated cooling or air distribution, such as close-coupled, rack-based, or row-based air conditioners. In addition to dedicated cooling, the pod would ideally employ an air containment system. A pod that combines shorter air paths, contained air streams, and variable frequency drive fans best ensures highly virtualized servers and other high-density gear are efficiently cooled without risk of creating hot spots.
The advantages of the high-density pod approach include better space utilization and higher efficiencies in addition to maximizing rack density. It’s also possible, if not likely, that the pod would actually add cooling capacity to the rest of the data center. This is particularly likely if hot and cold air streams are well contained.
For a more detailed look at high-density pods, see APC by Schneider Electric white paper number 134, “Deploying High Density Pods in a Low Density Data Center.” To learn more about how virtualization impacts the physical infrastructure, see white paper number 118, “Virtualization and Cloud Computing: Optimized Power, Cooling, and Management Maximizes Benefits.”