Assessing the Impact of Virtualization on Data Center Power


Virtualization brings many benefits to a data center: fewer servers mean less power consumed, less heat generated and more available space.

But virtualization also presents many challenges to a data center’s power and cooling system. Before embarking on a virtualization project, data center professionals should understand virtualization’s effect on power consumption and efficiency.

While virtualization results in lower power consumption, it also reduces the efficiency of a data center’s physical infrastructure whenever the power and cooling systems aren’t adjusted for the lower IT loads.

The reason is obvious: When the power and cooling systems run at higher fixed levels than necessary to support the IT load, energy is wasted, thus making the physical infrastructure inefficient.

This inefficiency can be measured using a metric known as Power Usage Effectiveness (PUE).

PUE, which quantifies how much of a data center’s power goes toward “useful work” (the IT load), is defined as the ratio of total data center input power to the power used to run the IT load, as discussed in an APC by Schneider Electric white paper.

Given that the entire point of a data center is the IT operation, one can consider all uses of power in the data center not consumed by the IT load to be lost or wasted energy.
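
To see how this plays out, consider a rough illustration with assumed numbers (they are not drawn from the white paper):

PUE = total data center input power / power used to run the IT load

Before virtualization: (500 kW IT load + 400 kW physical infrastructure) / 500 kW = PUE of 1.8
After virtualization, with power and cooling left at their original levels: (250 kW IT load + 350 kW physical infrastructure) / 250 kW = PUE of 2.4

Total power consumption drops, but a larger share of it is now lost to the unadjusted physical infrastructure, which is exactly the inefficiency PUE is designed to expose.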

Non-IT power consumption, or “loss,” includes:

  • Internal inefficiencies of the power system, which are manifested as heat
  • Power consumed by the cooling system
  • Power consumed by other data center physical infrastructure equipment (lighting, switchgear, physical security, generators)

The last item is responsible for only a small percentage of data center power consumption loss.

The cooling system (CRACs, pumps, chillers, fans) is responsible for most of the power consumption loss in a data center. In fact, in an inefficient data center the cooling system can consume as much power as, or even more than, the IT load itself.

Increased rack density

As explained above, virtualization can reduce data center power consumption while also reducing energy efficiency if the power and cooling systems are not adjusted to the new IT load.

It also can result in increased rack density, which can be addressed by installing row-based cooling in the data center. Row-based cooling relies on proximity and instrumentation to sense and respond to server temperature changes. By providing cooling where and when it’s needed, and in the proper amount, row-based cooling can dramatically improve data center physical infrastructure efficiency and therefore reduce costly “loss.”
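
As a minimal sketch of that idea (hypothetical readings and names, not vendor code), a row-based cooling unit can be modeled as adjusting its output from the inlet temperatures of the racks closest to it:

# Hypothetical inlet temperatures for the racks nearest one cooling unit, in degrees C
RACK_TEMPS_C = {"rack-01": 24.5, "rack-02": 28.0, "rack-03": 22.0}
SETPOINT_C = 25.0          # target server inlet temperature
GAIN_PERCENT_PER_DEG = 20  # proportional gain, purely illustrative

def cooling_output_percent(rack_temps, setpoint=SETPOINT_C, gain=GAIN_PERCENT_PER_DEG):
    """Return a cooling output (0-100 percent) driven by the hottest nearby rack."""
    hottest = max(rack_temps.values())
    error = hottest - setpoint               # degrees above target
    return max(0, min(100, error * gain))    # cool only as much as needed

print(f"Cooling output: {cooling_output_percent(RACK_TEMPS_C):.0f}%")

Because each unit reacts only to its neighboring racks, cooling is delivered where and when it is needed instead of flooding the entire room.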

Moving “hot spots”

Virtualization allows applications to start and stop dynamically, meaning that IT loads, or “hot spots,” can shift over time and from one server location to another. Again, row-based cooling helps a data center meet the challenge of changing power densities.

Pace of change

Virtualization introduces dynamic change into the data center. But change that is unmanaged can lead to inefficiency, chaos and possibly data center downtime. It is essential that the data center remain stable and in operation.

Capacity management, the real-time monitoring and analysis of power, cooling and space, is essential to optimal performance of a virtualized data center.

An effective capacity management system monitors the power, cooling and physical space availability at the room, row, rack and server levels, using automated intelligence and modeling to optimize the use of available resources. In essence, the capacity management system is the “conductor” of the virtualized data center.
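
A minimal sketch of that idea, assuming a hypothetical inventory of per-rack headroom figures (this is not a real DCIM product API), might check power, cooling and space together before a new load is placed:

# Hypothetical available headroom per rack
RACKS = [
    {"name": "row1-rack03", "power_kw": 1.2, "cooling_kw": 0.8, "u_space": 2},
    {"name": "row2-rack07", "power_kw": 4.5, "cooling_kw": 5.0, "u_space": 10},
]

def placeable(rack, need_power_kw, need_cooling_kw, need_u):
    """A rack can host the load only if power, cooling AND space headroom all suffice."""
    return (rack["power_kw"] >= need_power_kw
            and rack["cooling_kw"] >= need_cooling_kw
            and rack["u_space"] >= need_u)

def find_rack(need_power_kw=2.0, need_cooling_kw=2.0, need_u=1):
    candidates = [r for r in RACKS if placeable(r, need_power_kw, need_cooling_kw, need_u)]
    return candidates[0]["name"] if candidates else None

print(find_rack() or "No rack has enough headroom - add capacity first")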

For more information on virtualization in the data center, read the APC by Schneider Electric white paper, Virtualization: Optimized Power and Cooling to Maximize Benefits.
