If data center efficiency was ever off the map, it’s certainly back on it now. More than ever, companies across the board are being pushed to be “green” not just to save money, but to be good corporate citizens and live up to new expectations.
However, that effort is at odds with a stark reality: data centers are woefully underutilized with respect to compute power. A recent New York Times story on data centers quoted the former CTO of Viridity Software (now part of Schneider Electric) as saying that in a sample of 333 servers in one data center, nearly 75% of them were using less than 10% of their computational power. Yet even at 10% load, a server still draws roughly 50% of its maximum power, which means it burns about five times as much energy per unit of useful work as a fully loaded machine. And the net effect is actually even worse, because the rest of the data center infrastructure is similarly inefficient at low loads.
Virtualization is part of the answer, and most of our customers have begun, and in many cases completed, initiatives to consolidate servers and decommission old hardware. But even in data centers where virtualization is most mature, more can be done.
One largely untapped way to increase efficiency is to take advantage of capabilities built into most virtualization software: dynamically shifting loads from one physical server to another and dynamically provisioning hosts, that is, powering host servers on and off to meet demand. Done correctly, this would let a company pack load onto some servers so that they are highly utilized and shut down the servers that aren't needed at any given time, all automatically, based on current demand.
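To make the idea concrete, here is a minimal sketch of the kind of packing decision involved, written in Python with invented host and VM numbers. It is not how any particular virtualization product actually schedules; it just shows the shape of the calculation.

```python
# Minimal sketch of a consolidation decision: pack VM demand onto as few
# hosts as possible, then treat whatever is left idle as a shutdown
# candidate. Capacities and demands below are illustrative, not measured.

def plan_consolidation(vm_demands, host_capacity, headroom=0.8):
    """Return (placements, hosts_needed) using first-fit-decreasing packing.

    vm_demands    -- per-VM CPU demand (e.g., GHz or normalized units)
    host_capacity -- usable CPU capacity of a single host
    headroom      -- fraction of capacity we allow ourselves to fill
    """
    usable = host_capacity * headroom
    hosts = []        # remaining capacity per powered-on host
    placements = []   # (vm_demand, host_index)

    for demand in sorted(vm_demands, reverse=True):
        for i, free in enumerate(hosts):
            if demand <= free:
                hosts[i] -= demand
                placements.append((demand, i))
                break
        else:
            # No existing host has room: keep (or power) another host on.
            hosts.append(usable - demand)
            placements.append((demand, len(hosts) - 1))

    return placements, len(hosts)


if __name__ == "__main__":
    vms = [1.2, 0.4, 0.9, 0.3, 2.1, 0.6, 0.5, 1.8]   # hypothetical VM demands
    _, needed = plan_consolidation(vms, host_capacity=8.0)
    print(f"Hosts to keep powered on: {needed}")
    print(f"Hosts that can be shut down: {12 - needed} of a hypothetical 12-host cluster")
```

A real scheduler weighs far more than CPU (memory, migration cost, affinity and anti-affinity rules), but the basic trade is the same: fill fewer hosts more fully, and the rest become candidates for shutdown.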
This capability isn’t new, but to date data center operators have been wary of implementing it. The idea of virtual machines essentially moving from host to host on their own is simply too scary: operators worry that they won’t know where a given workload is running at a particular point in time, or that a VM might move to a host that is in some sense “unhealthy.”
This worry is not unfounded, because the virtualization software is unaware of many aspects of data center operation. Take, for instance, the act of powering on a server, for which most data centers have a rather lengthy approval process. While this may seem cumbersome to some, the checks and balances are there for a reason: to ensure the availability of the data center. Before powering on a server, operators need to confirm that there is sufficient power, cooling capacity and much else that is effectively invisible to the virtualization software.
A good but unfortunate example of this comes from a customer I know who was experimenting with dynamic virtualization allocation. In a small area of the production data center, this customer allowed the virtualization software to switch servers on and off. At one point the software switched on several servers at once; there wasn’t enough cooling in that area, and all of the host servers overheated and shut down.
Data Center Infrastructure Management (DCIM) software offers a solution to this problem. DCIM tools give data center operators information about the physical state of their servers and their surroundings, including power and cooling, maintenance windows, planned changes, and other external events that can impact a server.
Indeed, DCIM also makes it possible to calculate which servers would be the most beneficial to consolidate and which hosts make the most sense to turn on or off. For example, using DCIM tools you might find that shutting off a group of servers that sit close to one another lets you lower cooling demand in that area, saving even more energy.
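As a rough illustration of that kind of calculation, the sketch below groups idle hosts by cooling zone and empties whole zones first. The host names, zone tags, and the idea of a flat inventory list are hypothetical stand-ins for data a DCIM tool would actually supply.

```python
from collections import defaultdict

# Hypothetical inventory of idle hosts, each tagged with the cooling zone
# it sits in. In a real deployment this would come from the DCIM system.
idle_hosts = [
    {"name": "esx-01", "zone": "row-A"},
    {"name": "esx-02", "zone": "row-A"},
    {"name": "esx-03", "zone": "row-B"},
    {"name": "esx-04", "zone": "row-A"},
    {"name": "esx-05", "zone": "row-C"},
]

def shutdown_order(hosts):
    """Order idle hosts so that whole cooling zones empty out first.

    Emptying a contiguous zone lets cooling in that area be turned down,
    which saves more energy than scattering shutdowns across rows.
    """
    by_zone = defaultdict(list)
    for host in hosts:
        by_zone[host["zone"]].append(host["name"])

    # Prefer zones where the most hosts can be powered off together.
    ordered_zones = sorted(by_zone.items(), key=lambda kv: len(kv[1]), reverse=True)
    return [name for _, names in ordered_zones for name in names]

print(shutdown_order(idle_hosts))
# ['esx-01', 'esx-02', 'esx-04', 'esx-03', 'esx-05']
```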
When DCIM tools are integrated with virtualization management software, data center operators get the information they need to implement the kind of automation that delivers real improvements in server utilization and energy efficiency, without the risk and the worry.
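What might such an integration check look like? The fragment below is a deliberately simplified sketch; the field names, thresholds and the `can_power_on` gate are invented for illustration rather than drawn from any vendor's API.

```python
# Hypothetical DCIM snapshot for one rack; in practice these values would
# be queried from the DCIM system rather than hard-coded.
rack_status = {
    "power_capacity_kw": 12.0,    # breaker/PDU limit for the rack
    "power_draw_kw": 9.5,         # current measured draw
    "cooling_capacity_kw": 11.0,  # heat the cooling units can remove here
    "heat_load_kw": 8.0,          # current heat load in the zone
    "in_maintenance_window": False,
}

def can_power_on(host_power_kw, rack, safety_margin=0.9):
    """Gate a power-on request on DCIM data: power, cooling, maintenance.

    Returns (allowed, reason). Fields and thresholds are illustrative only.
    """
    if rack["in_maintenance_window"]:
        return False, "rack is inside a maintenance window"
    if rack["power_draw_kw"] + host_power_kw > rack["power_capacity_kw"] * safety_margin:
        return False, "insufficient power headroom"
    if rack["heat_load_kw"] + host_power_kw > rack["cooling_capacity_kw"] * safety_margin:
        return False, "insufficient cooling headroom"
    return True, "ok"

allowed, reason = can_power_on(host_power_kw=0.6, rack=rack_status)
print(allowed, reason)   # True ok
```

The specific numbers don't matter; the point is that the virtualization layer finally gets to see the physical constraints (power, cooling, maintenance windows) that operators have always had to check by hand before approving a change.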