Data Center

From the Data Center Trenches Blog: Virtualization and its effects on Data Center Physical Infrastructure

I’ve been attending some conferences recently where the benefits of virtualization have been extolled.  Server consolidation ratios of 10:1, 20:1, or even higher are occurring on a regular basis.  Certainly this is not a new topic, and these benefits have gradually become well known throughout the data center industry with regard to IT infrastructure (servers, network, and storage).  In addition, virtualization is one of the engines behind cloud computing.  However, in all these discussions, there has been little talk about the effects virtualization has on data center physical infrastructure (DCPI, not to be confused with DCIM), i.e., the power and cooling.

The four main things to keep in mind about virtualization’s effects on DCPI are:

  1. High Density:  Virtualization by its very nature leads to an increase in CPU utilization, which increases the per-rack power draw.  CPU utilization can go from 10-20% pre-virtualization to more than 50% post-virtualization.  The resulting server power draw does not increase linearly but will still increase around 20%, depending on the manufacturer.  The most significant issue caused by high density is heat removal.  There are many cooling methodologies available today to address this, but I won’t discuss them here.
  2. Increasing PUE:  When there is a reduction in IT load with no change in the DCPI, a data center’s PUE will get worse even though overall energy usage is decreasing.  In the case of virtualization, we have significantly increased the “IT efficiency” but decreased the physical infrastructure efficiency.  This is because the data center is now oversized, which in turn means fixed losses in the DCPI play a bigger role.  Fixed losses are power consumed by the power and cooling systems regardless of what the IT load is; the more power and cooling capacity that exists, the larger the fixed losses.  Again, there is much that can be done to address this, with some solutions being more practical than others.
  3. Dynamic hot spots:  In combination with high density, virtualization can cause IT loads to vary in location and time, essentially creating dynamic hot spots.  One of the great benefits of virtualization is the ability to move VMs (virtual machines) as needed.  Imagine a rack going from a 3 kW power draw to a 10 kW power draw as VMs are moved to it.  If the data center is not designed properly, downtime can result.  One potential way to address this is to design the power and cooling to handle the maximum feasible per-rack power draw for each and every rack in the data center.  As noted above, this leads to a poor PUE and excessive cost.  What are needed are DCPI systems that can respond dynamically and in sync with the IT load, especially the cooling.  DCIM (not to be confused with DCPI) software can also play a big role here.  Not only can DCIM software monitor and control DCPI based on changing IT loads, it can also interact with IT management systems in an intelligent way.  For example, DCIM software can notify a platform such as VMware that certain VMs are being powered by a UPS that is on battery or has a fault of some sort.  It can also tell VMware the physical locations to which the VMs can be safely moved.  This can all happen in an automated fashion.
  4. Lower Redundancy Required:  While the three effects listed above could be seen as negative if not addressed properly, virtualization also creates DCPI opportunities.  In a data center that has a high level of IT fault tolerance through virtualization, there may be less need for redundancy in DCPI areas such as power and cooling.  The opportunity here may be to design a Tier 3 or Tier 3+ data center rather than a Tier 4, which will result in significant savings.
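To make the nonlinearity claim in point 1 concrete, here is a minimal sketch.  The numbers (an idle draw of 200 W and a full-load draw of 300 W) are illustrative assumptions, not vendor data; the point is that because servers draw substantial power at idle, raising utilization from ~15% to ~55% raises power draw far less than proportionally.

```python
# Illustrative sketch (hypothetical numbers): server power draw vs. CPU
# utilization.  Servers draw significant power at idle, so power does not
# scale linearly with utilization.

def server_power_w(cpu_util, idle_w=200.0, max_w=300.0):
    """Crude linear model between idle and full-load power draw."""
    return idle_w + (max_w - idle_w) * cpu_util

pre = server_power_w(0.15)   # ~15% utilization pre-virtualization
post = server_power_w(0.55)  # ~55% utilization post-virtualization

# Utilization more than tripled, but power draw rose only ~20%,
# consistent with the figure mentioned above.
print(round(post / pre - 1, 2))  # prints 0.19
```

With these assumed numbers, a 3.7x jump in utilization yields roughly a 19% increase in power, in line with the "around 20%" figure in point 1.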
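The PUE effect in point 2 can also be shown with simple arithmetic.  This is a hedged sketch with invented numbers: a facility with 200 kW of fixed losses and 30% load-proportional losses.  When virtualization cuts the IT load from 800 kW to 300 kW, total energy use falls, yet PUE worsens because the fixed losses are now spread over a much smaller IT load.

```python
# Illustrative sketch (hypothetical numbers): why PUE worsens when
# virtualization shrinks the IT load under a fixed-loss model.
# PUE = total facility power / IT power.

def pue(it_load_kw, fixed_losses_kw, proportional_loss_factor):
    """Simple PUE model: facility overhead = fixed losses (drawn by power
    and cooling gear regardless of IT load) + load-proportional losses."""
    overhead_kw = fixed_losses_kw + proportional_loss_factor * it_load_kw
    return (it_load_kw + overhead_kw) / it_load_kw

# Hypothetical facility: 200 kW fixed losses, 30% proportional losses.
before = pue(it_load_kw=800, fixed_losses_kw=200, proportional_loss_factor=0.3)
after = pue(it_load_kw=300, fixed_losses_kw=200, proportional_loss_factor=0.3)

print(round(before, 2))  # prints 1.55
print(round(after, 2))   # prints 1.97
```

Total facility power drops from 1,240 kW to 590 kW, so the energy bill shrinks, but PUE climbs from about 1.55 to about 1.97 because the 200 kW of fixed losses did not shrink with the IT load.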
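The DCIM-to-VM-manager hand-off described in point 3 can be sketched as a small planning function.  Everything here is hypothetical, the rack-to-UPS mapping, the data layout, and the function names are invented for illustration; real DCIM suites and the VMware APIs expose their own interfaces.

```python
# Hypothetical sketch of the automated hand-off in point 3: when a UPS
# goes on battery or faults, a DCIM system finds the racks that UPS
# feeds and tells the VM platform which VMs are at risk and which racks
# are safe destinations.  All names and mappings are invented.

RACK_UPS = {"rack-01": "ups-A", "rack-02": "ups-A", "rack-03": "ups-B"}
RACK_VMS = {"rack-01": ["vm-web-1"], "rack-02": ["vm-db-1"], "rack-03": []}

def plan_evacuation(failed_ups):
    """Return (vms_to_move, safe_destination_racks) for a UPS event."""
    at_risk = [vm for rack, ups in RACK_UPS.items()
               if ups == failed_ups
               for vm in RACK_VMS[rack]]
    safe = [rack for rack, ups in RACK_UPS.items() if ups != failed_ups]
    return at_risk, safe

vms, targets = plan_evacuation("ups-A")
print(vms)      # prints ['vm-web-1', 'vm-db-1']
print(targets)  # prints ['rack-03']
```

In practice the "move" step would be handed to the virtualization platform (e.g., a vMotion-style migration), but the key idea is the same: the DCPI-aware system supplies both the at-risk VMs and the physically safe destinations.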

With everyone’s eyes on what virtualization and cloud computing can do for their IT, it’s easy to overlook the effects on DCPI.  Overlooking these effects can compromise availability and lead to lost dollars.

Next week I will give some suggestions on how these effects can be avoided or at least lessened.  If you are interested in learning more about these and other similar issues, take a look at one of Schneider Electric’s newest white papers, White Paper 118.


Please follow me on Twitter @DomenicAlcaro


About Domenic Alcaro:

Domenic Alcaro is Vice President of Enterprise Sales for Schneider Electric’s Data Center Solutions team. Prior to his current role, Domenic held technical, sales, and management roles during his more than 14 years at APC, including Customer Service Team, Inside Sales Manager, District Manager, EAM, Business Development, Director of the Availability Science Center, and Enterprise Regional Manager. In his most recent role as Director of the NYC and Philadelphia Metro Region, he was responsible for helping large corporations improve their enterprise IT infrastructure availability. Domenic is a frequent speaker at industry conferences on topics such as business continuity, physical infrastructure for information technology, and data center design. Domenic holds a Bachelor of Science degree with honors in electrical engineering from the University of Rochester and is a member of the Tau Beta Pi Engineering Honor Society.
