Allocating Energy to IT Users

Enterprises often waste vast amounts of money running inefficient data centers. Reducing a data center's power consumption can yield substantial energy-cost savings, in some cases up to 80%.

In a previous blog post, we discussed how a data center's energy-management strategy must begin with an assessment of energy use. That means measuring energy costs and carbon, and allocating them to IT users.

Start with the servers

A simple way to begin this process is to focus on the data center's servers and their energy consumption, then allocate each server to its IT user to determine that user's overall energy costs.

But the energy required to run a data center goes well beyond the servers themselves. It also includes power distribution, lighting, cooling, storage, networking and auxiliary loads. By factoring in those costs on a per-server basis, you can assign energy costs to IT users far more accurately.

Let’s take, for example, one server that draws 340 W. While the server itself consumes 340 W, other elements of the data center are involved in making it run. Their contributions could break down as follows:

  • Cooling – 360 W
  • Storage – 90 W
  • Power – 75 W
  • Network – 35 W
  • Lighting – 15 W
  • Auxiliary – 15 W

Add those up, and our server that draws 340 W now carries an energy allocation of 930 W!
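To make the arithmetic concrete, here is a minimal sketch of that calculation in Python. The component figures are simply the illustrative values from the example above, and the variable names are our own, not measured data or any vendor's API:

    # Illustrative per-server energy allocation, using the example figures above.
    # All values are assumptions for this sketch, not measured data.

    server_power_w = 340  # the server's own draw

    # Supporting loads attributed to this one server
    overheads_w = {
        "cooling": 360,
        "storage": 90,
        "power": 75,
        "network": 35,
        "lighting": 15,
        "auxiliary": 15,
    }

    # Total allocation = the server's draw plus its share of supporting loads
    total_allocation_w = server_power_w + sum(overheads_w.values())
    print(f"Total energy allocation: {total_allocation_w} W")  # -> 930 W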

Of course, not all servers are created equal: they come in many sizes, consume varying levels of power and utilize other data center resources to different degrees. Still, for data centers in which the servers are identical or relatively uniform, this method is effective, if not precise.

Average vs. specific IT device

However, for data centers with a wide range of server types, this kind of ballpark allocation is inadequate. For example, one IT user may run eight server blades as simple application servers, while another relies on eight mainframes with terabytes of data storage.

Clearly the mainframes use far more energy than the blade servers, so to assign the same energy costs to both IT users is not only inaccurate (and unfair), it undermines the goal of achieving a realistic energy-consumption assessment.

One way to address this situation is to measure every IT device and assign energy to IT users based on your findings. But for a large, complex data center, the cost of such an approach would be prohibitive.

An acceptable alternative is to create a table of server types, each with its own approximate level of energy consumption. A simple server classification table, for example, might assign 4,000 W of power to a large mainframe, 200 W to a web blade, and 90 W to a virtual server.
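As a rough sketch of this table-based approach, the snippet below applies those per-type wattages to the blades-vs-mainframes scenario described earlier. The type names and inventory counts are illustrative assumptions for this sketch only:

    # Hypothetical server classification table: approximate draw per server type.
    # Wattages follow the example figures above; the inventory is made up.
    WATTS_BY_TYPE = {
        "mainframe": 4000,
        "web_blade": 200,
        "virtual_server": 90,
    }

    # Example inventory per IT user: {user: {server_type: count}}
    inventory = {
        "user_a": {"web_blade": 8},
        "user_b": {"mainframe": 8},
    }

    # Allocate energy to each user by counting and categorizing their servers
    for user, servers in inventory.items():
        watts = sum(WATTS_BY_TYPE[t] * n for t, n in servers.items())
        print(f"{user}: {watts} W allocated")  # user_a: 1600 W, user_b: 32000 W

Even this simple lookup makes the gap obvious: the mainframe user is allocated twenty times the energy of the blade user, rather than an identical share.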

By counting and categorizing your servers, you can get a much more accurate picture of how your data center's energy is being consumed. This process can be implemented with software from vendors such as APC by Schneider Electric, or with a simple spreadsheet.

Remember, the ultimate goal is to reduce data center energy costs. The simple methods described above may not be ideal, but they will help you achieve your goal without the unnecessary expense of measuring every IT and support device in the data center.

For more information on data center energy efficiency, read the APC/Schneider Electric white paper, Allocating Data Center Energy Costs and Carbon to IT Users.
