The 19th-century Irish mathematical physicist and engineer Lord Kelvin famously said: “If you cannot measure it, you cannot improve it.” However, it is a basic principle of measurement that you should not start measuring something unless you understand the use that will be made of the data. Simply put, any measurement can prove useless if it’s taken at the wrong time, with insufficient accuracy, or without detail regarding the conditions.
By contrast, excessive measurement at extreme precision can be costly and burdensome whilst providing little additional benefit. When excessive precision is specified, expense and complexity escalate and return on investment (ROI) declines. The question is: how precise do you need to be to run an effective data center energy management program?
Sitting between the two extremes of total instrumentation and zero measurement is the idea of a “good enough” data collection strategy. By combining some measurement with a low-cost approach to modelling, the “good enough” system provides enough accuracy to secure management goals at low cost and with high ROI.
IT capacity can be measured in a variety of ways, but the emphasis here is on methods which are easy and quick to deploy. The simplest way to measure IT capacity is by the number of servers in the data center. If each IT user can be allocated a number of servers, then all that is needed is to attach an energy and carbon value to each unit.
This method does require the identification of all energy uses in the data center and their allocation on a per-server basis. That includes the server’s own energy use, coupled with that for lighting, storage, networking, power, cooling and auxiliary loads.
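To make this concrete, the simplest version of the approach divides total facility energy evenly across the server count. The figures below are purely illustrative assumptions, not values from the article:

```python
# Simplest "good enough" allocation: divide the data center's total
# energy use (IT plus lighting, cooling, power and auxiliary loads)
# evenly across every server. All figures here are assumed examples.
total_facility_kwh = 120_000  # total monthly facility energy (assumed)
server_count = 400            # servers in the data center (assumed)

kwh_per_server = total_facility_kwh / server_count
print(kwh_per_server)  # 300.0 kWh per server per month
```

A user allocated ten servers would then simply be charged ten of these units of energy (and, later, carbon).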
Of course, this method is highly simplistic and therefore could be prone to inaccuracy. So it’ll pay dividends to develop a power profile for each server type you use. That way, for example, the “overhead” attached to a blade server running web applications can be distinguished from a mainframe, or even a blade server running ERP. Each server type has a base standard power level assigned to it, plus an allocation representing a fraction of the overhead power for all supporting networking, storage and power devices and infrastructure.
Having allocated energy use to IT users, it is relatively straightforward to calculate the carbon emissions created by each user. Although the “good enough” model may not be precise, it is sufficiently accurate to inform decision making and it can be enhanced over time as more accurate inputs can be used to improve the model.
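The energy-to-carbon step is a single multiplication by a grid emission factor. The factor below is a hypothetical placeholder; real factors vary by region, supplier and year:

```python
# Assumed grid emission factor; look up the actual figure for your
# region and electricity supplier.
EMISSION_FACTOR_KG_PER_KWH = 0.4  # kg CO2e per kWh (assumed)

def carbon_kg(energy_kwh):
    """Carbon emissions attributable to a given amount of energy use."""
    return energy_kwh * EMISSION_FACTOR_KG_PER_KWH

# If a user's allocated servers consumed 1,000 kWh this month:
print(carbon_kg(1000))  # 400.0 kg CO2e
```

Because the model is multiplicative, swapping in a more accurate emission factor or power profile later improves every user’s figure at once, which is how the “good enough” model is refined over time.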
When every wasted watt is an unrecoverable loss, even simple energy management offers the potential for big savings. Because it is easy and low-cost to implement, the “good enough” system enables a fast ROI as users tap into the 20% to 90% reductions in data center energy use and carbon production which can be made when both IT behaviour and physical infrastructure are managed together. This can also help to justify further investment in a tailored approach to DCIM. You can get more details about “good enough” measurement by downloading APC White Paper #161, “Allocating Data Center Energy Costs and Carbon to IT Users”.