In my last post I discussed the free Schneider Electric Energy University course, “Calculating Total Cooling Requirements.” This time I thought I’d write about a companion course, “Calculating Total Power Requirements.” Between the two of them, you’ll be able to come up with a good estimate of your total data center power and cooling requirements – and to make sure both are “right-sized” such that you’re not spending too much on either.
As the power course details, data center managers often oversize their power requirements by as much as 70%, leading to drastic underutilization and wasted investment. Avoiding such a fate requires an organized methodology, such as the one outlined in the Energy University course.
As with cooling, the idea behind calculating your total data center power requirements is to align power with the current and future requirements of your IT equipment and supporting infrastructure, such as cooling and UPS systems.
The process starts with a needs assessment that identifies the availability requirements of the data center from a power and cooling perspective. These are the familiar configurations such as N, N+1 or 2N.
The next phase involves identifying the resources that contribute to power load and their power requirements. By the end of the exercise you’ll complete calculations to derive both the total load and the critical load. The total load is the sum of the power consumed by the installed IT and physical infrastructure equipment, while the critical load is the load that must be served and protected. That includes all of the IT hardware components that make up the IT business architecture – servers, routers, computers, storage devices, telecommunications equipment and so on – as well as the security, fire protection and monitoring systems that protect them.
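To make the distinction concrete, here’s a minimal sketch in Python with invented equipment figures. It sums only the installed equipment; the supporting infrastructure loads (UPS, lighting, cooling) get added in the later steps.

    # A minimal sketch (all figures invented) of splitting an equipment inventory
    # into critical and non-critical load. The values here are nameplate ratings;
    # the next step adjusts them to expected draw.
    inventory = [
        {"name": "servers",                    "watts": 40_000, "critical": True},
        {"name": "storage arrays",             "watts": 12_000, "critical": True},
        {"name": "network / telecom gear",     "watts": 6_000,  "critical": True},
        {"name": "security, fire, monitoring", "watts": 2_000,  "critical": True},
        {"name": "non-essential office loads", "watts": 3_000,  "critical": False},
    ]

    critical_nameplate_w = sum(item["watts"] for item in inventory if item["critical"])
    equipment_nameplate_w = sum(item["watts"] for item in inventory)

    print(f"Critical equipment (nameplate): {critical_nameplate_w / 1000:.1f} kW")
    print(f"All installed equipment:        {equipment_nameplate_w / 1000:.1f} kW")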
This section of the course explains why you shouldn’t just take the nameplate rating of IT devices at face value when estimating their power draw – at least not if you want to save money. It also discusses how to estimate future critical load requirements, recognizing that IT equipment is in an almost constant state of change.
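As a rough illustration of that idea (the derating and growth factors below are my own assumptions, not figures from the course), the expected draw is estimated as a fraction of nameplate and a growth allowance is layered on top:

    # Illustrative only: the derating and growth factors are assumptions, not
    # figures from the course. Nameplate ratings reflect worst-case draw, so
    # the expected load is estimated as a fraction of nameplate, and a growth
    # allowance covers equipment you expect to add later.
    critical_nameplate_w = 60_000   # sum of nameplate ratings for the critical equipment
    derating_factor = 0.67          # assumed ratio of expected draw to nameplate
    growth_allowance = 0.20         # assumed allowance for future IT additions

    expected_critical_w = critical_nameplate_w * derating_factor
    future_critical_w = expected_critical_w * (1 + growth_allowance)

    print(f"Expected critical load today: {expected_critical_w / 1000:.1f} kW")
    print(f"Critical load with growth:    {future_critical_w / 1000:.1f} kW")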
When it comes to supporting infrastructure, one important piece is the UPS load, which comes with its own set of variables: existing load, future load, and the efficiency of the UPS and its battery charging system. There’s also lighting to account for, and the course provides a fairly simple way to calculate that requirement.
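Here’s a sketch of those two contributions. The 1.3 UPS overhead multiplier and the 2 watts per square foot lighting density are rules of thumb assumed for illustration, not values taken from the course.

    # Sketch of the UPS and lighting contributions, using assumed factors.
    # The 1.3 multiplier (UPS inefficiency plus battery charging) and the
    # 2 W per square foot lighting density are assumptions for illustration.
    future_critical_w = 48_000      # critical load with growth, from the step above
    ups_factor = 1.3                # assumed overhead for UPS losses and battery charging
    floor_area_sqft = 2_000
    lighting_w_per_sqft = 2.0       # assumed lighting density

    ups_load_w = future_critical_w * ups_factor
    lighting_load_w = floor_area_sqft * lighting_w_per_sqft

    print(f"UPS-supported load: {ups_load_w / 1000:.1f} kW")
    print(f"Lighting load:      {lighting_load_w / 1000:.1f} kW")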
Cooling is typically the largest single power draw in the data center, so the course goes into some detail on how to get a handle on it. (Although, as noted in the power course, for a more complete picture you’re better off taking the “Calculating Total Cooling Requirements” course.)
With all these calculations in hand, you’ll learn how to come up with a final electrical capacity figure, which is not quite as simple as adding up all the numbers. Rather, you need to size power to the peak consumption of your various loads, plus whatever margin you choose to apply or are required to apply by code or standard engineering practice. The course will guide you through these calculations, ultimately helping you determine through a simple equation the amount of power you need coming from the utility.
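As a simple illustration of that final step (all of the inputs and the 1.25 design margin below are assumed, not prescribed by the course):

    # Minimal sketch of the final capacity calculation, with assumed inputs.
    # The 1.25 design margin is an illustrative safety factor; the margin you
    # actually apply depends on local codes and engineering practice.
    peak_ups_load_kw = 62.4         # UPS-supported load at peak, from the earlier steps
    peak_cooling_load_kw = 70.0     # assumed result of the cooling calculation
    lighting_load_kw = 4.0
    design_margin = 1.25            # assumed margin required by code or practice

    utility_capacity_kw = (peak_ups_load_kw + peak_cooling_load_kw + lighting_load_kw) * design_margin
    print(f"Required capacity from the utility: {utility_capacity_kw:.0f} kW")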
You’ll also learn about the numerous steps and calculations involved in selecting an appropriate backup generator. It’s an enlightening exercise that shows why you can’t simply take the size of your load and assume a generator of equal size will do the job.
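For illustration only, here is a sizing sketch with an assumed oversizing factor; real generator selection involves the step-load, harmonic and derating analyses the course walks through.

    # Generator sizing sketch with an assumed oversizing factor. Generators are
    # typically rated well above the connected load to handle motor starting
    # currents, UPS harmonics and derating for altitude or temperature; the 1.5
    # factor here is illustrative, not a figure from the course.
    standby_load_kw = 136.0         # assumed total load the generator must carry
    oversize_factor = 1.5           # assumed allowance for step loads and derating

    generator_rating_kw = standby_load_kw * oversize_factor
    print(f"Indicative generator rating: {generator_rating_kw:.0f} kW")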
The good news is that as data center architectures have evolved over the years toward more mobile, modular components, designers have gained more flexibility, making it easier to right-size both power and cooling requirements.
To learn more, take the free course, “Calculating Total Power Requirements.” You’ll find it in the College of Data Centers at Energy University.