All too many data center operators are unable to answer simple questions about their data centers, such as where best to deploy a new server from a power and cooling perspective, or when they will reach the limits of their power and cooling infrastructure.
If your data center is either over-designed, meaning it has lots of spare capacity, or under-utilized, you can likely get along for a while without being able to answer such questions. But a number of forces are coming together to put more stress on data centers, including:
- Use of ultra-high-density IT equipment
- Pressure to control total cost of ownership (TCO) and more fully utilize data center capacity
- Rapid pace of change due to virtualization and the IT equipment refresh cycle
So the day of reckoning, when you’ll have to answer basic questions about capacity, is likely not far away. Doing so requires a systematic approach to capacity management, which is the topic of a free course at Schneider Electric’s Energy University, “Power and Cooling Capacity Management for Data Centers.”
As the course makes clear, the foundation of capacity management is the ability to quantify the supply and the demand for both power and cooling. While such information at the room or facility level helps, it’s not sufficiently detailed to answer questions about specific IT equipment deployments. On the other hand, providing power and cooling supply and demand information at the IT device level is difficult to achieve and unnecessarily detailed. An effective and practical level at which to measure and budget power and cooling capacity is at the rack level.
The model described in the course quantifies power and cooling supply and demand at the rack level in four ways:
- As-configured maximum potential demand
- Current actual demand
- As-configured potential supply
- Current actual supply
You’ll learn more about each of these four measurements during the course, as well as why the supply of power and cooling capacity must always be greater than or equal to demand to prevent failures. This must hold at each rack, and for each supply device feeding a group of racks.
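To make that concrete, here is a minimal sketch, in Python, of the rack-level bookkeeping idea. It is not the course’s model or any vendor’s software; the rack names, kilowatt figures, and the assumed PDU rating are all invented for illustration.

```python
# Illustrative sketch only: field names and numbers are assumptions, not the
# course's model. Each rack tracks the four measurements, and any rack where
# demand could outstrip supply is flagged.
from dataclasses import dataclass

@dataclass
class RackCapacity:
    name: str
    max_potential_demand_kw: float   # as-configured maximum potential demand
    actual_demand_kw: float          # current actual demand (measured)
    potential_supply_kw: float       # as-configured potential supply
    actual_supply_kw: float          # current actual supply

    def violations(self):
        """Return warnings for any way demand could exceed supply at this rack."""
        issues = []
        if self.actual_demand_kw > self.actual_supply_kw:
            issues.append(f"{self.name}: actual demand exceeds actual supply")
        if self.max_potential_demand_kw > self.potential_supply_kw:
            issues.append(f"{self.name}: potential demand exceeds potential supply")
        return issues

racks = [
    RackCapacity("Rack A1", max_potential_demand_kw=8.0, actual_demand_kw=4.2,
                 potential_supply_kw=10.0, actual_supply_kw=10.0),
    RackCapacity("Rack A2", max_potential_demand_kw=12.0, actual_demand_kw=6.5,
                 potential_supply_kw=10.0, actual_supply_kw=10.0),
]

for rack in racks:
    for issue in rack.violations():
        print(issue)   # -> "Rack A2: potential demand exceeds potential supply"

# The same rule applies upstream: the supply device feeding this group of racks
# must also cover their combined demand (PDU rating assumed for illustration).
pdu_rating_kw = 20.0
group_demand_kw = sum(r.actual_demand_kw for r in racks)
if group_demand_kw > pdu_rating_kw:
    print(f"Group over-committed: {group_demand_kw} kW against {pdu_rating_kw} kW")
```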
In practice, that means there should always be excess capacity, with overall supply exceeding overall demand. For purposes of capacity management, that excess capacity comes in four forms (a rough numerical sketch follows the list):
- Spare capacity
- Idle capacity
- Safety margin capacity
- Stranded capacity
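The course explains what each of these forms means. Purely as a rough illustration, the sketch below breaks overall excess down using assumed, simplified definitions: a fixed safety-margin percentage, stranded capacity as the mismatch between power and cooling supply, and idle capacity as reserved-but-undrawn demand. The formulas and figures are placeholders, not the course’s definitions.

```python
# Rough illustration with ASSUMED, simplified definitions; the course defines
# these categories precisely, and the numbers here are invented.
def excess_capacity_breakdown(power_supply_kw, cooling_supply_kw,
                              max_potential_demand_kw, actual_demand_kw,
                              safety_margin_fraction=0.10):
    usable = min(power_supply_kw, cooling_supply_kw)             # demand is limited by the lesser resource
    stranded = max(power_supply_kw, cooling_supply_kw) - usable  # supply of one resource unusable without the other
    safety_margin = usable * safety_margin_fraction              # buffer deliberately held back
    idle = max_potential_demand_kw - actual_demand_kw            # reserved by installed IT gear but not drawn
    spare = usable - safety_margin - max_potential_demand_kw     # headroom left for new deployments
    return {"spare": spare, "idle": idle,
            "safety_margin": safety_margin, "stranded": stranded}

print(excess_capacity_breakdown(power_supply_kw=100.0, cooling_supply_kw=80.0,
                                max_potential_demand_kw=60.0, actual_demand_kw=45.0))
# {'spare': 12.0, 'idle': 15.0, 'safety_margin': 8.0, 'stranded': 20.0}
```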
While it’s possible to keep track of all these measurements with paper and pencil, or by constantly updating a spreadsheet, neither approach is practical. With the dynamic changes made possible by server virtualization and the constantly shifting power and cooling demands of IT equipment, a more automated solution is required.
The course will explain how power and cooling capacity management software addresses issues including the following (a brief sketch of the alerting and modeling idea follows the list):
- Presentation of capacity data
- Setting the capacity plan
- Alerting on violations of the capacity plan
- Modeling proposed changes
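As a taste of the last two items, here is a minimal sketch of alerting against a per-rack capacity plan and modeling a proposed change before it is made. It is not any particular product’s API; the budgets, readings, and warning threshold are invented.

```python
# Minimal sketch, not a vendor API: rack names, budgets, and readings are
# invented to show alerting on capacity-plan violations and what-if modeling.
capacity_plan_kw = {"Rack B1": 6.0, "Rack B2": 6.0}      # budgeted power per rack
measured_demand_kw = {"Rack B1": 5.5, "Rack B2": 4.0}    # latest metered demand

def check_plan(plan, measured, warn_fraction=0.9):
    """Alert on budget overruns; warn when demand nears the budget."""
    for rack, budget in plan.items():
        demand = measured.get(rack, 0.0)
        if demand > budget:
            print(f"ALERT: {rack} exceeds its {budget} kW budget ({demand} kW)")
        elif demand > warn_fraction * budget:
            print(f"WARNING: {rack} is above {int(warn_fraction * 100)}% of its budget")

check_plan(capacity_plan_kw, measured_demand_kw)
# WARNING: Rack B1 is above 90% of its budget

# Model a proposed change: add a 1.5 kW server to Rack B1, then re-check the plan.
proposed_demand_kw = dict(measured_demand_kw)
proposed_demand_kw["Rack B1"] += 1.5
check_plan(capacity_plan_kw, proposed_demand_kw)
# ALERT: Rack B1 exceeds its 6.0 kW budget (7.0 kW)
```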
If it hasn’t arrived already, the day will soon come when your spare data center capacity runs out and you’ll need to answer some basic questions about how best to utilize what you have. Get prepared by taking a few minutes to check out the free course, “Power and Cooling Capacity Management for Data Centers.” You’ll find it in the College of Data Centers at Energy University.