5 Ways Data Center Operations Can Protect Your Utility Bill

By Martin Brennan, Critical Facilities Manager

Let’s picture it for a moment: you’ve built your top-of-the-line data center with all the bells and whistles. This room should serve your growing IT needs for the next 10 years. The dust has just settled from the heat rejection test, where you consumed enough power to launch a satellite into space, and you’re ready to sit back and relax.

 

The reality is that it’s going to take IT the next 10 years, maybe even longer, to fill that raised-floor space to the tested power levels. What do you do with these huge cooling units in the meantime? There’s one unit cooling, one unit re-heating, one unit humidifying and one unit de-humidifying, all to satisfy a single server in the room. This drives your PUE sky high. It’s the equivalent of trying to drive a race car at 1 mile per hour: your 10-cylinder engine is pumping, but your foot is on the brake.

Design processes typically focus on maximizing a data center’s life, and not enough consideration is given to the start and middle. Building for the end can create operational issues that drive up yearly operating costs. That’s why it’s important to protect your utility bill from the cost of operating a nearly empty data center.

Here are five ideas you can start using to better understand what is going on in your data center and to minimize costs:

1. Generate a heat load study. Know what level of power is being consumed in your room. Based on PDU power output to the servers, estimate how much cooling is required to dissipate that heat. Add in any outside factors: lighting, PDU transformer heat, and roof or exterior wall loads. Convert your CRAC unit output from tons to kW, then round up the number of CRAC units you need to dissipate the heat load (see the sketch after this list).

2. Designate master CRACs based upon the heat load data and each CRAC unit’s proximity to the loads. Designate all remaining CRACs as standby units. Set temperature and humidity setpoints on the master units in line with your IT server standards, and open the tolerances on the standby CRACs.

3. Cycle your master/standby CRACs and revisit your heat load study on a monthly basis. This distributes run-time hours across compressors and condenser fans so that the CRACs closest to the heat loads are not the first to fail.

4. Seal all floor tile power penetrations and install blanking panels in all unoccupied cabinets. There’s no way around this one.

5. When performing temperature checks in your data center, standardize your readings. Choose consistent locations, for example the cold aisles or server cabinet doors at 4′ AFF. This helps in troubleshooting airflow problems and in identifying any possible air dams or the need for additional perforated tiles.
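
A minimal sketch of the arithmetic from item 1, written in Python, is below. The wattages, the 20-ton CRAC rating, and the outside-factor estimates are placeholder figures for illustration only; substitute your own measurements. The one hard number is the conversion factor: one ton of refrigeration is roughly 3.5 kW of cooling.

```python
import math

TON_TO_KW = 3.517  # 1 ton of refrigeration is roughly 3.517 kW of cooling

# Placeholder inputs -- replace with readings from your own room.
pdu_output_kw = 42.0           # total power delivered to servers (PDU metering)
lighting_kw = 2.5              # room lighting
pdu_transformer_loss_kw = 1.8  # transformer heat inside the PDUs
envelope_kw = 3.0              # roof / exterior wall load estimate

# Nearly all of the power drawn ends up as heat in the room.
total_heat_kw = pdu_output_kw + lighting_kw + pdu_transformer_loss_kw + envelope_kw

# Convert each CRAC's rated capacity from tons to kW.
crac_capacity_tons = 20
crac_capacity_kw = crac_capacity_tons * TON_TO_KW

# Round up to the number of master CRACs needed to carry the load.
master_cracs_needed = math.ceil(total_heat_kw / crac_capacity_kw)

print(f"Heat load: {total_heat_kw:.1f} kW")
print(f"Capacity per CRAC: {crac_capacity_kw:.1f} kW")
print(f"Master CRACs needed: {master_cracs_needed}")
```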

In the event of a master unit failure, the standby units will still cool the space as the temperature in the room starts to rise. You can then adjust temperature setpoints until the failed unit has been repaired.

Here’s an operational mistake to avoid: powering off excess CRAC units. You may get away with one or two, but watch out. To control costs when purchasing a raised floor, rooms are often fitted on day one with all the perforated tiles required to operate at full capacity. That decision removes the ability to power off individual CRAC units: for each CRAC that is powered off, airflow decreases across every perforated tile. The few cabinets that do hold servers can then suffer from low CFM through their tiles and trigger high-temperature alarms. This is why we open the tolerances on the standby CRACs rather than powering them off: static pressure is maintained, and the perforated tiles that are supplying a server are never starved of airflow.
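
A rough back-of-the-envelope calculation shows why. The airflow and tile counts below are made-up illustrative numbers, but the pattern holds for any room whose full complement of perforated tiles is already installed:

```python
# Illustrative numbers only -- actual airflow depends on your CRAC models and floor.
cfm_per_crac = 12000      # supply airflow per running CRAC unit
perforated_tiles = 120    # tiles installed on day one for full capacity

def cfm_per_tile(running_cracs: int) -> float:
    """Rough average airflow through each perforated tile."""
    return running_cracs * cfm_per_crac / perforated_tiles

for running in (4, 3, 2):
    print(f"{running} CRACs running: ~{cfm_per_tile(running):.0f} CFM per tile")

# Every CRAC you power off lowers the airflow at every tile,
# including the few tiles feeding your populated cabinets.
```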

Last but certainly not least, be sure to document all of your actions. Take snapshots of your PUE and operations before you incorporate your changes, and then document your energy savings. Now that you have completed these five items, you can bring down cooling costs and sleep easy at night knowing your decisions are backed by data.
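
For the before-and-after snapshot, recall that PUE is simply total facility power divided by IT load power. A small helper like this, shown here with hypothetical meter readings, is enough to keep the record consistent:

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT load power."""
    return total_facility_kw / it_load_kw

# Hypothetical meter readings -- replace with your own snapshots.
before = pue(total_facility_kw=180.0, it_load_kw=60.0)  # 3.00
after = pue(total_facility_kw=132.0, it_load_kw=60.0)   # 2.20

print(f"PUE before: {before:.2f}, PUE after: {after:.2f}, "
      f"facility power saved: {180.0 - 132.0:.0f} kW")
```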

 
