Just as an automobile benefits from regular servicing, your data center will benefit from periodic health checks of its cooling system to identify potential flaws that could lead to temperature-related IT equipment failure. These checks can also establish a baseline, so that subsequent corrective actions can be shown to deliver improvements, and help you evaluate whether you have adequate cooling capacity for future data center plans.
A thorough cooling system checkup should include an examination of these nine items:
1. Maximum cooling capacity. If there isn’t enough gas in the tank to power the engine, no amount of tweaking will improve the situation. Check the overall cooling capacity to ensure that the IT equipment in the data center does not exceed it. Remember that every watt of power consumed requires a watt of cooling. If demand exceeds supply, you will need major re-engineering work or self-contained high-density cooling solutions.
2. CRAC (computer room air conditioning) units. Supply and return temperatures and humidity readings must be consistent with design values. Check set points and reset if necessary. A return air temperature considerably below room ambient temperature indicates a problem in the supply air path, causing cooled air to bypass the IT equipment and return directly to the CRAC unit. Check that all fans are operating properly and that alarms are functioning. Ensure that all filters are clean.
3. Chilled water/condenser loop. Check the condition of the chillers and/or external condensers, pumping systems, and primary cooling loops. Ensure that all valves are operating correctly. If DX (direct expansion) systems are used, check that they are fully charged.
4. Room temperatures. Check temperature at strategic positions in the aisles of the data center, generally centered between equipment rows and spaced approximately every fourth rack position.
5. Rack temperatures. Measuring points should be at the center of the air intakes at the bottom, middle, and top of each rack. Record these temperatures and compare them with the manufacturer’s recommended intake temperatures for the IT equipment.
6. Tile air velocity. If a raised floor is used as a cooling plenum, air velocity should be uniform across all perforated tiles or floor grilles.
7. Condition of subfloors. Any dirt and dust present below the raised floor will be blown up through vented floor tiles and drawn into the IT equipment. Under-floor obstructions such as network and power cables obstruct airflow and have an adverse effect on the cooling supply to the racks. (See our previous post: Overhead Cabling Can Reduce Data Center Energy Costs)
8. Airflow within racks. Gaps within racks – such as unused rack space without blanking panels, empty blade slots without blanking blades, and unsealed cable openings – as well as excess cabling will degrade cooling performance.
9. Aisle & floor tile arrangement. Effective use of the subfloor as a cooling plenum depends upon the arrangement of floor vents and positioning of CRAC units. (For a more detailed description see the APC by Schneider Electric white paper, “Cooling Audit for Identifying Potential Cooling Problems in Data Centers.”)
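The capacity check in item 1 is simple arithmetic, and a short sketch makes the one-watt-in, one-watt-out rule concrete. The load and capacity figures below are hypothetical examples, not measurements from any real facility:

```python
# Rough capacity check (item 1): every watt of IT load consumed must be
# matched by a watt of cooling capacity. All figures are hypothetical.

def cooling_headroom_kw(it_load_kw: float, cooling_capacity_kw: float) -> float:
    """Return remaining cooling headroom in kW; negative means a shortfall."""
    return cooling_capacity_kw - it_load_kw

# Hypothetical room: 180 kW of IT load against 200 kW of rated cooling.
headroom = cooling_headroom_kw(it_load_kw=180.0, cooling_capacity_kw=200.0)
if headroom < 0:
    print(f"Shortfall of {-headroom:.0f} kW - major re-engineering or "
          "self-contained high-density cooling needed")
else:
    print(f"{headroom:.0f} kW of headroom available for future plans")
```

A positive result is also the number to compare against any planned IT additions before they are deployed.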
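The rack-temperature survey in item 5 lends itself to a simple pass/fail comparison. The sketch below assumes a recommended intake window of 18–27 °C for illustration only; use the actual intake temperatures recommended by your IT equipment manufacturer. The readings are hypothetical:

```python
# Compare measured rack intake temperatures (bottom/middle/top, item 5)
# against a recommended range. The 18-27 degC window and the readings
# below are illustrative assumptions, not manufacturer figures.

RECOMMENDED_RANGE_C = (18.0, 27.0)

def out_of_range(readings_c: dict,
                 low: float = RECOMMENDED_RANGE_C[0],
                 high: float = RECOMMENDED_RANGE_C[1]) -> dict:
    """Return the measurement points whose intake temperature falls outside the range."""
    return {point: t for point, t in readings_c.items() if not low <= t <= high}

# Hypothetical survey of one rack: intake temperatures at bottom, middle, top.
rack_a = {"bottom": 19.5, "middle": 23.0, "top": 28.5}
print(out_of_range(rack_a))  # flags the hot spot at the top of the rack
```

Recording the results per rack, as the checklist suggests, also gives you the baseline to compare against after any corrective work.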
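Similarly, the tile-velocity check in item 6 can be reduced to a single uniformity number. One common way to express spread is the coefficient of variation (standard deviation divided by the mean); the readings and any pass/fail threshold below are illustrative assumptions:

```python
# Item 6: air velocity should be roughly uniform across all perforated
# tiles. The coefficient of variation (std dev / mean) is one simple
# uniformity metric; the readings here are hypothetical.
from statistics import mean, pstdev

def velocity_cv(velocities_mps: list) -> float:
    """Coefficient of variation of tile face velocities (m/s); 0.0 = perfectly uniform."""
    return pstdev(velocities_mps) / mean(velocities_mps)

tiles = [1.9, 2.1, 2.0, 0.9, 2.2]  # hypothetical readings; one starved tile
print(f"CV = {velocity_cv(tiles):.2f}")  # a high CV flags uneven under-floor airflow
```

A starved tile usually points back to items 7 and 9: under-floor obstructions or poor placement of vents relative to the CRAC units.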
Performing a cooling system health check is just one of 10 tips for ensuring good data center cooling performance. Check out the other nine in the APC by Schneider Electric white paper, “Ten Cooling Solutions to Support High-density Server Deployment.”