
Just Chill, Let ‘Cooling Optimize’ Keep Your Server Inlet Temps Steady

If you’re reading this, then you probably already know that cooling systems are the second biggest energy consumer in data centers, after the IT equipment itself. In more lightly loaded traditional data centers and distributed IT sites, cooling can easily account for 20-40% of the energy consumed by the entire facility. So, as pressure to reduce energy consumption and carbon emissions grows, operators are looking for ways to minimize required cooling power and become more efficient. At the same time, availability of the IT remains paramount – people lose jobs over it, after all – and effective, continuous heat removal is critical to maintaining that availability. So, from the perspective of the mechanical system, what can you do to minimize energy consumption while also enhancing the continuous availability of the IT?

Establishing an Efficient, Reliable Cooling System

As explained in Schneider Electric White Paper 225, “Optimize Data Center Cooling Systems with Effective Control Systems,” establishing an efficient, reliable cooling system fundamentally involves three tasks: (1) selecting an appropriate cooling architecture, (2) adopting an effective cooling control system, and (3) managing the airflow in the space, i.e., arranging racks into hot/cold aisles and using air containment. But once you’re in operation, what else can you do besides manually adjust set points, employ an economizer mode, have a good service plan, and hope for cool weather? One answer is to use Schneider Electric’s EcoStruxure™ IT Advisor Cooling Optimize. Cooling Optimize is an AI-based, closed-loop control system that automatically and continuously adjusts airflow to stabilize server inlet temperatures in a way that balances the need for cooling with the lowest possible energy use.

Challenges of Maintaining Proper Airflow Distribution

As I recently said in my blog announcing our new cloud-based CFD tool, proper distribution of airflow around IT equipment is the primary cooling challenge in data centers – even when the total cooling capacity far exceeds the IT heat load. This is a challenge because cooling system dynamics, particularly in a data center, are so complex. Consider an air-cooled packaged chiller system, for example. The energy impact of raising the IT temperature set point is not obvious. The chiller energy would go down, yes, but the cooling capacity of the CRAH might decrease, and the CRAH fans might have to spin up to compensate. The dry cooler energy might also go up, since the system will spend more of the year operating in economizer mode. And what happens to the IT server fan energy as you raise that inlet temperature set point? It’s difficult to figure out the overall energy impact of a seemingly simple setting change.
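To make that trade-off concrete, here is a toy model of the subsystems described above. Every coefficient and function shape is hypothetical – invented purely to illustrate why the total energy curve is non-obvious – and is not taken from any real chiller, CRAH, or server data.

```python
# Toy model: how raising the IT inlet temperature set point shifts
# energy between subsystems. All coefficients are illustrative only.

def chiller_kw(setpoint_c: float) -> float:
    # Chiller energy tends to fall as the set point rises.
    return max(0.0, 120.0 - 3.0 * (setpoint_c - 18.0))

def crah_fan_kw(setpoint_c: float) -> float:
    # Warmer supply air reduces CRAH capacity, so fans spin up;
    # fan power grows roughly with the cube of fan speed.
    speed = 1.0 + 0.03 * (setpoint_c - 18.0)
    return 20.0 * speed ** 3

def server_fan_kw(setpoint_c: float) -> float:
    # Server fans ramp up once inlet temps exceed ~25 °C.
    return 15.0 + 4.0 * max(0.0, setpoint_c - 25.0)

def total_kw(setpoint_c: float) -> float:
    return (chiller_kw(setpoint_c) + crah_fan_kw(setpoint_c)
            + server_fan_kw(setpoint_c))

# Sweep set points: savings in one subsystem are partly eaten by others.
for sp in range(18, 29, 2):
    print(f"{sp} °C -> {total_kw(sp):.1f} kW total")
print("lowest total at", min(range(18, 29), key=total_kw), "°C")
```

Even in this crude sketch, the minimum total energy falls at an intermediate set point rather than at either extreme – which is exactly why eyeballing a single setting change is so unreliable.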

Further complicating matters, data centers are dynamic environments where the equipment population and layout change over time, and the heat load changes frequently in response to computing traffic. Non-uniform rack layouts and rack densities also lead to non-uniform cooling requirements. System efficiency varies by load, outdoor air temperature, cooling unit settings, IT room dew point, and more. People compensate for this complexity by simply over-cooling and blowing tons of air. Obviously, this is wasteful and puts unnecessary strain on cooling units. These challenges and complications can be mitigated by adopting an effective control system that relieves the human operator of having to understand all of these interactions and make moment-to-moment decisions.

Cooling Optimize Solves Many of These Challenges

Cooling Optimize is a supervisory control system that manages the CRACs/CRAHs in the white space alongside the IT equipment. It uses a closed-loop system composed of rack-based wired or wireless sensors and an on-premises AI Engine appliance. The sensors measure temperature at the server air intake. This temperature data, along with various points of information from the individual CRAC/CRAH units (used to determine unit capacities in real time), is collected and analyzed by the AI appliance, which continuously monitors the environment and issues commands to the cooling units in the white space.

These commands include:

  • Turning fans on or off
  • Ramping fan speeds up or down
  • Turning the cooling units themselves on and off

By responding to ongoing changes in the environment, the AI Engine dynamically matches facility cooling to the IT load in real time. The objective is to maintain rack inlet temperatures at or below a user-defined threshold. Because data centers and distributed IT sites often deploy more cooling than necessary, the AI Engine can typically reduce energy consumption (and cooling unit wear and tear) by turning off excess cooling units, decreasing fan speeds, and balancing airflow. The graphic below illustrates a typical generic deployment of Cooling Optimize.
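The behavior described above – read inlet temperatures, compare them to a user-defined threshold, then ramp, start, or shed cooling units – can be sketched as a simple supervisory loop. This is a minimal illustration under assumed interfaces (`CoolingUnit`, the threshold values, the ramp increments), not the actual Cooling Optimize control logic.

```python
# Minimal sketch of one pass of a supervisory cooling loop.
# All names, thresholds, and increments are hypothetical.
from dataclasses import dataclass

@dataclass
class CoolingUnit:
    name: str
    on: bool = True
    fan_speed: float = 1.0  # fraction of maximum speed

THRESHOLD_C = 27.0  # user-defined max rack inlet temperature
MARGIN_C = 3.0      # headroom before trimming cooling back

def control_step(inlet_temps, units):
    """One supervisory pass: match facility cooling to measured IT load."""
    hottest = max(inlet_temps)
    if hottest > THRESHOLD_C:
        # Out of compliance: restart idle units and ramp fans up.
        for u in units:
            u.on = True
            u.fan_speed = min(1.0, u.fan_speed + 0.1)
    elif hottest < THRESHOLD_C - MARGIN_C:
        # Comfortably cool: trim fan speeds, shed an excess unit.
        for u in units:
            u.fan_speed = max(0.3, u.fan_speed - 0.05)
        spare = next((u for u in units if u.on and u.fan_speed <= 0.3), None)
        if spare and sum(u.on for u in units) > 1:
            spare.on = False
    return units

units = [CoolingUnit("CRAH-1"), CoolingUnit("CRAH-2")]
control_step([23.5, 22.8, 22.1], units)  # room is cool: fans get trimmed
print([(u.name, u.on, round(u.fan_speed, 2)) for u in units])
```

The real system layers machine learning on top of a loop like this, as described in the next section; the point here is only the closed-loop shape: measure, decide, command, repeat.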

Cooling Optimize typical general deployment layout

Ensuring Thermal Compliance and Efficiency in Every Control Group

The Cooling Optimize AI Engine ensures thermal compliance and efficiency in every control group in a facility. A control group is a discrete area of the data center that is independently monitored and controlled by the system. A facility might have multiple control groups. What defines a specific group is that it is thermally isolated from the others, meaning there is no airflow passing between control groups.
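Since a control group is defined by thermal isolation – no airflow passing between groups – grouping can be pictured as finding connected components in an airflow-coupling graph. The zone names and links below are invented for illustration; the real system’s grouping method isn’t described here.

```python
# Sketch: derive control groups as connected components of a graph
# whose edges mean "airflow passes between these zones."
# Zone names and links are hypothetical.
from collections import defaultdict

airflow_links = [("RoomA-Row1", "RoomA-Row2"), ("RoomB-Row1", "RoomB-Row2")]
zones = {"RoomA-Row1", "RoomA-Row2", "RoomB-Row1", "RoomB-Row2", "Closet-1"}

def control_groups(zones, links):
    """Group zones into thermally isolated control groups."""
    adj = defaultdict(set)
    for a, b in links:
        adj[a].add(b)
        adj[b].add(a)
    seen, groups = set(), []
    for z in sorted(zones):
        if z in seen:
            continue
        stack, group = [z], set()
        while stack:  # depth-first walk of one component
            cur = stack.pop()
            if cur in seen:
                continue
            seen.add(cur)
            group.add(cur)
            stack.extend(adj[cur] - seen)
        groups.append(group)
    return groups

print(control_groups(zones, airflow_links))
```

Here the two rooms and the isolated closet fall into three separate groups, each of which would be monitored and controlled independently.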

Machine learning is used to analyze correlations in each control group, and build an empirical influence model of the thermal impact of each cooling unit on every rack sensor in the group. This model is continuously updated and is what drives the AI Engine’s cooling control decisions. In effect, the model can predict the consequences of its behavior before taking the action. This helps ensure temperatures are properly controlled and gives Cooling Optimize the ability to precisely deliver cooling wherever and whenever it is needed.
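The idea of an influence model that predicts consequences before acting can be shown with a deliberately tiny example: a matrix of per-unit cooling influence on each rack sensor, used to check a candidate fan setting before applying it. The coefficients and the linear form are assumptions made for illustration – the actual learned model is empirical and continuously updated.

```python
# Sketch of predict-before-act using an assumed linear influence model.
# influence[sensor][unit]: °C of cooling delivered per unit fan speed (0..1).
influence = [
    [6.0, 1.5],   # rack sensor A: mostly cooled by unit 0
    [2.0, 5.5],   # rack sensor B: mostly cooled by unit 1
]
baseline = [32.0, 31.0]  # assumed inlet temps with no cooling (°C)
THRESHOLD = 27.0         # user-defined compliance limit

def predict(fan_speeds):
    """Predicted inlet temperature per sensor for a candidate setting."""
    return [b - sum(c * s for c, s in zip(row, fan_speeds))
            for row, b in zip(influence, baseline)]

def safe_to_apply(fan_speeds):
    """Apply a setting only if every predicted temp stays compliant."""
    return all(t <= THRESHOLD for t in predict(fan_speeds))

print(predict([1.0, 1.0]))        # full cooling on both units
print(safe_to_apply([0.5, 0.5]))  # can we safely halve both fans?
```

In this sketch, halving both fans is rejected because a predicted temperature would exceed the threshold – the same kind of check that lets the AI Engine trim cooling only where the model says it is safe.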

A Significant and Positive Impact on your Business

Cooling Optimize, even though it affects only the cooling units in your data center’s white space, can have a significant and positive impact on your business. By controlling and adjusting CRAC/CRAH airflows continuously in real time, we’ve seen customers experience some exciting and beneficial results including, on average:

  • 10% reduction in PUE
  • 38% reduction in cooling system power usage
  • 70% reduction in hotspots
  • 546 tons of CO2 reduced per site

To learn more about how Cooling Optimize works and how it can deliver significant cooling energy savings, improve IT availability, and help you achieve your sustainability goals, click HERE.
