As processor technologies continue to evolve and as artificial intelligence (AI) workloads proliferate, rack power densities within data center spaces are increasing. In High Performance Computing (HPC) data centers, for instance, a single IT rack full of servers performs billions of operations per second, producing intense heat over a small area that must be removed. At such high heat densities, air cooling may not be sufficient; liquid cooling must be employed instead.
Advantages of rack-based liquid cooling in high density data centers
Data center owner/operators moving towards a more compute-intensive future are considering an eventual transition from air-based to liquid-based rack cooling systems. Liquids offer a far higher heat removal capacity than air. They also have the potential to reduce cooling energy consumption per square foot of data center space, which can lower costs and reduce carbon emissions. Liquids also make it more feasible to reuse waste heat for other benefits (such as heating nearby homes or producing electricity).
For now, facilities requiring liquid cooling are in the minority. Across most data centers, typical rack deployments rarely exceed power densities of 10kW per rack. A few sites may reach densities of 15 to 20kW per rack, but these are the exception. In fact, environments below 35kW per rack should be able to continue operating with standard air-cooled designs. For many, the cost of migrating to a liquid-cooled environment and replacing existing power systems with upgraded PDUs, plugs, and busbars does not yet justify the investment.
Colocation providers weigh liquid cooling options
However, colocation providers who are approached by tenants wishing to deploy high density racks now need to consider all options for efficient cooling components. Colocation stakeholders will also need to determine how the acceleration of the liquid cooling trend will impact their greenfield design/build and brownfield modernization decisions.
If liquid cooling is the next step, providers should weigh the tradeoffs around how rack power densities and power distribution will be affected. If they do not, facilities attempting to accommodate workloads that require liquid cooling could find themselves at a competitive disadvantage down the road because of insufficient power distribution designs.
Key takeaways from study on rack power densities
Engineering teams at Schneider Electric commissioned an internal study evaluating the impact of liquid cooling on power distribution in the IT room. The purpose of the study was not to compare the merits of air cooling and liquid cooling. Rather, the goal was to determine how best to optimize power infrastructure costs while maintaining reliability and safety at power densities exceeding 35kW per rack (densities where liquid-based cooling systems may come into play). The study focused on IEC power environments, though observations and considerations may be similar for North America. Below are some highlights that summarize how a migration to liquid cooling will impact high density power distribution designs.
General observations
- Current air-cooled approaches can support densities of up to 35kW per rack, which existing power distribution equipment and infrastructure can accommodate without major issues.
- 63A is the maximum practical ampacity for rack PDUs. Anything higher than 63A will make the required power cords too large and unwieldy to handle. At 400V, this yields a maximum rack density of roughly 35kW per rack (see the worked calculation after this list).
- Above 63A, the plug standards will need to change. Plugs will require switch disconnects to prevent arc flash and could require an electrician to connect.
- In general, the higher the power delivered, the higher the short circuit requirements. Some equipment, such as rack PDUs, may have limited short circuit capabilities.
- Schneider Electric’s Acti9 miniature circuit breakers up to 63A have very high short-circuit limiting capability. This reduces the actual short circuit current seen by downstream equipment.
- For higher rack densities, multiple pairings of 63A rack PDUs and feeder circuit breakers will be required.
- Deploying smaller multiples of 63A rack PDUs instead of a single 100A or 150A rack PDU is a safer approach as it reduces the likelihood of potential arc flash, limits short circuit current, and does not require an electrician for installation or for operation of the power equipment in the IT space.
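As a rough check on the 35kW figure above, the arithmetic below applies the standard three-phase power formula to a 63A feed at 400V. The 80% continuous-load derating and unity power factor are our assumptions here (common sizing practice, not stated in the study):

```latex
P_{\text{feed}} = \sqrt{3} \cdot V_{LL} \cdot I
                = \sqrt{3} \cdot 400\,\text{V} \cdot 63\,\text{A}
                \approx 43.6\,\text{kVA}

P_{\text{rack}} \approx 0.8 \cdot P_{\text{feed}} \approx 35\,\text{kW}
```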
Specific considerations
Schneider Electric engineers found that the maximum power combination for current rack PDUs and circuit breakers, at 400V, ranges from 33kW to 35kW per rack. Higher rack densities will require multiple rack PDUs and circuit breakers per rack at N redundancy (again, assuming 400V distribution).
Listed below are capacity categorizations based on the testing (a simple sizing sketch follows the list):
- a 50kW rack should have 2x 63A rack PDUs and circuit breakers
- an 80kW rack should have 3x 63A rack PDUs and circuit breakers
- a 100kW rack should have 4x 63A rack PDUs and circuit breakers
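A minimal sketch of this sizing rule, assuming a per-PDU planning capacity of 33kW (the low end of the study's 33-35kW maximum, leaving headroom) and simple ceiling division; the function name and planning threshold are illustrative assumptions, not from the study:

```python
import math

# Assumed planning capacity per 63A rack PDU at 400V, in kW. 33kW is
# the low end of the study's 33-35kW maximum; treating it as a planning
# ceiling (our assumption) reproduces the categorizations above.
PDU_PLANNING_KW = 33

def pdus_required(rack_kw: float) -> int:
    """Number of 63A rack PDU / feeder circuit breaker pairs needed to
    feed a rack of the given power density at N redundancy."""
    return math.ceil(rack_kw / PDU_PLANNING_KW)

for kw in (50, 80, 100):
    print(f"{kw}kW rack -> {pdus_required(kw)}x 63A rack PDUs")
# 50kW rack -> 2x 63A rack PDUs
# 80kW rack -> 3x 63A rack PDUs
# 100kW rack -> 4x 63A rack PDUs
```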
A 63A rack PDU must be a double-wide rack PDU to support the outlet density needed for the rack PDU’s rated capacity. A single rack can host up to two double-wide rack PDUs in its rear cabling channel. With the addition of a bustle kit, the rack can easily accommodate a total of six double-wide rack PDUs. Accommodating additional rack PDUs (above and beyond the 4x 63A configuration) may require a custom solution.
Regarding thermal considerations, power cords would need larger jackets to operate at 60°C (140°F) while maintaining their rated current. If the heat limit goes beyond 60°C, a warning sticker would be needed to indicate heat beyond the human touch-safe limit (i.e., too hot to touch without risking a burn).
Practical guidance on high density data centers
As an expert in data center power and energy management, Schneider Electric is in a strong position to support colocation providers seeking to migrate to higher density data center environments. Our current research efforts and future liquid cooling reference designs will serve as helpful guides for lowering the risks of high density implementation projects. In the meantime, to learn more, download our White Paper 155, Calculating Space and Power Density Requirements for Data Centers.
With technical inputs from Julien Moreau and Daniel Rohr.
Corrections to a misstated acronym and additional clarification on the IEC regions where the study was focused were applied March 30, 2021.