How do liquid cooling reference designs optimize and accelerate AI data center deployments?

AI workloads, cloud deployments, and colocation requirements are driving unprecedented demand for power and cooling in data centers. As a result, data centers are reaching new levels of complexity that require tightly engineered, precise, end-to-end approaches to power and cooling.

Liquid cooling solutions provide a level of heat dissipation that traditional air cooling cannot deliver. Compared to air, water has over 23 times higher thermal conductivity and can store roughly 3,000 times more heat per unit volume, making it ideal for high-density cooling. Industry analysts project the liquid cooling market will grow more than sevenfold, from about $2.8 billion today to over $21 billion by 2032.
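As a quick sanity check on that volumetric figure, the back-of-the-envelope sketch below uses approximate room-temperature textbook property values; the numbers are generic assumptions, not figures from any vendor datasheet.

```python
# Rough comparison of how much heat a unit volume of water vs. air can
# absorb per degree of temperature rise (approximate values near 25 degrees C).

water_density = 997.0   # kg/m^3
water_cp = 4186.0       # J/(kg*K), specific heat of liquid water
air_density = 1.18      # kg/m^3
air_cp = 1005.0         # J/(kg*K), specific heat of air

water_per_m3_k = water_density * water_cp   # ~4.2e6 J/(m^3*K)
air_per_m3_k = air_density * air_cp         # ~1.2e3 J/(m^3*K)

# Ratio works out to a few thousand, in line with the figure cited above.
print(f"Water stores ~{water_per_m3_k / air_per_m3_k:,.0f}x more heat per unit volume than air")
```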


The importance of liquid cooling data center design

As liquid cooling becomes the default for high-density facilities, incorporating a liquid cooling data center design is essential. Reference designs help operators make informed decisions and avoid costly missteps, especially since cooling failures account for 13% of data center outages, according to recent Uptime Institute data. With 70% of outages costing more than $100,000 and a quarter exceeding $1 million, the stakes for getting liquid cooling right from the start are extremely high.

What is the role of reference designs in liquid cooling?

A reference design is like a recipe, listing the components and explaining how they fit together. Liquid cooling data center designs provide pre-engineered, validated blueprints complete with all requisite hardware, piping, manifolds, and fittings. And unlike a bill of materials, a reference design provides guidance on how all the intricate parts fit together.

Why are reference designs critical for liquid cooling?

With air cooling, additional capacity is often added incrementally as hardware scales, resulting in a siloed approach. This doesn’t work with liquid cooling. For one thing, many AI servers have no fans, shifting more of the thermal burden to the facility’s liquid cooling system. Beyond that, high-density environments require holistic planning for all components, including the cooling infrastructure.

Reference designs account for current and future needs, taking into account growth in GPU demand and escalating heat flux and rack density. They also address Delta T (ΔT), flow rates, filtration, equipment serviceability, and the physical location of the coolant distribution unit (CDU).
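To make the relationship between ΔT and flow rate concrete, here is a minimal sizing sketch based on the single-phase heat balance Q = ṁ·cp·ΔT. The 100 kW rack load and 10 °C loop ΔT are illustrative assumptions, not values from any particular reference design.

```python
# Minimal coolant flow-rate estimate from heat load and loop delta-T,
# using Q = m_dot * cp * delta_T for a single-phase, water-based loop.
# Rack load and delta-T are illustrative assumptions only.

rack_heat_load_w = 100_000   # W, assumed IT heat captured by the liquid loop
delta_t_k = 10.0             # K, assumed supply-to-return temperature rise
cp = 4186.0                  # J/(kg*K), specific heat of water
density = 997.0              # kg/m^3

mass_flow = rack_heat_load_w / (cp * delta_t_k)      # kg/s
volume_flow_lpm = mass_flow / density * 1000 * 60    # litres per minute
volume_flow_gpm = volume_flow_lpm / 3.785            # US gallons per minute

print(f"Required flow: ~{volume_flow_lpm:.0f} L/min (~{volume_flow_gpm:.0f} GPM)")
# A higher delta-T cuts the required flow proportionally, which is why
# reference designs treat delta-T, pump sizing, and CDU placement together.
```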

How do liquid cooling reference designs accelerate time to market?

AI infrastructure designs have typically been slow and costly due to long test cycles for performance and compatibility, integration challenges, and security and compliance validation. Reference designs accelerate deployment with pre-certified hardware and software solutions, benchmarked configurations that deliver predictable performance, and documented best practices.

As such, liquid cooling reference designs deliver several strategic benefits:

  • Faster AI service delivery and quicker ROI
  • Greater predictability for capacity planning
  • Enhanced IT productivity and reduced operational costs
  • Competitive edge through accelerated innovation cycles
  • Reduced risk and future-proofed AI factories

Why work with a supplier who understands data center infrastructure?

With cooling integral to infrastructure planning, the liquid cooling provider plays a critical role. Liquid cooling is a system, not a SKU, so it is crucial to choose a supplier who understands data center infrastructure. Working with an experienced supplier ensures access to fully tested designs and a partner capable of aligning them across the cooling, mechanical, and IT domains. This accelerates procurement, commissioning, and deployment.

Schneider Electric, together with Motivair by Schneider Electric, offers power and cooling reference designs co-developed with NVIDIA. The designs are turnkey, validated blueprints engineered for repeatability and scalability, and leverage Schneider’s extensive library of vendor-neutral templates.

Solutions like Motivair’s ChilledDoor® rear‑door heat exchanger, part of Schneider Electric’s liquid cooling portfolio, can remove tens of kilowatts per rack while reusing existing chilled water systems, and can be included as options in reference designs for brownfield AI deployments.

A supplier with proven reference-design expertise also helps avoid common deployment issues; a minimal sizing sketch follows the list:

  • Incorrect Flow Coefficient (Cv) values – undersized valves that choke coolant flow and starve GPUs of cooling
  • Mismatched pump curves – pumps that can’t push coolant through restrictive GPU cold plates, leading to overheating
  • Improper loop ΔT – incorrect temperature differences between supply and return water, reducing cooling efficiency and stability
  • Controls logic gaps – missing or flawed automation rules that prevent the cooling system from responding quickly to GPU load changes
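As a simple illustration of the first item, incorrect Cv values, the sketch below applies the standard US valve-sizing relation, flow in GPM = Cv × √(ΔP in psi ÷ specific gravity). The required flow and the candidate Cv values are assumptions chosen purely for illustration.

```python
# Pressure drop across a valve at a given flow, from the standard US
# valve-sizing relation: flow_gpm = Cv * sqrt(dP_psi / SG).
# The flow requirement and candidate Cv values are illustrative assumptions.

def valve_pressure_drop_psi(flow_gpm: float, cv: float, specific_gravity: float = 1.0) -> float:
    """Return the pressure drop (psi) across a valve with the given Cv at flow_gpm."""
    return specific_gravity * (flow_gpm / cv) ** 2

required_flow_gpm = 38.0          # e.g., the ~38 GPM from the earlier delta-T sketch

for cv in (5.0, 15.0, 40.0):      # undersized, marginal, and generously sized valves
    dp = valve_pressure_drop_psi(required_flow_gpm, cv)
    print(f"Cv = {cv:>4}: ~{dp:5.1f} psi drop at {required_flow_gpm} GPM")

# An undersized Cv forces a large pressure drop at the required flow (or, if the
# pump cannot supply that head, the flow collapses) - which is how a wrong Cv
# ends up starving GPUs of coolant.
```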

Best practices for liquid cooling architecture

AI hardware evolves every 12 to 18 months. Without a fully engineered thermal envelope, the data center infrastructure risks obsolescence before it’s even commissioned. An experienced supplier can design systems that survive multiple GPU refresh cycles and coordinate between OEMs and facility engineers. The right supplier brings silicon-level insight into thermal and transient behavior, accounting for next-generation heat flux, inlet temperature shifts, cold-plate geometry changes, liquid class evolution, and transient load behaviors.

As AI data centers evolve, densities rise, and workloads intensify, the cooling strategy must keep pace. Suppliers who understand the entire data center infrastructure, including power and cooling components, can help data centers deliver repeatable, reliable performance year after year.

Learn more about system architecture, liquid cooling, and deployment considerations by downloading our white paper on liquid cooling, which provides an overview of six common liquid cooling architectures. Also, be sure to access our latest reference designs, RD110 and RD111, in our reference design library. Co-developed with NVIDIA, they are engineered for Grace Blackwell GB300 NVL72 systems, supporting up to 142 kW per rack.

About the author

Amir Ibraheem, Senior Mechanical Systems Engineer, Data Centers Cooling
