Liquid-to-server cooling technology, which has been used for years in the computer gaming and high-performance computing communities, is slowly making its way into traditional data centers, bringing with it the promise of greater efficiency and more environmental flexibility.
Today most data center cooling involves cool air passing over heat sinks within the servers to remove the heat. This hot exhaust air then typically makes its way to an air conditioning unit, whether in the row, overhead or on the perimeter of the room. That unit rejects the heat to the outside environment and the process starts all over again.
As its name implies, liquid-to-server cooling brings the cooling fluid much closer to the servers or computer chips that need cooling. The technology can take one of two general approaches: Direct Liquid Cooling (DLC) or Total Liquid Cooling (TLC).
The direct liquid approach involves placing a small, fully sealed heat sink on top of the server board or chip that needs cooling. As the board generates heat, the heat is transferred into the heat sink, which is essentially a metal plate full of cool liquid. As the liquid inside heats up, tubes connected to the plate carry it to an external cooler that rejects the heat outdoors and routes the cooled fluid back to the heat sink.
With this approach, it’s possible to absorb about 40% to 60% of the heat generated by a server. The rest is removed via air, so you’ll still need air conditioners in the data center, much as we see today.
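To make that split concrete, here's a minimal sketch of the arithmetic. The function name and the 500 W server figure are illustrative assumptions, not vendor data; only the 40%–60% capture range comes from the discussion above.

```python
# Rough sketch of the DLC heat split described above.
# The server wattage and capture fraction are illustrative assumptions.

def dlc_heat_split(server_watts, liquid_capture_fraction):
    """Split a server's heat load between the liquid loop and room air."""
    liquid_watts = server_watts * liquid_capture_fraction
    air_watts = server_watts - liquid_watts
    return liquid_watts, air_watts

# Example: a 500 W server with DLC capturing 50% of its heat
liquid, air = dlc_heat_split(500, 0.5)
print(liquid, air)  # 250.0 W to the liquid loop, 250.0 W still handled by room air
```

The remainder going to air is why a DLC room still needs conventional air conditioning, just a much smaller amount of it.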
The total liquid approach involves no air-cooled components. Instead, the server is completely immersed in a dielectric fluid or mineral oil solution that absorbs heat. In practice, it typically involves taking an entire IT rack of servers and laying it on its back in a tub full of fluid, something like a bathtub, with network and power cabling hanging from rails above. All the heat generated by the servers is absorbed into the fluid and, once again, the fluid is continually pumped away to be cooled and returned.
While the immersion approach has the advantage of removing the need for any sort of air cooling, it does complicate serviceability. Replacing any server components, for example, involves removing the board from the fluid, letting it dry, performing the replacement and then resubmerging the board in the fluid – a time-consuming process.
Another form of total liquid cooling involves placing each server board or blade inside a sealed housing full of a dielectric fluid. The exterior of that housing is a heat transfer plate with a secondary cooling liquid running through it to reject the heat from the fluid surrounding the server board. Each housing fits into a larger chassis, which collects the secondary cooling liquid and rejects the heat to the outside environment.
Both TLC techniques make the compute environment far less susceptible to fluctuations in humidity and air quality, such as dust and particles. And the sealed immersion technique is especially well-suited for ruggedized applications such as warehouses, factory floors and outdoor environments, such as for military use. Because the compute environment is fully sealed, there’s no concern about dust, sand or other contaminants getting in.
What’s more, because the TLC techniques require no additional fans or air cooling, they can survive far longer in the event of a power outage. Even without circulating the liquid, servers could expel heat for up to an hour or so before the fluid became too hot to absorb the load. That is typically plenty of time to restore main power, shift to backup power or gracefully shut down the IT equipment.
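A back-of-the-envelope energy-balance calculation shows why an hour-ish ride-through is plausible. Every number below is an illustrative assumption (tank size, fluid properties, allowable temperature rise, rack load), not a measured figure; the point is only that the thermal mass of a fluid-filled tub buys meaningful time.

```python
# Back-of-the-envelope ride-through estimate for an immersion tank
# during a power outage, with the pumps down so no heat is rejected.
# All input figures are illustrative assumptions.

def ride_through_seconds(fluid_mass_kg, specific_heat_j_per_kg_k,
                         allowable_rise_k, it_load_watts):
    """Seconds until the fluid's allowable temperature rise is used up,
    assuming all IT heat goes into the fluid (Q = m * c * dT)."""
    stored_joules = fluid_mass_kg * specific_heat_j_per_kg_k * allowable_rise_k
    return stored_joules / it_load_watts

# Assumed: 500 kg of mineral oil (c ~ 1900 J/(kg*K)), a tolerable
# 30 K temperature rise, and a 10 kW rack load
t = ride_through_seconds(500, 1900, 30, 10_000)
print(f"{t / 60:.1f} minutes")  # 47.5 minutes - roughly the "hour or so" window
```

Halve the load or double the tank and the window stretches well past an hour, which is why outage behavior scales with how generously the tub is sized.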
Liquid cooling, particularly TLC, is also far quieter than traditional data center cooling because it requires fewer fans to move air, or even none at all. That also makes it more energy efficient, along with the fact that it greatly reduces or eliminates the need for compressors.
What’s more, the heated liquid that immersion systems generate can be used to supply heat to radiators in offices or other buildings. They also allow for greater flexibility in data center design, because there’s no longer a need for hot aisle/cold aisle configurations; put your racks wherever you want.
I’m hearing rumblings in the market that DLC and TLC systems can save anywhere from 40% to 90% on data center cooling costs. For that reason, many seem to think the technology’s adoption is inevitable.
To be sure, we’re in the early phases of this market, but it’s certainly one to keep an eye on. To find out more about new solutions, take a look at what Iceotope is doing.
I’m interested to learn what others are hearing about liquid cooling. Have you seen or heard of any good examples? Which type seems to be working best? Are there any more downsides or benefits that we need to consider? Most important, do you have any plans for the technology? Let’s get a conversation going, I’m looking forward to your comments below.