Colocation

Making the Next Generation of Data Centers More Sustainable

Major social and business trends are emerging that have the capacity to alter the way hyperscale and colocation data center facilities are designed. That was my big takeaway from the Innovation Talk Webinar: The Evolution of Cooling Technology and Design for Data Centers and What Comes Next. These changes include a greater emphasis on sustainability, a steep increase in required rack power densities, and real estate portfolios with sufficient space to integrate on-site renewable generation. In particular, we discussed how innovative cooling technology can be applied to address these trends.

Access Innovation Talk OnDemand Webinar: The Evolution of Cooling Technology and Design for Data Centers and What Comes Next

At RED, we provide MEP and ICT engineering design services for buildings. We often engage with colocation and hyperscale data center providers to design facilities that optimize both floor space and energy efficiency. When a project is under construction, we oversee the installation to ensure it meets the specified requirements, and we then supervise testing and commissioning to make sure the data center performs as expected. In my role, I therefore have a bird's-eye view of how these market shifts are set to transform data center design.


Impact of Trends on Future Colocation Data Center Designs

Below is a summary of the key trends that I see influencing the coming generation of data center design:

A premium on sustainability

Sustainability has been a hot topic for many years; however, in the past there has often been some reluctance to invest CapEx in sustainability measures, particularly in the highly competitive world of colocation data centers. This is changing as governments and big businesses emphasize the importance of acting now to limit climate change. Data centers are responsible for significant greenhouse gas (GHG) emissions, contributing to Scope 2 emissions for the operator and Scope 3 emissions for the tenant. Sustainability is now a priority for several reasons: in the long run, most sustainability measures are cost effective, and data center operators and their tenants want both to reduce their carbon footprint for the benefit of the environment and to be seen as green.

On-site renewable energies to power data centers

For any colocation provider wishing to attract new tenants, energy supplied primarily from fossil fuels is becoming a negative topic in negotiations. Many hyperscalers now buy renewable energy; however, much of that energy is actually consumed elsewhere, while fossil fuel-based energy is consumed in their data centers. Buying renewable energy to be used by others is laudable, but to truly minimize the environmental impact of a data center project, we need to consider utilizing locally generated renewable power. Many data centers are built in out-of-town locations where land is relatively cheap. Why not buy additional land and locate some renewable power generation directly adjacent to the data center? This eliminates transmission losses, genuinely minimizes environmental impact, and offers a great corporate social responsibility story.

Liquid cooling to maximize per rack power densities

Around 20 years ago, colocation power densities averaged about 2 kW per rack. Today, some hyperscalers run average densities of about 10 kW per rack. At those densities, rows of 20 to 40 racks produce a huge amount of highly concentrated heat. While there is no hard limit on the load that can be air cooled in an individual rack, data center operators will soon reach the point where they can no longer physically force enough cool air down the cold aisle to meet the requirements of the IT equipment. Aisles could be widened to accommodate more air, but such a practice erodes the original purpose and cost benefit of packing as much power density into as small a space as possible.

Therefore, one solution is a move towards direct liquid cooling. One example is chassis-based immersion cooling, in which the server's main processing chips and motherboard are immersed in a dielectric fluid inside a sealed chassis; the fluid keeps components cool and, being a very poor conductor of electric current, does not short out electrical components. Water is supplied to a manifold at the back of each rack, typically at around 40°C, and returns at around 46°C to dry coolers or hybrid coolers, where it is cooled. This means 100-percent free cooling can be achieved in virtually all geographic locations. In addition, the return water is at a useful temperature for heating and could be exported to other buildings or a district heating system, both bringing in additional revenue and eliminating wasted energy. Chassis-based direct liquid cooling can typically operate at around 40 kW per rack. For a given IT load, this results in a 75-percent reduction in rack count as well as a far more space-efficient cooling system, enabling a smaller, lower-cost, faster-built data center with less embodied carbon, thereby further reducing the overall carbon footprint.
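The arithmetic behind those claims is easy to check. The following is a minimal sketch, not a design tool: it assumes the figures quoted above (10 kW per air-cooled rack, 40 kW per liquid-cooled rack, water supplied at 40°C and returned at 46°C) and uses the standard heat-transfer relation Q = m·cp·ΔT for the water loop. All function names are illustrative.

```python
import math

# Assumed figures, taken from the article text.
AIR_COOLED_KW_PER_RACK = 10.0     # typical hyperscale air-cooled density
LIQUID_COOLED_KW_PER_RACK = 40.0  # chassis-based direct liquid cooling

def racks_needed(it_load_kw: float, kw_per_rack: float) -> int:
    """Racks required to host a given IT load at a per-rack density."""
    return math.ceil(it_load_kw / kw_per_rack)

def rack_reduction(it_load_kw: float) -> float:
    """Fractional reduction in rack count moving from air to liquid cooling."""
    air = racks_needed(it_load_kw, AIR_COOLED_KW_PER_RACK)
    liquid = racks_needed(it_load_kw, LIQUID_COOLED_KW_PER_RACK)
    return 1 - liquid / air

def water_flow_l_per_s(heat_kw: float,
                       supply_c: float = 40.0,
                       return_c: float = 46.0) -> float:
    """Approximate water flow (litres/s, treating 1 kg of water as 1 litre)
    needed to reject heat_kw across the supply/return delta-T,
    from Q = m * cp * dT with cp of water = 4.186 kJ/(kg*K)."""
    cp_kj_per_kg_k = 4.186
    return heat_kw / (cp_kj_per_kg_k * (return_c - supply_c))

print(rack_reduction(1000))                 # 0.75: 100 air-cooled racks become 25
print(round(water_flow_l_per_s(40.0), 2))   # ~1.59 L/s per 40 kW rack
```

The 75-percent figure falls straight out of the 4x density increase; the flow-rate calculation shows why the narrow 6°C delta-T still works, since water carries heat far more effectively than air.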

Watch the OnDemand Webinar on How Cooling Technology is Evolving

Hyperscale and colocation providers are about to experience more disruption to data center real estate portfolios, cooling systems, and sustainability levels. For further insights on the role data center cooling plays in sustainability, watch the OnDemand webinar: The Evolution of Cooling Technology and Design for Data Centers and What Comes Next.


About the Author

Nick Vaney is the Chief Technical Officer and one of the founders of RED, driving technical expertise, innovation, and excellence in the field. A chartered mechanical engineer with over 30 years’ industry experience, Nick is an Accredited Tier Designer with the Uptime Institute. He has designed and managed projects and client portfolios throughout EMEA for both UK and International clients. https://www.linkedin.com/in/nick-vaney-0a086510/

