Solving AI’s growing pains at the edge with modular data centers


Businesses across industries are increasingly leveraging artificial intelligence (AI) to tackle data-intensive tasks that are beyond human capacity, both in speed and in feasibility. This AI revolution is creating unprecedented demand for computing power at the edge, much like the surge in data collection and analysis just a few years ago. However, the scale is now far greater, with AI requiring significantly higher rack density at edge computing sites. This is where modular data centers come in, providing the agility and scalability organizations need to keep pace with AI’s rapid growth.

The role of modular data centers in supporting AI

AI is projected to grow at an annual rate of 33% between 2023 and 2030. Recent advances in generative AI are central to this boom, with organizations using the technology to replace or supplement human tasks and drive efficiency and productivity gains. Tools such as ChatGPT and Microsoft Copilot democratize AI, enabling just about anyone in an organization to leverage the technology.

The possibilities for AI use cases are almost limitless, ranging from healthcare to finance, manufacturing, transportation, and entertainment. Tasks such as creating presentations or compiling sales reports can now be quickly completed by feeding relevant data to an AI engine and asking it to organize the data in a specific way.

Organizations are also adopting AI technology for predictive analytics, intelligent supply chain management, and deeper personalization of customer service. The data requirements associated with AI are driving new chip and server technologies that result in much higher rack power densities, and with them a steadily growing demand for high-density compute power.

AI demand at the edge

AI creates vast business opportunities to develop and deploy new and exciting capabilities, but it also creates significant challenges in deploying the required data center hardware and infrastructure. It demands new approaches at the edge, such as the deployment of scalable modular data center infrastructure.

The reasons for placing AI capabilities at the edge are the same as those that drove companies to the edge in the first place:

  • Tighter control and security over company data
  • Low latency for real-time and near real-time tasks

AI is highly data-intensive because of its two main tasks: training and inference. Training involves feeding massive amounts of data to a model so it can learn patterns; the more representative data the algorithm receives, the more capable the model becomes. The trained model then performs inference, applying what it has learned to new data to solve problems and complete tasks.
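
To make the two phases concrete, here is a minimal sketch in Python. It uses only NumPy, and the toy model, synthetic data, and learning rate are illustrative assumptions, not a real AI workload; production training and inference follow the same pattern at vastly larger scale.

```python
import numpy as np

# --- Training: fit a toy linear model y = w*x + b from example data ---
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=200)                  # synthetic inputs
y = 3.0 * x + 1.5 + rng.normal(0, 0.5, size=200)  # noisy targets

w, b = 0.0, 0.0
lr = 0.01                                         # learning rate (illustrative)
for _ in range(500):                              # gradient descent on MSE loss
    err = (w * x + b) - y
    w -= lr * 2 * np.mean(err * x)
    b -= lr * 2 * np.mean(err)

# --- Inference: apply the trained parameters to unseen inputs ---
x_new = np.array([2.0, 7.5])
print(w * x_new + b)                              # predictions for new data
```

The compute profile differs between the two phases: training iterates over the whole dataset many times, which is why it drives the densest infrastructure, while inference is a single forward pass per request but must run close to the data for low latency.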

IT infrastructure challenges

From an infrastructure standpoint, the amount of computational power necessary is substantial. Consider that a traditional rack would have a capacity of around 10 kW, but we are now seeing requests for 50 kW to 100 kW. Schneider Electric, for instance, is developing a reference design leveraging nearly 90 kW racks; the design will be available later in 2024. Servers designed for AI not only feature multiple processors in close proximity, but also incorporate advanced chipsets that increase processing power and efficiency. This kind of density generates massive amounts of heat that traditional air cooling may not be able to remove.
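
A quick back-of-envelope calculation gives a sense of the scale. The per-server figure below is an assumption for illustration; actual AI server draw varies widely by configuration.

```python
# Illustrative rack-density arithmetic (assumed figures, not vendor specs)
legacy_rack_kw = 10    # traditional rack power budget
ai_rack_kw = 90        # high-density AI rack budget
server_kw = 10.2       # assumed draw of one multi-GPU AI training server

print(f"Legacy 10 kW rack fits {int(legacy_rack_kw / server_kw)} AI server(s)")
print(f"90 kW rack fits {int(ai_rack_kw / server_kw)} AI servers")
# Nearly all of that electrical power leaves the rack as heat to be removed.
```

In other words, a legacy 10 kW rack cannot host even one such server, while a 90 kW rack holds several, and virtually every watt drawn must be rejected as heat.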

Instead, liquid cooling technology is introduced to dissipate processor heat. Water or another liquid is routed to the chips through a Coolant Distribution Unit (CDU) to absorb the heat; the liquid in the secondary loop is then piped to a chiller unit and back to the CDU. This affects the infrastructure, requiring more pipes and manifolds to move the liquid through the racks and out of the building. The structural support needed for a separate chilled water loop, in addition to the usual cables for power and fiber, adds to the challenge of space in the data center, which is always at a premium. Now consider the challenges this creates in compact edge spaces.
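
The sizing of such a liquid loop follows directly from the heat equation Q = ṁ · c_p · ΔT. The sketch below is a hedged illustration assuming a 90 kW rack, water as the coolant, and a 10 K supply-to-return temperature rise; real CDU sizing depends on the coolant, plate design, and loop temperatures.

```python
# Required coolant mass flow from Q = m_dot * c_p * delta_T
# Assumptions: 90 kW of rack heat, water coolant, 10 K loop temperature rise
rack_heat_w = 90_000   # heat to remove, in watts
c_p = 4186             # specific heat of water, J/(kg*K)
delta_t = 10           # temperature rise across the loop, in kelvin

m_dot = rack_heat_w / (c_p * delta_t)  # mass flow in kg/s
print(f"{m_dot:.2f} kg/s, roughly {m_dot * 60:.0f} L/min of water")
```

Roughly 2 kg/s, or about 130 litres per minute, for a single rack illustrates why the extra pipes and manifolds are far from trivial in a compact edge footprint.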

With varied workloads, higher power capacities, improved cooling solutions, and complex IT rack configurations, next-generation AI data center infrastructure will need a forward-thinking design that can be engineered to adapt as requirements evolve.

Modular data centers to the rescue

AI workloads are estimated to represent 20% of total data center energy consumption by 2028. Modular data center solutions can address AI enterprise needs at the edge, providing the infrastructure, compute and electrical power, and cooling necessary for AI model training and inference.

Each unit delivers purpose-built modular infrastructure customized to specific use cases. The designs, which will be available later in 2024, can then be scaled into repeatable clusters for deployment wherever organizations need rapid, adaptable, scalable infrastructure to achieve strategic AI goals. As such, Schneider Electric helps clients solve AI challenges so they can fully leverage the technology to become more competitive.

As AI applications and use cases continue to expand, it’s no longer a question of if but when organizations will need to scale their IT infrastructure to meet the demands of power-, cooling-, and data-hungry AI solutions. To help your business become AI-ready, explore our Transitioning to AI-Ready Data Centers resource site, where you’ll find best practices, white papers, webinars, and more.
