Something happened on the way to sustainability: the race to AI. For years, we’ve discussed the urgent need to reduce our carbon footprint and focus on renewable energy sources. But then came the rogue wave of AI.
The introduction of generative AI has set corporate imaginations ablaze about how to transform operations, from content creation to product design to process automation. These are all good things, but there’s a catch: Advanced AI requires GPU-based accelerated computing and data center capacity to scale out, which in turn requires more grid power. That demand has triggered a rush to expand data center space at a time of immense pressure to reduce carbon footprints.
This raises questions about where to source the needed power and how AI inference will add to the challenge.

Where is the power?
I believe enough power is already available for data center expansion, but utilizing it in this way requires some counterintuitive steps. The industry would love to keep its carbon-neutral and net-zero commitments, but it is now considering “bridge power.” The term essentially means that data center operators will use whatever type of power is available, regardless of how carbon-friendly it is, to run their facilities, with the intention of phasing it out later as lower-carbon alternatives come online. We are also seeing the data center industry lead the way in offsetting these temporary carbon emissions with virtual power purchase agreements (VPPAs) and renewable energy credits (RECs).
In Northern Virginia’s Data Center Alley, which processes about 70% of the world’s internet traffic, a sort of time travel is taking place. To meet demand, Data Center Alley is looking to get power from coal-fired plants in West Virginia that had been scheduled for closure. In Abilene, TX, developers of a 360 MW data center have applied to build a natural gas plant at the site. The facility will be the first built by Stargate, a joint venture of OpenAI, Oracle, SoftBank, and investment firm MGX, and the developers plan to replace the natural gas bridge power with lower-carbon solutions as soon as they become available.
A matter of AI inference
As we deal with the power challenge, another question emerges: AI has focused so far on training models, but when is the inference wave coming, and how much power will it require?
In my view, the inference wave has just begun. Whenever you ask AI to generate content, give an opinion, or produce a result, that’s inference. The super-dense large language models (LLMs) built during training are being leveraged to produce these results, with training infrastructure doing double duty for inference. Despite the belief that small AI data centers will sprout at the edge for inference, we’re seeing that the AI chip clusters built for training are being dedicated to inference as they age. And that’s taking place in traditional data centers.
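To make “inference” concrete, here is a minimal sketch in Python using the open-source Hugging Face transformers library, with a small open model (gpt2) standing in for the dense LLMs described above; the model choice and settings are purely illustrative, not a reflection of any particular operator’s stack.

```python
# Minimal illustration of AI inference: a trained model answering a request.
# Assumes the Hugging Face "transformers" library; gpt2 is a small stand-in,
# while production inference runs far larger models on GPU clusters.
from transformers import pipeline

# Load pre-trained weights. Training already happened; this step only
# reads the finished model from disk.
generator = pipeline("text-generation", model="gpt2")

# Every call like this is inference: one forward pass per generated token,
# each drawing compute (and therefore power) in the data center serving it.
result = generator("Data centers need power because", max_new_tokens=30)
print(result[0]["generated_text"])
```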
AI solutions and constraints
There’s no question that the need for power is going to intensify, especially when Agentic AI starts gaining traction. Agentic AI is a form of advanced inference that will power autonomous systems, enabling them to make decisions with minimal or no human intervention. We don’t know exactly how much change Agentic AI will bring or how much it will affect power requirements, but we know demand won’t go down.
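As a rough sketch of why agentic AI multiplies inference load, consider the control loop below. The helper functions (observe, call_model, apply_action) are hypothetical placeholders, since real agent frameworks vary widely.

```python
# Hypothetical sketch of an agentic AI control loop. Unlike a single chat
# request, an agent may call the model many times per task, multiplying
# the inference (and therefore power) cost.

def observe() -> str:
    """Placeholder: gather the current state of the system being managed."""
    return "sensor readings and task status"

def call_model(prompt: str) -> str:
    """Placeholder: one LLM inference call; each call consumes GPU cycles."""
    return "next action suggested by the model"

def apply_action(action: str) -> bool:
    """Placeholder: execute the action; report whether the goal is met."""
    return True

def run_agent(goal: str, max_steps: int = 10) -> None:
    # Each loop iteration is at least one inference call: a task a human
    # would phrase as one request can cost many model invocations.
    for step in range(max_steps):
        state = observe()
        action = call_model(f"Goal: {goal}. State: {state}. What next?")
        if apply_action(action):
            break

run_agent("rebalance cooling load across the data hall")
```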
Although enough power is currently available, we must prepare for the future. We need power solutions precisely matched to the IT load to help meet the challenge. For instance, Schneider Electric provides on-site battery energy storage systems (BESS) for data centers and edge sites. The batteries can be charged with renewable power that the sites can use instead of drawing power from carbon-emitting sources.
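A back-of-the-envelope sketch of the BESS idea: charge the battery when on-site renewable output exceeds the IT load, discharge when it falls short, and count the grid draw avoided. All figures below are made up for illustration and are not Schneider Electric product data.

```python
# Illustrative dispatch for an on-site battery energy storage system (BESS).
# Hourly energy in MWh; all values are invented for illustration only.
site_load = [10, 10, 10, 10, 10, 10]   # assumed steady IT load per hour
solar     = [ 0,  6, 14, 16,  8,  0]   # assumed on-site renewable output

battery = 0.0
capacity = 20.0     # assumed usable battery capacity, MWh
grid_draw = 0.0     # carbon-emitting grid energy actually used

for load, renewable in zip(site_load, solar):
    if renewable >= load:
        # Surplus renewable power charges the battery (up to capacity).
        battery = min(capacity, battery + (renewable - load))
    else:
        # Shortfall is covered first from the battery, then from the grid.
        shortfall = load - renewable
        from_battery = min(battery, shortfall)
        battery -= from_battery
        grid_draw += shortfall - from_battery

no_storage = sum(max(0.0, l - r) for l, r in zip(site_load, solar))
print(f"Grid energy used with BESS: {grid_draw:.1f} MWh; "
      f"without storage: {no_storage:.1f} MWh")
```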
Emergency generators are also being managed as strategic assets to power data centers during times of grid stress. Powering these generators with biodiesels made from vegetable oils, animal fats, and recycled cooking oils will help minimize emissions at data centers and edge sites.
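The arithmetic behind that claim, sketched roughly below, is simple. The emission factors used are commonly cited approximations, not measured site data; actual savings depend on the fuel source, the engine, and the accounting method.

```python
# Rough comparison of generator fuel emissions. Emission factors are
# commonly cited approximations, used purely for illustration.
runtime_hours = 48           # assumed generator runtime during grid stress
fuel_rate_l_per_h = 200      # assumed fuel consumption, liters/hour

fossil_diesel_kg_per_l = 2.68   # approximate tailpipe CO2 for fossil diesel
biodiesel_reduction = 0.80      # assumed lifecycle CO2 reduction for
                                # vegetable-oil / waste-oil biodiesels

liters = runtime_hours * fuel_rate_l_per_h
fossil_co2 = liters * fossil_diesel_kg_per_l
bio_co2 = fossil_co2 * (1 - biodiesel_reduction)

print(f"Fossil diesel: {fossil_co2/1000:.1f} t CO2; "
      f"biodiesel: {bio_co2/1000:.1f} t CO2 over {runtime_hours} h")
```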
As we work on these solutions, we shouldn’t ignore factors that could slow down AI adoption, including supply backlogs and geopolitical issues. Currently, Nvidia is experiencing a backlog for its latest generation of AI GPUs, and it could take until 2026 for the vendor to catch up. Export controls and efforts to localize manufacturing may also affect availability. Inevitably, some AI projects will be affected.
Adapting to the unexpected
So, yes, the race to AI is on, but it’s just beginning. It’s bound to evolve in unexpected ways, just as generative AI has created unforeseen demand. Schneider Electric will continue working with partners like Nvidia on solutions to support AI demand while helping data center operators develop and execute sustainability strategies. We’ll do our best to expect the unexpected.