How to design AI systems that balance energy, cost, and performance 

by Jacques Kluska, Principal, Responsible AI Specialist

As AI touches every single industry, it’s becoming increasingly important to optimize systems for performance, cost-effectiveness, and sustainability. At Schneider Electric, we understand that building AI solutions requires a delicate balance between these factors. With our expertise in lifecycle assessments for hardware, we’ve applied similar principles to the design and deployment of AI systems. 

While AI sustainability standards are still taking shape, that doesn’t mean you have to wait to make choices. Every decision—how you source and prepare data, which models you select, where you run them, how often you retrain, what hardware you provision—directly affects energy use, emissions, latency, and spending.  

Drawing on our experience as the world’s most sustainable corporation, I’ll outline how we evaluate each design choice through the lens of sustainability, so you can see how deliberate engineering decisions translate into more efficient, responsible AI systems.

As outlined in the previous blog, the first decision to make is whether AI is needed for the task at all. We advocate for using AI only where necessary. If AI is indeed the most suitable technology, here are the steps to follow: 

Step 1: Measure what matters (and acknowledge gaps)

Before diving into AI system design, it’s essential to measure the factors that matter most:  environmental impact, cost, latency, and overall performance.  

Our pragmatic scalable approach involves measuring what we can directly, using energy consumption and CO₂ footprint data where available, and estimating the rest (as some data on resource usage across the full lifecycle may not be available). We use tools and proxies responsibly, documenting our assumptions to ensure transparency and provide actionable insights for improvement. 
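As a concrete illustration, the "measure what you can, estimate the rest" approach can be sketched in a few lines of Python. All figures below (the per-request energy proxy, the grid intensity) are illustrative assumptions, not Schneider Electric data; the point is that every estimate carries its assumptions with it.

```python
# Sketch of a "measure directly, estimate the rest" energy audit.
# Every intensity and proxy figure here is an illustrative placeholder;
# substitute metered values and regional data where available.

from dataclasses import dataclass, field

@dataclass
class WorkloadEstimate:
    name: str
    energy_kwh: float                 # measured or estimated energy use
    measured: bool                    # True if energy_kwh was directly metered
    assumptions: list = field(default_factory=list)

def co2_kg(workload: WorkloadEstimate, grid_intensity_kg_per_kwh: float) -> float:
    """Convert energy use to a CO2 estimate: energy x grid carbon intensity."""
    return workload.energy_kwh * grid_intensity_kg_per_kwh

# Example: one metered training run, one inference workload estimated via a proxy.
training = WorkloadEstimate("fine-tune", energy_kwh=12.0, measured=True)
inference = WorkloadEstimate(
    "monthly inference", energy_kwh=3.5, measured=False,
    assumptions=["0.05 Wh/request proxy", "70k requests/month"],
)

GRID = 0.4  # kg CO2 per kWh -- hypothetical regional average

for w in (training, inference):
    tag = "measured" if w.measured else "estimated"
    print(f"{w.name}: {co2_kg(w, GRID):.2f} kg CO2 ({tag})")
    for a in w.assumptions:
        print(f"  assumption: {a}")
```

Keeping the assumptions attached to each estimate is what makes the numbers auditable and improvable over time.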

Step 2: Choose the right model for the job 

The next step is selecting the appropriate AI model for the use case. AI is not one-size-fits-all. Different tasks require different models, and it’s essential to match the model location and size to the use case. 

  • Cloud: Cloud models scale well for heavy compute tasks, but they come with increased costs due to data transfer and storage, which vary based on the location of the cloud service, the underlying hardware and the amount of transferred data. 
  • Edge: Edge models are deployed closer to the point of use, reducing latency and enabling real-time decision-making. This setup also helps minimize data transfer, making it more energy efficient. 
  • Embedded: For applications on constrained hardware (e.g., sensors or conveyor belts), embedded models, often compressed and task-specific, offer energy-efficient solutions without compromising performance. 

Smaller, task-specific models are often more energy-efficient than general-purpose models, reducing both energy consumption and operational costs. By right-sizing AI models for specific tasks, organizations can lower their environmental footprint while enhancing overall system performance. 
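The cloud/edge/embedded trade-offs above can be captured as a simple decision rule. The thresholds below are illustrative assumptions for the sketch, not a Schneider Electric specification; real projects would tune them to their own latency budgets and hardware.

```python
# Toy decision helper mapping a use case to a deployment tier, following
# the cloud / edge / embedded trade-offs described above. The numeric
# thresholds are illustrative assumptions only.

def choose_tier(latency_ms_budget: float, on_device_memory_mb: float,
                data_volume_gb_per_day: float) -> str:
    # Severely constrained hardware (e.g. a sensor) -> compressed embedded model.
    if on_device_memory_mb < 64:
        return "embedded"
    # Tight latency or heavy data movement favours running close to the source.
    if latency_ms_budget < 100 or data_volume_gb_per_day > 50:
        return "edge"
    # Otherwise the elasticity of the cloud wins for heavy, bursty compute.
    return "cloud"

print(choose_tier(latency_ms_budget=20, on_device_memory_mb=512,
                  data_volume_gb_per_day=5))    # tight latency -> edge
print(choose_tier(latency_ms_budget=500, on_device_memory_mb=8,
                  data_volume_gb_per_day=0.1))  # sensor-class -> embedded
print(choose_tier(latency_ms_budget=2000, on_device_memory_mb=2048,
                  data_volume_gb_per_day=1))    # relaxed -> cloud
```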

Step 3: Optimize AI for real-world constraints 

One of the important factors in sustainable AI deployment is understanding the regional electricity mix. The environmental impact of running AI models depends on where the energy comes from. For instance, countries like France, with a substantial share of nuclear energy, or Sweden, with high renewable energy adoption, provide AI systems with a lower carbon footprint compared to regions relying on fossil fuels. It is thus important to include the region of deployment in the environmental impact calculation. 
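To make the effect of the grid concrete, here is the same workload costed under three hypothetical grid carbon intensities. The intensity figures are rough illustrative ballparks, not authoritative data for any country.

```python
# Same workload, different grid: illustrative comparison of the CO2 from
# 100 kWh of compute under hypothetical regional carbon intensities.
# These figures are rough illustrations, not authoritative grid data.

GRID_INTENSITY_KG_PER_KWH = {       # assumed averages, for illustration only
    "France (nuclear-heavy)": 0.06,
    "Sweden (renewables-heavy)": 0.04,
    "Fossil-heavy grid": 0.70,
}

WORKLOAD_KWH = 100.0

for region, intensity in GRID_INTENSITY_KG_PER_KWH.items():
    print(f"{region}: {WORKLOAD_KWH * intensity:.1f} kg CO2")
```

Under these assumed intensities, the identical workload emits more than ten times as much CO₂ on the fossil-heavy grid, which is why the deployment region belongs in the calculation.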

However, this optimization has to factor in other parameters, such as data residency. When deploying AI systems, it is essential to consider the legal and regulatory landscape of the region, particularly regarding data residency and cross-border data transfers. Organizations must ensure compliance with applicable laws on data localization and sovereignty, and conduct thorough risk assessments to safeguard personal and sensitive data when transferring it across jurisdictions. These measures help uphold privacy standards and build trust in sustainable AI practices.

Step 4: Factor in regulatory requirements and sustainability initiatives 

At Schneider Electric, we are proactively participating in the evolving regulatory landscape. We are involved in initiatives like the Sustainable AI coalition and the Green Software Foundation, and we’re actively working on developing specifications like the Software Carbon Intensity (SCI) for AI. 

While the full lifecycle analysis of AI solutions is not a short-term task, the framework we’re building enables more accurate assessments and continuous improvement in sustainable AI practices. By combining our experience with external initiatives, we ensure that AI systems not only meet current environmental standards but also adapt to future regulatory demands. 

Step 5: Put recommendations into practice 

To implement these principles effectively, we recommend the following actions: 

  • Select task-specific models tailored to the job at hand. 
  • Optimize model size through techniques like pruning and quantization, reducing energy use and cost while maintaining performance. 
  • Track environmental metrics such as energy consumption and CO₂ footprint consistently. 
  • Optimize deployment taking data residency requirements of each project into account. 
  • Document assumptions and strive for continuous improvement in data collection and sustainability practices. 
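The payoff of the quantization recommendation above can be estimated with back-of-envelope arithmetic: an int8 weight occupies 1 byte versus 4 bytes for fp32, roughly a 4x reduction in model memory footprint. The 50M-parameter model below is a hypothetical example, not a real product model.

```python
# Back-of-envelope effect of weight quantization on model memory footprint:
# int8 weights take 1 byte each vs 4 bytes for fp32, a ~4x reduction.
# The parameter count is a hypothetical example.

def model_size_mb(n_params: int, bytes_per_param: int) -> float:
    return n_params * bytes_per_param / 1024**2

N_PARAMS = 50_000_000  # hypothetical 50M-parameter task-specific model

fp32 = model_size_mb(N_PARAMS, 4)   # full-precision weights
int8 = model_size_mb(N_PARAMS, 1)   # 8-bit quantized weights

print(f"fp32: {fp32:.0f} MB, int8: {int8:.0f} MB, "
      f"saving {(1 - int8 / fp32):.0%}")
```

Smaller weights also mean less memory traffic per inference, which is where much of the energy saving comes from in practice.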

Schneider Electric’s deep expertise in integrating AI within industrial contexts ensures that our customers benefit from solutions that are not only high-performing but also environmentally responsible, reliable, and scalable. 

Final thoughts  

Designing AI systems that balance energy, cost, and performance requires careful planning and a commitment to sustainability. At Schneider Electric, we focus on creating AI solutions that are both efficient and effective, ensuring that they meet the needs of our customers while minimizing environmental impact. As AI technology continues to progress, our approach and focus remains the same: we leverage emerging technologies in a responsible way, to further enhance sustainability.

If you’re interested in learning more, tune into the AI At Scale episode with Claude Le Pape on how frugal AI applies to real business use cases.
