[Podcast] Frugal AI. The other side of the equation. 

This audio was created using Microsoft Azure Speech Services

Frugal AI: compromise, balance, and a wise approach 

Today, not many businesses operate with the term “frugal AI”, but many of them know that measuring the footprint of their AI solutions is something they should start taking care of. In the third episode of the AI at Scale podcast, we explain what a frugal approach to AI means and how it applies to real business use cases.  

Claude Le Pape-Gardeux, Data and AI Domain Leader at Schneider Electric, defines the frugality of AI solutions and shares his point of view on the “new energy” demand versus the current need to be energy efficient. In the show, Claude shares with Gosia Gorska, the host of the podcast, useful tactics for measuring AI successfully and effectively. The second part of the episode is dedicated to the continuous improvement of measurement methods and to cooperation with academic partners. 

Claude Le Pape-Gardeux - AI at Scale

Listen to the third episode of the AI at Scale podcast.  

Listen to the episode “Claude Le Pape-Gardeux: Frugal AI. The other side of the equation.” Subscribe to the show on your preferred streaming platform (Spotify, Apple Podcasts). A full transcript of the episode is available below.

Transcript

Narrator: Welcome to the AI at Scale podcast. This is a show that invites AI practitioners and AI experts to share their experiences, challenges, and AI success stories. These conversations will provide answers to your questions: How do I implement AI successfully and sustainably? How do I make a real impact with AI? Our podcast features real AI solutions and innovations, all of them ready for you to harness, and offers a sneak peek into the future.

Gosia Gorska: Hi, I’m Gosia Gorska, and I’m the host of the Schneider Electric AI at Scale podcast. I’m really pleased to introduce Claude Le Pape-Gardeux, data and AI domain leader. Claude is coordinating the evaluation of new technologies, the recognition of technical experts, the management of research and development projects, and partnerships in the data and artificial intelligence domain in Schneider Electric. Welcome, Claude.

Claude Le Pape-Gardeux: Thank you, Gosia. Happy to be here.

Claude’s Journey in AI

Gosia: Claude, I invited you to the show because I know that you have vast experience in data and AI, not only from your work at Schneider, but also from 40 years in the field. I know that you received a PhD in computer science from University Paris XI and a management degree from Collège des Ingénieurs. During and after your PhD, you also worked at Carnegie Mellon University and Stanford University. Now, that’s really impressive.

Claude: Yeah, I had the chance over my career to go to several very good places, both in academia and in industry, and to work in different fields of artificial intelligence. I’ve been involved in rule-based programming, constraint programming, machine learning, and robot task planning and path planning at Stanford. So, this has been really great to work on very different topics with practical problems in mind. In particular, manufacturing scheduling, inventory management, long-term human resources planning, and so on. And, of course, in Schneider Electric, things related to energy management.

Gosia: Well, those are really diverse topics, but what was your PhD thesis topic, then?

Claude: My PhD thesis topic was about the use of rule-based programming and constraint-based programming to improve manufacturing scheduling in specific plants. In particular, there was a plant manufacturing the coaches for the first high-speed trains in France. So, this was a good experience as well.

Gosia: I didn’t know about the trains. That’s really impressive. I can imagine you were a perfect candidate to join Schneider, as we are a manufacturing company and we still have many plants and distribution centers all over the world. I wanted to understand, with your diverse background in research and development areas, what was the most surprising thing about AI after your university years?

The Ups and Downs of AI

Claude: What has always been strange with AI is the ups and downs in its popularity. It seems that there are times when people think AI is a solution to every problem, and other times when people think it doesn’t work at all. This is totally unjustified. My view is that AI techniques constitute a toolbox. With this toolbox, you are able to solve some kinds of problems, and there are some kinds of problems for which it’s not appropriate. We need to be rational about these things and evolve to use AI in the best possible manner.

Gosia: Yes, I remember you told me once that your work has always been at the frontier between AI and other disciplines, like optimization, simulation, and statistics. Where does your passion for this domain come from? When did you realize this is what you want to explore?

Claude: When you start with a problem-solving point of view, having a problem that you want to tackle to do things less costly, more rapidly, or using less energy, you find out that techniques developed in different domains can be applicable to your problem. You also find that subject matter experts’ knowledge can be very useful in building the solution. So, I’ve often been in situations where I was building hybrid artificial intelligence solutions. In particular, between AI and operations research, there have been many cases where the right solution was a combination of both, done intelligently to achieve the best results.

Gosia: Yes, I see. The practical aspect was definitely important for you. The fact that you could apply these solutions to different areas. I invited you today to discuss the carbon footprint of artificial intelligence, as this is a topic we hear more about. Everyone has high hopes for AI as a solution to many challenges, but people also raise concerns about the energy demand for AI. I saw a concept called frugal AI. How would you define frugality of AI?

Frugal AI and Its Importance

Claude: Yes, there are several aspects to your question. First, it’s a question for AI but also for digital solutions in general. The carbon footprint of using digital solutions is increasing. Some years ago, it was negligible, and now it’s no longer negligible. So, we need to care. For AI, there can be a big difference between using large models, which are expensive to develop and use, and simpler models that require less energy. The big question is whether you want a big model that can do a lot of things or a small model that can do fewer things. Frugality means obtaining the results you want without using more resources than needed. This means having the right amount of data and complexity in the model, depending on the problem you want to solve.

Gosia: But I had the impression that this term is not widely known and that there is maybe some misconception about what exactly it means.

Claude: Yes, the term has emerged recently because people are becoming conscious of the electricity consumption of AI. There are debates about how to define frugal AI. My preferred definition is to achieve a goal using the least resources possible. Others consider the overall balance between the resources used to create the AI system and the resources it helps save. For example, if you build an application to save energy or reduce carbon emissions, you should make a balance sheet of how much you use versus how much you save.

Gosia: So, for anyone wanting to apply AI or develop specific AI applications, there are decisions they need to make. How can they evaluate if their AI model is frugal?

Claude: The first question is whether to use AI or another technique. There are cases where you don’t need AI. For example, if you know the equations representing your problem and all the parameters, you might use linear programming from operations research. Data scientists need to know techniques from other domains to make a good choice. When AI is needed, you need to decide between using a lot of data for a precise model or less data for a less precise model. Consider the precision you need and the resources you put into building and using the model.
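
To make the choice Claude describes concrete: when the equations and parameters of a problem are fully known, an off-the-shelf linear-programming solver answers it directly, with no training data and no learned model. The sketch below is a toy dispatch problem with made-up numbers, offered as an illustration rather than an example from the episode.

```python
# Minimal sketch of the "no AI needed" case: a toy energy dispatch problem
# solved with linear programming. All numbers are illustrative placeholders.
from scipy.optimize import linprog

cost_per_kwh = [0.12, 0.08]        # cost of energy from each of two sources
A_eq = [[1, 1]]                    # source_1 + source_2 ...
b_eq = [100]                       # ... must cover a 100 kWh demand exactly
bounds = [(0, 60), (0, 80)]        # capacity limit of each source (kWh)

result = linprog(cost_per_kwh, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
print(result.x)      # optimal split between the sources
print(result.fun)    # minimum total cost
```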

Measuring AI Efficiency

Gosia: Are there any methods to measure if an AI approach is frugal or if there’s room for improvement?

Claude: Today, there’s no complete methodology for data scientists to ensure they are doing the best job. The first step is to measure. At Schneider Electric, we’ve been estimating the carbon footprint of our AI applications. For example, in microgrid management, we look at how much carbon emission reduction our microgrid enables, the embedded carbon of its components, and the carbon footprint of the digital solution managing the microgrid. This gives us a view of what happens and where we might need to improve.
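
The three terms Claude lists can be read as a simple balance sheet. The sketch below is not Schneider Electric’s methodology and uses placeholder figures; it only illustrates the arithmetic of weighing what a solution consumes against what it helps save.

```python
# Illustrative carbon balance sheet for an AI-managed microgrid.
# All figures are placeholders, not Schneider Electric data.
emissions_avoided_kg = 18_000   # emissions the microgrid helps avoid per year (kg CO2e)
embedded_carbon_kg   = 4_500    # embedded carbon of the components, amortized per year
digital_footprint_kg = 600      # footprint of the digital solution (training, inference, hosting)

net_benefit_kg = emissions_avoided_kg - (embedded_carbon_kg + digital_footprint_kg)
print(f"Net yearly carbon benefit: {net_benefit_kg} kg CO2e")
# A positive balance means the solution saves more than it costs to build and run.
```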

Gosia: I’m happy you shared this example. I can imagine a future where AI products have labels informing clients about their carbon emissions. What do you expect to find out from these evaluations?

Claude: We’re investigating other examples, like generative AI, which is a difficult case because many people use a big model developed once. It would be great to have standards explaining how to count and label these emissions. Last year, we worked through three examples, and moving from a few examples to a systematic methodology is a big question for the future. It’s important within Schneider Electric and more globally.

Gosia: Yes, I think it’s a great plan. You explained how to check if AI is the right solution and how to measure its efficiency. Now, I’d like to discuss continuous improvement. How can we improve efficiency over time?

Continuous Improvement in AI

Claude: Once you can measure things, you can ask how to improve. For example, in forecasting energy consumption of buildings, one way is to get a lot of data and build a specialized model. But can we do something simpler? We’ve started work on cold start forecasting, which tries to use little data and adapt models from other buildings. The study is not complete, but initial results are promising. Sometimes data exists but is costly to gather. We need to think about costs in monetary and environmental terms.
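
One possible reading of the cold-start idea, offered here as an assumption rather than the method studied at Schneider Electric, is to reuse a model fitted on a data-rich building and recalibrate only its offset with the few readings available for the new one.

```python
import numpy as np

# Hypothetical cold-start sketch: keep the temperature-to-consumption slope
# learned on a data-rich "source" building and re-fit only the offset with a
# handful of readings from the new building. All values are invented.
src_temp = np.array([2, 5, 10, 15, 20, 25, 30], dtype=float)
src_kwh  = np.array([420, 400, 360, 320, 290, 260, 230], dtype=float)
slope, intercept = np.polyfit(src_temp, src_kwh, 1)   # simple linear fit

new_temp = np.array([8.0, 18.0, 26.0])                # only three readings so far
new_kwh  = np.array([350, 280, 225])
new_intercept = np.mean(new_kwh - slope * new_temp)   # re-estimate the offset only

forecast_12c = slope * 12.0 + new_intercept           # forecast for a 12 °C day
print(round(forecast_12c, 1))
```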

Gosia: Can we imagine comparing buildings’ operations thanks to AI applications, even comparing them to others in the same region?

Claude: Yes, this can be done with simple models. For example, in office buildings, energy consumption is closely related to outside temperature. We’ve built a building energy modeling tool to see this relationship and compare buildings. This helps evaluate improvements disregarding yearly climate differences. Simple models can be very useful.
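
As a rough illustration of that comparison (again a sketch with invented data, not the Schneider Electric tool Claude mentions), a temperature-to-consumption regression fitted on one year can predict what a later year “should” have consumed given its own weather, so that any remaining gap reflects operational change rather than climate.

```python
import numpy as np

# Weather-normalized comparison sketch with invented data: fit consumption vs.
# outside temperature on year 1, predict what year 2 "should" have consumed
# given year 2's temperatures, and compare with what it actually consumed.
y1_temp = np.array([3, 7, 12, 17, 22, 27], dtype=float)
y1_kwh  = np.array([400, 370, 330, 300, 280, 260], dtype=float)
slope, intercept = np.polyfit(y1_temp, y1_kwh, 1)

y2_temp   = np.array([5, 9, 14, 19, 24, 29], dtype=float)   # a warmer year
y2_actual = np.array([360, 330, 295, 270, 250, 235], dtype=float)
y2_expected = slope * y2_temp + intercept                    # model's prediction for year 2

savings = np.sum(y2_expected - y2_actual)   # positive => real operational improvement
print(round(savings, 1))
```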

Gosia: It’s great we have this research inside Schneider, but do you also cooperate with academia?

Claude: Yes, we cooperate in several ways. We fund specific chairs or labs, and we have PhD students splitting their time between academic labs and Schneider Electric. This leads to academic papers, patents, and ideas that can go into our offerings. Academia provides strong competencies and new ideas.

Gosia: I’m sure students listening can see what research areas Schneider is involved in. What would be your final message to our listeners?

Claude: Remember that there are many AI techniques, as well as techniques from other fields. You need to choose the right technique for the problem. Focus on the compromises: the quality of your results versus the resources used. Sometimes the dimensions are the same, like saving energy, and sometimes they differ, requiring a compromise and the right person to decide the best trade-off.

Gosia: Thank you very much, Claude. Your insights help make AI less of a black box and more understandable, measurable, and improvable over time. Thank you for being with us today.

Claude: Thank you. Thanks again for the invitation.

Like what you hear? 

The AI at Scale Schneider Electric podcast is live! 

The first Schneider Electric podcast dedicated only to artificial intelligence is officially live and available on all streaming platforms. The AI at Scale podcast invites AI practitioners and AI experts to share their experiences, insights, and AI success stories. Through casual conversations, the show provides answers to questions such as: How do I implement AI successfully and sustainably? How do I make a real impact with AI? The AI at Scale podcast features real AI solutions and innovations and offers a sneak peek into the future. 
