[Podcast] AI governance decoded

Where does AI governance stop? 

Guess what? It doesn’t. AI governance extends far beyond setting policies and frameworks. In this episode of the Schneider Electric AI at Scale podcast, host Gosia Gorska discusses this topic with Ania Kaci, Senior Principal, Responsible AI Leader at Schneider Electric.

Ania shares her journey from AI research to leading responsible AI practices, explaining our comprehensive AI governance structure and multidimensional risk assessment framework. 

Moreover, she talks about the importance of involving diverse teams, including business, legal, and compliance experts, to fully address potential AI risks. The episode also covers the significance of transparency and explainability in artificial intelligence systems, with practices like system cards and early user involvement. 

Lastly, she discusses her efforts to close the gender gap in AI, underscoring the importance of diversity in building equitable AI systems. 


Listen to the AI at Scale podcast

Listen to the Ania Kaci: AI governance decoded episode. Subscribe to the show on your preferred streaming platform (Apple Podcasts, Spotify) or play the episode on YouTube.

Transcript

Gosia Gorska: This is Gosia Gorska, the host of the Schneider Electric AI at Scale podcast. Today I meet Ania Kaci, senior principal responsible AI leader from Schneider Electric. Welcome, Ania. 

Ania Kaci: Hello, Gosia. Hello, everyone. I’m delighted to join you today. 

Gosia: Yeah, I’m super happy as well. So Ania’s journey in AI has been quite interesting. After completing her PhD in AI, she spent several years building and architecting AI systems and leading technical teams. But what really caught her attention was seeing how these powerful tools were being deployed in the real world. That led her to her current focus, operationalizing responsible AI practices. She has been helping organizations understand how to build AI that’s not just powerful, but also trustworthy and aligned with human values, combining cutting-edge technology with real-world impact. Today, she’s in charge of responsible AI strategy, definition, and operationalization at Schneider Electric. So what is one thing that I didn’t mention, Ania, that’s important for getting to know you better?

Ania: Yeah. So there are two things. The first one is that I’m proud to co-lead Women in AI, a nonprofit organization dedicated to closing the gender gap in AI. Our work focuses on creating pathways for women to enter and thrive in the AI field because diverse teams build better and more equitable systems. And the second thing is that I teach AI ethics at Transport, where the main goal is to help the next generation understand the profound responsibility we have when we develop these powerful systems.

Journey to responsible AI leadership

Gosia: Exactly. So we’re talking to the right person today. My first question is actually about your personal journey, because I was curious: how did you find yourself in the middle of AI governance and risk management? How did you become a responsible AI leader? What sparked your passion for this topic?

Ania: Well, my path to becoming a responsible AI leader has been shaped by both my technical foundation and an evolving awareness of AI’s impact. I began with an engineering degree followed by a PhD in AI, which gave me the technical expertise to understand these systems from the ground up. Then I spent several years as an AI delivery manager, working across diverse sectors where I witnessed how AI solutions transform industries but also create new challenges. The turning point came when I joined a big tech company as a responsible AI leader. There, I developed frameworks to ensure AI systems were not just powerful but also fair, transparent, and robust, and I addressed complex questions about bias, privacy, and societal impact, questions that often had no simple answers.

Inside Schneider Electric’s AI governance 

Gosia: Yes. And I expect that we will talk a bit about the frameworks and the structures in today’s conversation. Could you walk us through your current AI governance structure, who is involved, how decisions are made, and how accountability is maintained? For the listeners, as you may have discovered in past episodes, at Schneider Electric, we have a dedicated organization called AI Hub that is developing AI solutions for our customers, but also for Schneider’s internal functions. Ania is part of the team managing responsible AI governance and she is very well placed to tell us more about how this is organized. 

Ania: Our AI governance structure consists of three key components working together. The first is the Responsible AI Committee, which sets the vision and strategic direction for our AI policies; it comprises executive leadership who make high-level decisions about critical AI systems and AI application boundaries. Then there is the Responsible AI Office, which includes diverse experts from compliance, policy, legal, and technical domains, as well as SMEs, who serve as the bridge between strategy and implementation. Finally, the Responsible AI team, which I lead, handles day-to-day implementation: it supports AI risk assessments, monitors AI systems, and ensures compliance with regulations and established guidelines. Accountability is maintained through regular reviews, documentation of decisions, and tracking of responsible AI performance metrics against established benchmarks.

Comprehensive AI risk frameworks 

Gosia: Yes. And as you may hear, there are several parts to it. What is particularly interesting for me is that you actually have legal people inside the team, so you can ensure that from the very beginning, from the conception of a given AI use case, you have the right people in place to advise on whether we can develop the application further and how it relates to current legislation. This is quite important to highlight. If we take a closer look at risk assessment, do you use a specific framework when deploying AI?

Ania: Yes, as you said, we involve different people and different teams to create these AI risk assessments and conduct them so that we fully address all potential AI risks. We need expertise from the business to identify risks that go beyond the AI system or feature itself, and we need to assess the technical risks and the impact of the AI system, which is most of the time embedded in existing applications or products. Our AI risk assessment process covers several dimensions: a framework that evaluates technical risks such as reliability and robustness, and ethical considerations such as fairness and transparency; and an impact assessment that analyzes potential effects on stakeholders. For our sector and the services we propose to our customers, we also include energy system safety and operational continuity for industrial automation, for example. The evaluation is based on a risk level: higher-risk applications, for critical infrastructure or safety systems, for example, undergo more rigorous reviews, including a defined escalation process. The first review is done by the Responsible AI team; then we can escalate to the Responsible AI Office, and for the most critical applications to the Responsible AI Committee. We have domain-specific checklists tailored to energy management and industrial automation contexts and standards, and we conduct regular reassessments throughout the AI system life cycle. We start with what we call AI risk screening early in the design phase and then keep assessing, to ensure that we identify the AI risks and put the correct mitigations in place for effective management of the potential AI risks.
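
For illustration only, here is a minimal sketch of how the tiered escalation Ania describes could be modeled in code. The names (`RiskLevel`, `review_path`) and the mapping of medium-risk systems to the Responsible AI Office are hypothetical assumptions made for this sketch, not Schneider Electric’s actual tooling or policy.

```python
from enum import Enum

class RiskLevel(Enum):
    """Hypothetical risk tiers; the actual categories are not specified in the episode."""
    LOW = 1     # e.g., lighting-control optimization
    MEDIUM = 2  # e.g., predictive maintenance
    HIGH = 3    # e.g., critical infrastructure / safety systems

def review_path(level: RiskLevel) -> list[str]:
    """Cumulative review path mirroring the escalation described in the episode:
    every assessment starts with the Responsible AI team, and higher-risk
    systems escalate further. The medium->Office mapping is an assumption."""
    path = ["Responsible AI team"]
    if level.value >= RiskLevel.MEDIUM.value:
        path.append("Responsible AI Office")
    if level is RiskLevel.HIGH:
        path.append("Responsible AI Committee")
    return path

# Example: a critical-infrastructure system passes through all three tiers.
print(review_path(RiskLevel.HIGH))
```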

Gosia: Mm-hmm. OK. And when you say “we,” what do you mean exactly? I’m very curious who exactly is involved in this risk assessment evaluation? 

Ania: Yeah. So the Responsible AI team has an oversight role. We have the expertise on the potential risks of AI systems, but it’s a team effort, it’s teamwork. Everyone is involved: the AI Hub, the business, and compliance and legal. As I said earlier, we have an escalation process. First, the product manager conducts the AI risk assessment with the business. Then, as the Responsible AI team, we review the results and ensure that we haven’t missed any questions or information. We clearly specify the intended use of the system, because it really impacts the level of risk. Then we conduct reviews with SMEs, legal, compliance, and the Responsible AI Office, to bring in different, diverse perspectives and ensure that we fully address all the potential risks of a given AI system.

Integrating business insights in AI risk management 

Gosia: Yes. And what really captured my attention is that the business is involved throughout the whole risk evaluation process, because they are the ones who know the customer; they know exactly how the product will be used and in which context. So I guess it’s quite important to have their perspective. This is what I’ve been reading about: some systems are launched into production and then face challenges or even risks that the team developing the application didn’t think about, just because they were a group of data scientists. From a technical perspective there was no risk, but they didn’t know exactly how the application would be used in production, in real life, and that was the actual source of risks they were unable to anticipate. So I guess you are hitting exactly this point and mitigating the risks that come from having only data scientists working on it. Here we also have business people involved, right?

Ania: Yeah, exactly. We need diverse perspectives on AI risk. Most of the AI risks that we see today are more traditional risks, I would say, applicable to classic, traditional software, around the safety and security of systems, so we rely on existing risk frameworks and expertise in risk management. Then, of course, we identify what is amplified by AI technologies, and generative AI mostly. What is new relates more to bias and to the fact that these AI systems are non-deterministic: we can face bias in the historical data used to train these models, and bias in the validation process, with over-reliance on the AI outcome. We need to bring together all the expertise, business and technical experts, legal, and compliance, to ensure that we are aligned with the risk management we have within Schneider and are fully addressing these known AI risks. One important initiative is engagement with external stakeholders in the ecosystem, such as standards bodies, to work on translating legal obligations into technical standards, to ease implementation and ensure that we operationalize responsible AI in a way that answers regulatory obligations and ensures AI risk management.

Translating regulations into practical actions 

Gosia: Talking about standards and regulations, how has your approach to AI governance evolved as regulations like the European Union AI Act have developed over time? 

Ania: What we have done is a responsible AI maturity assessment, to analyze the practices we have around responsible AI. Responsible AI is not only about regulatory compliance; there is a baseline: ensuring that our AI systems are accurate and robust is essential. We assessed the maturity of our responsible AI practices and identified the gaps. The most important initiative is to involve the legal teams early in the design phase. Previously, their involvement came at the end, right before deployment, when they worked on contractual clauses and how to engage with customers. Now we bring them in early in the design phase, to ensure that we fully interpret and understand the regulations and see how to translate them into more practical, concrete actions to put in place.

Ensuring AI transparency and explainability

Gosia: So we have covered regulatory compliance; you mentioned involving colleagues from business, legal, and so on. But if we look at the practice of putting an application into production, do you have any specific practices in place to make AI systems more transparent and explainable? Because I guess after discussing the risks, which is one hot topic for everyone, the second one is: once you put it into production, how do you make sure that it’s explainable and transparent? As you mentioned, these systems are non-deterministic. So what can we do about it?

Ania: The first step on AI risk is to ensure that we identify the right level of risk, because our process depends on it. For low-risk optimization, like lighting controls, we can move faster. For medium-risk applications, like predictive maintenance, we require extensive priority testing. And for critical systems that could affect core infrastructure, we mandate what we call a shadow deployment, so that operators and end users can understand and validate the output of the system. We have also worked on what we call our system cards; think of them as detailed ID cards for each AI system. Hallucination, for example, is a regular challenge when we deploy generative AI. The system card is part of our AI governance strategy, designed as a comprehensive tool for managing both responsible and ethical AI. It provides an overview of the AI system and gives all stakeholders a common way to discuss, understand, and assess it. We are transparent: we document the performance of the AI system, the KPIs it uses, the success criteria, the limitations of the system, the data flow, and the level of human oversight. Depending on the level of risk, we ensure that everyone has the right knowledge of the AI system and of its risks. And early in the design phase, we ask users which information and explanations they need to better understand the outcomes and adopt the AI system.
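
As a rough illustration of what such a system card might capture, here is a minimal sketch as a Python dataclass. The field names simply mirror the items Ania lists (overview, intended use, performance and KPIs, success criteria, limitations, data flow, and human oversight); they are hypothetical, not an actual Schneider Electric template, and the example values are invented.

```python
from dataclasses import dataclass, field

@dataclass
class SystemCard:
    """Hypothetical 'ID card' for an AI system, mirroring the items
    mentioned in the episode."""
    system_name: str
    overview: str                  # what the system does and for whom
    intended_use: str              # drives the risk level, per the episode
    risk_level: str                # e.g., "low", "medium", "high"
    performance_kpis: dict = field(default_factory=dict)   # metric -> value
    success_criteria: list = field(default_factory=list)
    known_limitations: list = field(default_factory=list)  # e.g., hallucination
    data_flow: str = ""            # where data comes from and goes
    human_oversight: str = ""      # level of human review required

# Invented example for illustration only.
card = SystemCard(
    system_name="predictive-maintenance-advisor",
    overview="Flags equipment likely to fail within 30 days.",
    intended_use="Decision support for maintenance planners.",
    risk_level="medium",
    performance_kpis={"precision": 0.91, "recall": 0.84},
    success_criteria=["Reduce unplanned downtime"],
    known_limitations=["May miss rare failure modes"],
    data_flow="Sensor telemetry -> feature store -> model -> dashboard",
    human_oversight="Planner reviews every recommendation",
)
print(card.system_name, card.risk_level)
```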

Balancing technical complexity with transparency 

Gosia: Yeah, so this is definitely very helpful. But how about people who are really non-technical? Is there any way that you can balance this technical complexity with the need for transparency for people who are really not that familiar with all the technical details of AI systems? 

Ania: One solution is really to have this governance structure with diverse backgrounds and expertise. In a way, it works like an internal test: when we present the frameworks and the work of the Responsible AI team and the Responsible AI Office, we make sure we don’t use too many technical terms and descriptions. We test and validate the templates with all the stakeholders and everyone involved in the Responsible AI Office. Once we have validated these templates and the level of information needed to be transparent and explain how the system works, we move forward and deploy the frameworks we created. The goal is to have these validation steps and ensure that all stakeholders understand what we are talking about. It’s quite challenging, because technical teams tend to use descriptions that are too technical, but we work together to simplify, and it’s part of the awareness effort. We need to explain simply what the potential of AI is and what the risks are, and we make sure we identify the potential risks of the AI applications we use in our daily work.

Educating and upskilling on AI risks 

Gosia: I can imagine that additional activities we have implemented in the company, like the AI literacy programs rolled out across the organization, are also helping in these kinds of discussions. Do you see that this really helps us speak the same language? Even with non-technical teams involved in application development, do they start to speak the same language and understand exactly what the limitations and the benefits of AI are?

Ania: Yes. It’s important not to focus only on policies, processes, and technology while neglecting people, education, and culture, which are really important. Without driving this cultural change, responsible AI frameworks and initiatives will be ineffective. People across Schneider Electric need to understand why responsible AI matters for them, why they should care about the potential risks of AI, and how they can do the right thing. Our role as the Responsible AI team is to educate, upskill, and empower our colleagues on how to develop and use AI responsibly; it’s our top priority. We have several programs: training for technical teams, for business, for leaders, and for all employees. We have also put in place awareness sessions and coffee discussions, and we work on playbooks, cheat sheets, and guidelines on responsible AI, to ensure that everyone has the right knowledge and information about AI and its risks.

Emerging AI challenges 

Gosia: Yeah, I can imagine that this is really helping a lot. If we look ahead, what emerging responsible AI challenges will require even greater industry-wide collaboration? You mentioned involving external teams as well, discussing and sharing around standards and regulations. If you look at the current landscape of AI systems and new AI models, what is the emerging AI challenge that you see?

Ania: AI is evolving rapidly, and the regulatory landscape as well. We need to engage and continuously watch what is going on and what the new technologies are. Are there additional risks that we aren’t tackling? Engaging with stakeholders and our peers is really important and crucial, especially because they have the same challenges and are working on the same kinds of AI systems: we are here to exchange and learn together. Engaging with standards bodies is also important, because it helps us understand and interpret the regulations and see what the key areas are. But we then need to translate them into concrete actions to operationalize effectively. We need to work together. We have some engagements like Impact AI, a nonprofit organization where we work with big tech companies and other companies in the same or different sectors, to share experiences and knowledge and create these guidelines. In the end, the goal is to ensure that AI is used and developed effectively, and it’s really important to work together.

Closing the gender gap in AI 

Gosia: While listening to you, I can tell that your work is very meaningful, and I can hear your passion for these topics. My final question would be: looking at your AI journey so far, what has been the most meaningful part for you personally?

Ania: Throughout my journey, I became increasingly convinced that responsible AI requires diverse perspectives. That’s why I’m teaching AI ethics and co-leading Women in AI, which is really important to me. These are opportunities both to share what I’ve learned so far, and what I’m still learning, and to learn from others with different experiences. The gender gap in AI is not just an equality issue; it’s a quality issue for the systems we build. It’s really important to me to keep involving women, bringing more women into AI, and fostering diversity, to ensure that we have fair, transparent, and inclusive AI systems. We have challenges around understanding and implementing regulatory obligations and working with our peers to translate them into concrete actions, but the one challenge I’m really working on is diversity: how to ensure diversity and close the gender gap in AI.

Gosia: OK, thank you so much, Ania. I wish you a lot of success in your mission. I think it’s really worth following and supporting. We all need AI systems that are responsible, that are ethical, and we need diverse teams to be able to build that. Thank you so much for this conversation. It was very insightful. 

Ania: Thank you, Gosia. Thank you for having me. 

Like what you hear?

The AI at Scale Schneider Electric podcast series continues!

The first Schneider Electric podcast dedicated only to artificial intelligence is available on all streaming platforms. The AI at Scale podcast invites AI practitioners and AI experts to share their experiences, insights, and AI success stories. Through casual conversations, the show provides answers to questions such as: How do I implement AI successfully and sustainably? How do I make a real impact with AI? The AI at Scale podcast features real AI solutions and innovations and offers a sneak peek into the future.  

