Cyber governance and AI: Balancing opportunity and risk through trustworthy AI

“Yesterday’s disruption is tomorrow’s opportunity” is a statement from Gartner, Inc.’s recent report, AI in Cybersecurity: Define Your Direction. The report shares insights into how companies can minimize disruption while managing risk and harnessing the value of artificial intelligence (AI).   

The concept of turning yesterday’s disruption into tomorrow’s opportunity is at the forefront of Schneider Electric’s approach to AI. We have a long, trusted history of working with customers in critical infrastructure industries, where advances in AI are critical for sustaining an innovative and competitive edge. Our goal is to help our customers transform their operational technology (OT) environments with secure AI offers that minimize disruption and risk while enriching value.

We are also cognizant that OT systems in critical infrastructure industries have become a more frequent target for cyberattacks. In fact, research shows that since 2023, more than 50% of all incidents each year have targeted critical infrastructure. It’s evident that malicious actors have discovered they can cause operational disruption, physical damage, and financial losses when cybersecurity is lax or insufficient.

Enhancing traditional cybersecurity with responsible, secure AI  

Regardless of the technologies Schneider Electric embeds in its offers, we are committed to protecting customers through our cybersecurity governance initiatives, and our AI solutions are no different. Built across the value chain, our approach to AI cybersecurity is based on industry standards for designing and manufacturing secure products and safeguarding the integrity of critical systems and services. It includes security processes and methodologies that span Secure by Design at the product development stage all the way through Secure by Operations for the ongoing maintenance and oversight of deployed technologies throughout their lifecycles.

Because we recognize that emerging technologies like AI bring new risks to our customers, we have introduced additional measures to help our customers use AI technology in a trustworthy and secure way. These include a commitment to responsible AI (RAI) supported by executive oversight, internal governance and policies, and ongoing risk management practices, as well as a focus on human oversight.

The added value of a responsible, risk-management driven approach 

Schneider Electric’s AI initiatives adhere to the ethics and compliance principles described in our Trust Charter. We aim to create and use trustworthy AI solutions by addressing environmental, ethical, societal, and technical issues such as sustainable impact, bias, robustness, transparency, and data protection.

As part of this commitment, we have developed a Responsible AI (RAI) Strategy, which aligns with the National Institute of Standards and Technology (NIST) AI Risk Management Framework and includes six guiding principles:

  1. Sustainable AI: This principle focuses on the carbon impact of the AI solutions we develop and deploy and their overall impact on sustainability and energy efficiency.  
  2. Accuracy and robustness: We also want to ensure that our use of AI and AI models provides accurate results, with systems that perform correctly for their intended purposes and handle real-world data.
  3. Human-centric and fair: The goal of this principle is to avoid bias in the data and to be fair to the people using AI systems.
  4. Data governance and privacy: Here the focus is on ethically managing the vast amounts of data involved in AI while adhering to compliance and legal guidelines.  
  5. Transparency and explainability: We aim to adopt AI in a way that is as transparent and explainable as possible.
  6. Accountability: Key to our strategy is the ability to identify and manage the risks and benefits involved in our usage of AI and the cybersecurity exposure of the company.  

To support the implementation of our RAI strategy in alignment with our AI cybersecurity governance, we have added the following layers to further help ensure our offers are trustworthy and secure.

Global executive oversight: From a cyber governance perspective, all our AI activities are reviewed by an executive RAI Committee. The committee includes senior C-level officers who oversee the management of AI, products and security, data, legal, and compliance. Collectively, these officers help ensure that we apply trustworthy AI to our use cases, identify risks, and comply with regulations. They also oversee the regulatory landscape, compliance, and ethics as they relate to our critical AI use cases.

Internal governance and policies: As part of our AI governance structure, we have an RAI Office that includes subject matter experts, line-of-business executives, and legal and compliance managers. This office develops internal cyber governance and processes and defines and enforces principles for the responsible use of AI within the company. The people in this office ensure our AI usage aligns with Schneider Electric’s values and goals. The office is also responsible for building a cyber-aware AI culture through mandatory and voluntary training programs at various levels within the company.

Ongoing risk management and mitigation: We also have an AI Risk and RAI team, which includes digital risk, RAI, and cybersecurity leaders who manage day-to-day use cases. This team defines and implements AI regulatory requirements and RAI standards, and it applies risk management processes through an AI risk assessment framework.

The assessment ensures our use cases align with the six governing principles in our RAI strategy and comply with industry standards and regulations, including the European Union’s Artificial Intelligence Act (EU AI Act). Through the assessment, we identify any potential risks in our use of AI in customer solutions. Once identified, we use the standard cybersecurity controls we have in place to mitigate those risks before deploying an AI-embedded solution to our customers.
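
To make the shape of such an assessment concrete, the sketch below models a use case as a simple checklist scored against the six RAI principles, with deployment gated on a recorded mitigation for every high-risk item. It is a minimal illustration only: the class, field names, scoring scale, and example use case are assumptions made for this post, not Schneider Electric’s actual framework or tooling.

# Hypothetical sketch of an AI use-case risk assessment, for illustration only.
# The principle names follow the six RAI principles described above; the scoring
# scheme, thresholds, and field names are assumptions, not an actual framework.

from dataclasses import dataclass, field

PRINCIPLES = [
    "sustainable_ai",
    "accuracy_and_robustness",
    "human_centric_and_fair",
    "data_governance_and_privacy",
    "transparency_and_explainability",
    "accountability",
]

@dataclass
class UseCaseAssessment:
    name: str
    # Risk score per principle: 0 (no concern) to 3 (high risk).
    scores: dict = field(default_factory=dict)
    mitigations: list = field(default_factory=list)

    def open_risks(self, threshold: int = 2) -> list:
        """Principles whose risk score meets or exceeds the threshold."""
        return [p for p in PRINCIPLES if self.scores.get(p, 0) >= threshold]

    def ready_to_deploy(self) -> bool:
        """Deploy only after every high-risk principle has a recorded mitigation."""
        return all(any(m["principle"] == p for m in self.mitigations)
                   for p in self.open_risks())

# Example: a predictive-maintenance model flagged for a data-privacy risk.
assessment = UseCaseAssessment(
    name="predictive-maintenance-model",
    scores={"data_governance_and_privacy": 3, "accuracy_and_robustness": 1},
)
assessment.mitigations.append(
    {"principle": "data_governance_and_privacy", "control": "anonymize telemetry data"}
)
print(assessment.open_risks())       # ['data_governance_and_privacy']
print(assessment.ready_to_deploy())  # True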

Driving value through a balanced approach

Like our approach to other areas of cyber governance, Schneider Electric’s AI strategy balances trustworthy and responsible AI with innovation at scale. Through our robust approach to AI cybersecurity governance, we intend to help our customers – and our business – safely and securely benefit from AI with industry-leading innovation while minimizing risk.

To learn more about our AI approach and how our company uses AI, you can:
