Nobody Should Blindly Trust AI. Here’s What We Can Do Instead


Artificial intelligence. Years from now, someone will write a monumental book on its history. I’m pretty sure that in that book, the early 2020s will be described as a pivotal period. Today, we are still far from Artificial General Intelligence (AGI), but we are already very close to applying AI across all fields of human activity, at an unprecedented scale.

It may now feel like we’re living in an “endless summer” of AI breakthroughs, but with amazing capabilities comes great responsibility. And discussion is heating up around ethical, responsible, and trustworthy AI.

The epic failures of artificial intelligence, like the inability of image recognition software to reliably distinguish a chihuahua from a muffin, illustrate its persistent shortcomings. More serious examples, such as biased hiring recommendations, do little to bolster the image of AI as a trusted advisor. How can we trust AI in these circumstances?

The foundation of trust

On one hand, creating AI solutions follows the same process as creating other digital products: the foundation is managing risks, ensuring cybersecurity, and assuring legal compliance and data protection.

In this sense, three dimensions influence the way that we develop and use artificial intelligence at Schneider Electric:

1) Compliance with laws and standards. One example is our Vulnerability Handling & Coordinated Disclosure Policy, which addresses cybersecurity vulnerabilities and targets compliance with ISO/IEC 29147 and ISO/IEC 30111. The new responsible AI standards are still under development, and we actively contribute to them with the goal of full compliance.

2) Our ethical code of conduct, expressed in our Trust Charter. We want trust to power all our relationships in a meaningful, inclusive, and positive way. Our strong focus and commitment to sustainability translates into AI-enabled solutions that accelerate decarbonization and optimize energy usage. We also adopt frugal AI, striving to lower the carbon footprint of machine learning by designing AI models that require less energy.

3) Our internal governance policies and processes. For instance, we have appointed a Digital Risk Leader & Data Officer dedicated to our AI projects. We launched a Responsible AI (RAI) workgroup focused on frameworks and legislation in the field, including the European Commission’s AI Act and the American Algorithmic Accountability Act. We deliberately choose not to launch projects that raise the highest ethical concerns.

How hard is it to trust AI?

On the other hand, the changing nature of the application context, possible imbalances in the available data that introduce bias, and the need to back up results with explanations all add complexity to trusting AI.

Let’s consider some pitfalls around Machine Learning (ML). Even though the risks can be similar to those of other digital initiatives, they tend to scale more widely and are harder to mitigate due to the increased complexity of the systems involved. They require additional traceability and can be more difficult to explain.

There are two crucial elements to overcome these challenges and build trustworthy AI:

1) Domain knowledge combined with AI expertise

AI experts and data scientists are often at the forefront of ethical decision-making. They detect bias, build feedback loops, and run anomaly detection to avoid data poisoning, all to keep control of applications that may have far-reaching consequences for humans. They should not be left alone in this critical endeavor.

To select a valuable use case, choose and clean the data, test the model, and control its behavior, you need both data scientists and domain experts.

For example, take the task of predicting the weekly HVAC (Heating, Ventilation, and Air Conditioning) energy consumption of an office building. The combined expertise of data scientists and field experts enables the selection of key features in designing relevant algorithms, such as the impact of outside temperatures on different days of the week (a cold Sunday has a different effect than a cold Monday). This approach ensures a more accurate forecasting model and provides explanations for consumption patterns.

Therefore, if unusual conditions occur, user-validated suggestions for relearning can be incorporated to improve system behavior and avoid models biased by overrepresented data. Domain experts’ input is key for explainability and bias avoidance.
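To make the office-building example concrete, here is a minimal sketch of how domain knowledge (occupancy depends on the day of the week) can shape a forecasting model’s features. All function names, coefficients, and the 18 °C heating base are illustrative assumptions, not an actual Schneider Electric model.

```python
# Toy HVAC energy model: domain knowledge says a cold Sunday (building
# unoccupied) matters far less than a cold Monday (building occupied).
# All coefficients below are illustrative, not calibrated values.

def heating_degrees(outside_temp_c, base_c=18.0):
    """Degrees below the base temperature that the HVAC must make up."""
    return max(0.0, base_c - outside_temp_c)

def daily_hvac_kwh(outside_temp_c, weekday):
    """Linear sketch: fixed baseline load plus temperature-driven heating,
    with much smaller terms on weekends (weekday: Mon=0 .. Sun=6)."""
    occupied = weekday < 5                      # offices are empty on weekends
    baseline = 120.0 if occupied else 40.0      # kWh of fixed load (assumed)
    per_degree = 15.0 if occupied else 4.0      # kWh per heating degree (assumed)
    return baseline + per_degree * heating_degrees(outside_temp_c)

def weekly_hvac_kwh(daily_temps_c):
    """Sum the daily model over a Mon..Sun list of mean outside temperatures."""
    return sum(daily_hvac_kwh(t, d) for d, t in enumerate(daily_temps_c))

# A cold Monday at 2 °C drives far more consumption than a cold Sunday:
cold_monday = daily_hvac_kwh(2.0, weekday=0)    # 120 + 15 * 16 = 360 kWh
cold_sunday = daily_hvac_kwh(2.0, weekday=6)    #  40 +  4 * 16 = 104 kWh
```

Without the occupancy feature, a model would average the two days together and explain neither; encoding it keeps the forecast accurate and its behavior explainable.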

Domain experts working with data scientists can ensure the success of AI projects and trustworthy AI.

2) Risk anticipation

Most current AI regulation applies a risk-based approach, for good reason. AI projects need strong risk management, and anticipating risk must start at the design phase. This involves predicting the different issues that can occur due to erroneous or unusual data, cyberattacks, etc., and theorizing their potential consequences. This enables practitioners to implement additional actions to mitigate such risks, like improving the data sets used for training the AI model, detecting data drifts (unusual data evolutions at run time), implementing guardrails for the AI, and, crucially, ensuring a human user is in the loop whenever confidence in the result falls below a given threshold.
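Two of the mitigations above, drift detection and a human-in-the-loop confidence threshold, can be sketched in a few lines. This is a minimal illustration under assumed names and thresholds, not a production guardrail implementation.

```python
# Sketch of two run-time guardrails: a crude drift signal and a
# confidence gate that routes uncertain results to a human reviewer.
# All names and thresholds here are illustrative assumptions.
from statistics import mean, stdev

def drift_score(reference, recent):
    """How many reference standard deviations the recent input mean has
    shifted away from the training-time (reference) mean."""
    sigma = stdev(reference)
    return abs(mean(recent) - mean(reference)) / sigma if sigma else 0.0

def route_prediction(prediction, confidence, *, min_confidence=0.8):
    """Act automatically only when the model is confident enough;
    otherwise hand the result to a human user for validation."""
    if confidence < min_confidence:
        return ("human_review", prediction)
    return ("automated", prediction)

reference = [20.1, 19.8, 20.5, 20.0, 19.9]   # inputs seen during training
recent    = [24.9, 25.3, 25.1, 24.8, 25.2]   # run-time inputs have shifted

drifted = drift_score(reference, recent) > 3.0   # 3-sigma rule of thumb
```

In practice the drift signal would trigger retraining or alerting, and the confidence gate is what keeps a human in the loop for borderline decisions.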

The journey to responsible AI focused on sustainability

So, is responsible AI lagging behind the pace of technological breakthroughs? In answering this, I would echo recent research by MIT Sloan Management Review, which concluded: “To be a responsible AI leader, focus on being responsible”.

We cannot trust AI blindly. Instead, companies can choose to work with trustworthy artificial intelligence providers with domain knowledge who deliver reliable AI solutions while ensuring the highest ethical, data privacy and cybersecurity standards.

As a company that has been developing solutions for clients in critical infrastructure, national electrical grids, nuclear plants, hospitals, water treatment utilities, and more, we know how important trust is. We see no other way than developing AI in the same responsible manner, ensuring security, efficacy, reliability, fairness (the flip side of bias), explainability, and privacy for our customers.

In the end, only trustworthy people and companies can develop trustworthy AI.

Welcome to AI Hub Schneider Electric

Want to know more about AI at Schneider Electric? Visit se.com/ai.

