Building AI with human agency at the center
What if AI could do more than just make decisions? What if it could make the right ones? In this episode, Hamilton Mann, Group Vice President, Global Digital and AI Transformation for Strategy, Marketing and Sales, Thales, and author of Artificial Integrity book, invites us to rethink the very foundation of artificial intelligence. Instead of chasing ever-greater computational power, he proposes a paradigm shift: designing AI systems that are not only intelligent but also guided by integrity, baked into their code.
Drawing from his research and leadership in digital transformation, Mann introduces the concept of “artificial integrity”, a framework for building AI that reflects ethical reasoning and human values at its core. This conversation explores how such systems could transform industries by prioritizing fairness, transparency, and long-term societal impact. Furthermore, Hamilton shows how integrity-led AI can help businesses not only perform better, but lead more responsibly in a rapidly evolving digital world.

Listen to the AI at Scale podcast
Listen to Hamilton Mann: Artificial integrity episode. Subscribe to the show on the preferred streaming platform (Apple Podcasts, Spotify) or play the episode on youtube.
Transcript
Gosia Gorska: Welcome back. This is the Schneider Electric AI at Scale podcast.
My name is Gosia Gorska, and today we are discussing the concept of AI integrity and how to build systems that not only deliver results, but also stay aligned with our values. Let’s welcome Hamilton Mann, a Tech Executive Group Vice President in charge of digital and AI transformation, encompassing strategy, marketing, and sales across defense, aerospace, and cyber at Thales and of course, a bestselling author.
Hamilton Mann: Hi, thank you very much for having me.
Integrity: the missing benchmark in AI
Gosia: To introduce you properly, I also want to highlight that you lecture at INSEAD and HEC Paris. You conduct doctoral research in AI at École des Ponts et Chaussées, are a senior fellow at the ReTech Center of the École des Ponts Business School, and a mentor at the MIT Pristina King Grey Center. You’re also the originator of the concept of “artificial integrity,” a paradigm-shifting framework that redefines AI system design to build integrity-driven rather than intelligence-led machines. And of course, you’re the author of the bestselling book Artificial Integrity, published by Wiley. So jumping right into the first question: What do you mean by artificial integrity, and why is this concept critical in today’s AI landscape?
Hamilton: Yes. So thank you for this great question because this is the very first time that I have that question. No, I’m joking.
Gosia: I can imagine.
Hamilton: I usually start, let’s say answering that question by coding Warren Buffett, right? And Warren Buffett famously said in looking for people to hire, you really want to look at trick qualities, integrity, intelligence and energy. And if they don’t have the first, the other two may kill you. And I think this principle is very much equally applies to many of those AI systems we use, whether it is for running our businesses or to make our living. Because even though they are not human, those AI systems, they absolutely have intelligence and they do have energy, right. So the question is, how do we ensure that those AI systems, we hire right in a way as participant in societies, in our many organization, no matter let’s say you know, the settings, public, private and so on and so forth, non for profit. How do we ensure that those system are capable of something akin to what we call integrity? And so from the research that I’m connecting, right, I’ve came to a kind of a, let’s say a few conclusions. But definitely the difference between those systems that does not demonstrate this quality of integrity and the other is quite simple in a way. The former are designed it because we could and the letter are designed because we should, right? This is quite very much the difference. So artificial integrity is just that. It is about advancing AI developments so that AI systems could be capable of mimicking integrity over intelligence to exhibit ethical, moral, and social reasoning. And let’s say that ethical AI framework, I think it is first to say that they are often designed as external governance or oversight layers rather than integrating ethical reasoning capacities within the AI system themselves. So artificial integrity is very much about ingraining the very capability of AI systems to be guided by inherent mechanism and framework that ensures consistently integrity driven functioning in alignment with the human values, right. So I think this is very or so let’s say concerning because you know, you’ve seen maybe so far that current frontier AI models as we call it very much emphasize expanding technical capabilities in pursuit of high performance on benchmark tests and doing so they often do that at the expense of human centered consideration, right. And it is at the point that in fact these models are not crash tested against benchmark that evaluate their intrinsic integrity LED functioning. And for a simple reason because those type of benchmark is not quite very much existing so far or if there is some kind of a few initiatives that has not been taking place at a kind of a momentum that can be a kind of a stand out nowadays. So artificial integrity is very much founded on that vision that very much the next frontier of AI lies not in developing increasingly sophisticated or powerful models, but very much in embedding integrity at the core of their design. So it is about ensuring that we can develop AI systems acting in ways to uphold trust, reflect as I was saying, societal values and very much contribute to let’s say you know, kind of a constructive way of living as a human communities because they will be capable of as close as possible, I would say, demonstrating integrity LED behaviour by design.
Beyond guardrails: designing AI to do the right thing
Gosia: Yes. And it’s really a fascinating topic because at first you are touching on something that is preoccupying so many people that AI will not be connected with the human values. This is 1st and secondly, it’s also fascinating because we hear so much about building guardrails around this AI systems, but exactly you are asking the question what if we build AI in a different way so it demonstrates not only this high intelligence but also integrity. So my next question would be how does the system look like when it demonstrates integrity and intelligence? Could you share a real-world example or scenario?
Hamilton: Yeah. So first I need to disclaim the fact that this is, very much an area of research. And so we still have to learn. We still have to, let’s say, you know, build such a kind of a systems to grow in the maturity that will help us or make us able to claim, oh, now we are reaching a point where we have system able of artificial integrity. So this is going to be a journey, maybe the same kind of a journey that we’ve been living from the very early days of AI to the days that we are leaving today. But I can tell that even though we are at the very beginning of that journey, there is some kind of indication that can help us to envision what is going to be those artificial integrity systems. And let’s take for example, the recruitment, right, those type of AI tool that we can use for recruiting people. If we look at the way the kind of artificial integrity systems for such a kind of A use case could work, we could expect that they will be very much proactively addressing the potential biases and evaluate the fairness of their outcomes, right and making or helping to make fair hiring recommendation and decision. So this is, you know, one thing that will be very much at stake and at the hurt right of processes because it have a very, let’s say you know, butterfly impact in societies. In the case of insurance claim for the example, artificial integrity systems would be considering the furnace for example of its risk assessment, right, treating clients equitably with analysis of claims outcomes to refine future assessments, for example. Because here again, you are very much in the sectors that really have a large and wide impact in societies. So integrity is very much key, right? We can also think about, you know, application in supply chain for example, where artificial integrity systems will prioritize suppliers will meet ethical label standards and environmental sustainability criteria even if they are not the lowest cost option. I would say in conducting evaluation and sourcing decision and tracking these outcomes as well to assess long term over short term sustainability impact, right? And I mean, in many ways, we very much are in a kind of a situation and moment where we do have very critical sectors where integrity in AI systems is key because it is life altering if we are not degrading those systems. It also make me think for example, to the case that is increasingly and intimately pervasive in our daily lives because it is very much an area that concern all of us as well, which is the content moderation and recommendation. We are living in that social network era. And we all know now that you know, in those different social networks, you do have AI right at play since a long time ago and before let’s say any kind of AI shot. But and artificial integrity in such a system will be very much being private as user safety over engagement metrics, which is kind of a trap that we see today, right? And they will preemptively in a way filter content that could be harmful or misleading, continually learning from flagged removed content to improve their ethical filtering, and privatizing the consumer Wellness and well-being in a way because you know what is at stake. There is very much about how to avoid algorithmic techniques that manipulate the brain reward systems, right? Especially when you look at the different audiences that are vulnerable. I’m thinking about children, for example. So all of this is some of the illustration where you can see artificial integrity systems making kind of a direction and the kind of a progress that we need when it comes to envisioning AIC standard societies. And this is also an illustration that intelligence alone, which means without being guided by ethical, moral or social framework can easily stray, of course risking harm and of course risking and intended consequences, right? And this is absolutely what we want to avoid. So bottom line, to cope with this, this is quite the reason why integrity within AI systems should be a must and should be kind of a area of improvement on which we need to continue to invest. Because I mean, we know for sure that intelligence is absolutely not a sufficient condition for answering or guaranteeing a good in societies.
Train AI to reason, not just react
Gosia: Yes, I definitely see the need for artificial integrity. I can imagine it’s possible and how it could work. But then let’s talk about the motivation or the things that need to happen to make it possible. So what actions could be envisioned to set AI development on the path towards integrity alignment?
Hamilton: Thank you for that question because it touched, the kind of a hard point of my current research. There is many aspect that touch the topic of integrity within AI systems. And I would say this is a very interdisciplinary topic in a way. But let’s maybe deep dive on one point specifically. During my research, there is very much one essential element that comes out as a kind of a critical priorities in a way which is about building AI models considering the data process in a kind of a certain way. So first, beyond labelling the data, which generally refers to the process of identifying and assigning a predefined category to a piece of data, I think it is necessary to adopt the practice of annotating the data set in a kind of a systematic manner. And why the labelling data give it a kind of a form of identification so that the system can recognize it. The process of annotating is kind of a, let’s say, go a bit further because it allows for the addition of more details and extensive information than simple labelling cannot allow to us. And the data annotations give the data a form of abstract meaning, right? So that the systems can somehow contextualize the information. So including annotations that very much characterize a kind of a integrity code reflecting values, integral judgements regarding these values, principles underlying then or outcomes to be considerate, inappropriate relative to a given value model. It is a kind of a promising approach to train AI not only to be intelligent, but also to be capable of producing results guided by integrity to a given value model. And for example, in a data set used to train an AI customer service chat bot annotations process could include evaluation on integrity with respect to the value model reference set, ensuring that the chat bots’ response will be based on furnace, for example. And training data could also include a notation about ethical decision making in critical scenarios or ensure that is used ethically restricting privacy and consent. And so those kinds of things can be one of the peace rights to explore when it comes to, let’s say draining integrity within the IE systems. Maybe another essential element when it comes to the artificial integrity building is let’s say, you know, making the AI model capable of displaying features that is very much, let’s say, you know, construct base and make possible through the training methods, right? The AI trained using, for example, supervised learning techniques that allow the model to learn not only to perform a task, but also to recognize integrity laid and prefer outcomes is also, I think, a promising path for the development of artificial integrity. It is also conceivable to add information about the value model used to train a given AI model through that rotation again, and then use supervised learning to help the AI model understand what does and does not fit the value model right. So, for example, regarding AI models that can be used to create deep fakes, the ability to help the system understand that certain use indicates deep faking and do not match the value models right will be demonstrating artificial integrity at play, right in a way. And a complementary approach to that is, for me, the design systems where human feedback is very much integrated directly into the AI model learning process through what we call the reinforcement learning methods. This could involve humans reviewing, adjusting the AI’s decisions, for example, effectively training the AI model on more nuanced aspect of human values that are, let’s say, very kind of a difficult to capture with data and annotation alone, right? And so especially when it comes to foundational AI models, by definition, they are used in many countries around the world. User across the different countries should have the opportunities to express their feedback on whether the model aligns with their values. So the systems can very much continue to learn how to adapt to different value models that he’d impact in a way. So of course there is measure, let’s say you know, difficulties and pitfalls because there is a kind of let’s say element of subjectivity when it comes to values. And also, you need to consider different cultures, communities and individuals when it comes to that definition of values that may varies of course in terms of perspective. So this is of course something that makes the points quite complex in that, but it also reinforced the fact that the path to our artificial integrity systems also is a kind of an invitations to a more, let’s say, you know, open ways of building AI models opens in the sense at least of having the participation of the users that are directly impacted by the systems at the end, right? So, and I think finally, maybe the points that can be also it’s a mention about this, is that developing artificial integrity systems is very much requiring what I was explaining earlier, which is the interdisciplinary collaboration, right? And to me, this is not something to be taken as an option. This is not something to be taken as a nice to have or not to be, let’s say, taken also as a kind of afterthought. This is very much something that need to be very much at the beginning. So the AI models and the way we train it and the way we design it and the way that say, you know, participant and the stakeholder that our plans to be the users are involved is very well thought in the very beginning throughout the process of building that AI models. This is quite not, let’s say the norms nowadays in the way we build those systems, but I think this is a kind of a promising path in that’s a building integrity within those systems.
Co-Intelligence
Gosia: Yes, I agree. And we already discussed it in previous episodes that AI really requires a larger and higher involvement of audiences of the end users who are using the systems because this is affecting them in a way that is different to compare with any other systems that we were using before any other technology. And as you mentioned about this cooperation between people, so end users and the system, it brings us to the concept of human AI Co-intelligence that you also explain in your book. So could you walk us through each stage and explain how this four different stages of human AI Co-intelligence, how they reflect the changing dynamics between humans and AI?
Hamilton: Yeah, I think. In the kind of a framing of a conceptual framework, which is also at the earth of my current research, I’ve been defining, let’s say that kind of ways we should envision the functioning of AI systems from a kind of AI and human intelligence collaboration. And Long story short, I think there is 4 mode that are very important to take into account, very critical in the way we, let’s say, you know, can envision collaboration with AI systems at scale that are very much, let’s say, setting the equation relative to human intelligence and AI intelligence at different degree in a way of, let’s say, the need of those intelligence. And let me break it down. The first mode is what I call the marginal mode. And the marginal mode is the mode where you don’t need that much AI to achieve your task and you don’t need that much human intelligence either. And on that mode, what is critical is very much to have the appropriate functioning of, let’s say, any AI systems, if it exists, so that it doesn’t create any burden over things overall, let’s say, you know, outcomes or delivery that we don’t need. And I would say even further, making sure that we don’t put AI where we don’t need at all. But at the same times, in that very same, let’s say, space, we also have a kind of a very critical questions when it comes to making sure that we are driving the right way. Let’s say human communities and human societies as a whole. Which is very much about how we deal with transitions where we see at the very moment that some of the task, some of the processes or some of, let’s say, the job that we’re doing are very much living. They are very last days, weeks, years of existence, right? Which means that any form of a work that we are experiencing do have a kind of expiration date. And so let’s not be, let’s say, naive thinking that anything is living in a fixed pie and making the work of reflecting about the transitions. Transitions means that the work may evolve. Transitions means that the work may be useless. Transitions made that the work needs to be made, reshape. It means many things, but it also means that we need to, let’s say, you know, anticipate that adaptation. So this is very much where the marginal mode is living and the AI human collaboration in that mode set very, let’s say, specific questions to what let’s say the way we should collaborate and the way we need to envision the collaboration. And it starts also by the fact of acknowledging that sometimes we don’t need such a collaboration. And if we don’t need it, we need to, let’s say, pay attention of not making a kind of a race for AI just for the sake of AI because it is not something that will be, let’s say, you know, generating value at the end. The second mode is very much about what I’m calling AI 1st. And AI first is basically where you very much need AI. You very much need artificial intelligence that very much precedes human intelligence for achieving a given task, for achieving a given goals, because AI is, let’s say, a kind of a unbeatable way, right, to reach a certain point of performance. And human intelligence will not be able to help you to reach that. OK, So in that AI first, let’s say a space, the collaboration is very much about how do we acknowledge the precedent that AI takes to human intelligence. What does it mean in terms of, let’s say gadgets, controls, mechanisms that need to be in place in such a systems that you know, behave and function in that space? AI first, because of course one critical aspect on that is that we are very much in the area where systems are operating with the IS autonomy, OK. And also of course, in some aspects behaving and generating and let’s say having some mechanisms that generates some, decisions in a way to, to let’s say, you know, execute some of the action and the functionings. That’s your pace of cognitive capability, right? So, so the AI 1st mode is also a specific area where the collaboration with the human intelligence raise new form of questions, but also new forms of, let’s say, you know, responsibilities and new form of let’s say, if not new, actually reinforce aspect in terms of design and in terms of integrity, of course, of the system. Because again, they are at the age of the autonomous decision making. The third mode is what I’m calling human 1st and the human 1st a bit by opposition to the AI 1st is let’s say you know, where you need to have human intelligence that surpass AI. This is very much styling, but acknowledging the fact that in some aspects and many actually you will not be replacing the human intelligence with AI. And it may concerns many tasks, many jobs, many things that we’re doing in life, right in that area. This is very much critical also to think about the mechanisms that will be fostering and not undermining human intelligence over AI, because this is crucial that we are very much using at its full pledge or using at its very natural aspect or very own human intelligence to do what needs to be done in that space. And I’m thinking, for example, about, you know, the skills of empathy, for example, that, you know, being are very much able to have and that even though some it’s AAI chatbot can mimic, I mean, you will not replace the eyes, the touch, the feelings, the sentiment that you have in front of you, of the, of someone that care about you, right? So the human 1st is also calling for a specific and scrutiny, let’s say you know, ways and aspect of designing and developing and envisioning the collaboration with AI in any setting in any organisations. Because it is very much for all of us in a way, an invitation or so to not falling in the trap of, you know, the buzzwords of AI and put at the centres, right, What make us human right. And the last mode is what I’m calling the fusion mode and the fusion mode is the fusion of human intelligence and AI. So this is very much where we will need, let’s say the best of both world, let’s put it this way, in the form of a egradation to achieve certain goals and certain tasks. And so usually when I’m talking about the fusion modes, it ring the bells to the mind of people saying, Oh, this is, for example, what Elon Musk is doing with neural ink and those and and those things. And yes, this is yes, this is this is such a kind of aspect, right? We are very much about, you know, that’s fair where the ebridation can goes at that, let’s say, you know, level right. And it of course touches a lot of questions not to mention the question about the anthropological question that it touch right in a way. But even though we are not looking that far, it is quite needed for fighting some of those diseases, right? That we, let’s say, you know, don’t know how to fight today, right. We can imagine having systems that are, let’s say, microscopic and living in our, let’s say, organisms to help our antibodies fight some specific diseases that we know today that we are not well equipped to fight against that. And we can also, let’s say, imagine all the progress that we can make when it comes to the bionic type of replacement of some of the, you know, part of our bodies where, let’s say, you know, it happens to, to anyone having an accident, etcetera, of life, losing my legs, lick my arm or whatsoever. And, and, and still being able to continue to live with some kind of an intelligence bionic, you know, so-called artificial organisms, right? So, so the fusion mode is also a specific, let’s say, you know, space very interesting to explore when it comes to the AI systems and machine collaborating with human intelligence and those 4 modes. And this is where let’s say, you know, the, the, let’s say the, the landscape of possibilities explode in a way depending on the way we collaborate and the way we need to collaborate will be, let’s say, you know, transitioning from one mode to another depending on what we need to achieve, depending on let’s say, you know, what is the, say the circumstances and so on and so forth. And to me, this is also a very key aspect of artificial integrity, which is about having that kind of an ethical, moral and social intelligence in grading in that AI systems, not only to consider those four mode, but also to be able to transition from one mode to another’s. Considering the very specific collaboration that is at play from the user perspective and from the human perspective. And at each time of the process. And the functioning being very much well aligned with human values, with ethical norms, and with all those aspects that let’s say, you know, foster integrity LED functioning in the systems.
AI errors, human insight
Gosia: Yes, that’s really fascinating. And I enjoy so much this conversation. I think this is really thought provoking. And I’m sure that our audience is now ready to go and read more in your book to discover the full concept. So thank you, Hamilton, so much for this conversation. If there is like one last message that you would like to share with our audience, with people who are looking into developing into applying AI, what would it be?
Hamilton: One last point I will say is that I mean, we need to continue to invest, of course, not only on AI, but most importantly on our human intelligence. I think this is a, this is not making the, let’s say the, the headlines nowadays compared to AI, but this is critical. So we need to continue to invest in our human intelligence because in that era of, you know, artificial intelligence and human intelligence collaboration that we was talking about, there is also a trap on which we, of course, do not want to fall, which is about, you know, mistaking, let’s say misinterpretation coming from the AI generated the outcome, mistaking misinterpretation for mastery, right? And so we’ve seen that, you know, AI systems is not, let’s say, you know, immune to error. We sometimes call that hallucination and human intelligence is very much our best gauge roles, rights to make sure that we’re not falling in that trap of mystically misinformation and even more misinterpretation for mastery.
Gosia: So I’m really looking forward to hearing more about the research, about the next steps. And I’m, I’m sure that we will meet again to follow up on this conversation. So thank you so much for your time, Hamilton. It was a pleasure to speak with you today.
Hamilton: Thank you so much, Gosia. It was my pleasure to be there. Thank you for having me.
Like what you hear?
- Visit our AI at Scale website to discover how do we transform energy management and industrial automation with artificial intelligence technologies to create more sustainable solutions.
- Listen to the other episodes of the AI at Scale podcast.
AI at Scale Schneider Electric podcast series continues!
The first Schneider Electric podcast dedicated only to artificial intelligence is available on all streaming platforms. The AI at Scale podcast invites AI practitioners and AI experts to share their experiences, insights, and AI success stories. Through casual conversations, the show provides answers to questions such as: How do I implement AI successfully and sustainably? How do I make a real impact with AI? The AI at Scale podcast features real AI solutions and innovations and offers a sneak peek into the future.

Add a comment