[Podcast] The human impact of algorithmic systems

What every leader must understand about AI’s impact

As artificial intelligence becomes embedded in everything from healthcare to hiring, the decisions suggested by AI systems are increasingly shaping human lives. But who defines the rules behind those decisions, and what happens when they go wrong?

In this episode of the AI at Scale podcast, Renée Cummings, criminologist, data activist, and Professor of Practice in Data Science at the University of Virginia, offers a powerful perspective on the real-world consequences of algorithmic systems. Drawing from her work in criminal justice and public policy, she explores how data-driven technologies can either reinforce systemic bias or drive meaningful change.

Renée introduces the concept of competitive vigilance, urging leaders to move beyond compliance and embrace a deeper understanding of how data and algorithms influence autonomy, equity, and opportunity.

She also shares frameworks for building more inclusive, justice-oriented AI systems, ones that empower rather than undermine.

This conversation is a must-listen for executives, technologists, and policymakers who want to lead responsibly in an AI-driven world.

This episode was originally published in May, but we want to revisit this important topic while we work on new episodes and enjoy the summer holidays. We hope you enjoy this time too!

Listen to the AI at Scale podcast

Listen to the Renée Cummings: Designing AI systems for high-stakes decisions episode. Subscribe to the show on your preferred streaming platform (Apple Podcasts, Spotify) or play the episode on YouTube.

Transcript

Gosia Gorska: Welcome everyone. This is Gosia Gorska, the host of the Schneider Electric AI at Scale podcast. Today I meet a very special guest, Renée Cummings, Professor of Practice in Data Science at the University of Virginia and a thought leader in AI and AI ethics. Welcome, Renée.

Renée Cummings: Thank you so much for that wonderful introduction. 

Gosia: It’s an honor to meet you. 

Renée: Thank you.

Gosia: And let me introduce you properly to our audience. So you are a criminologist and criminal psychologist and also an AI ethicist. What I’ve learned about you is that you are actually the first data activist in residence at the University of Virginia School of Data Science, where you were named Professor of Practice in Data Science. You are the Co-director of the Public Interest Technology University Network at the University of Virginia and a community scholar at Columbia University. You are also a non-resident senior fellow at the Brookings Institution and the inaugural Senior Fellow in AI, Data and Public Policy at All Tech Is Human, a leading think tank. You’re also a distinguished member of the World Economic Forum’s Data Equity Council and the World Economic Forum’s AI Governance Alliance, and a highly awarded, internationally recognized consultant and advisor. What is the one thing that I didn’t mention, but is important to know you better, Renée?

Renée: I think that would be how much I love to see people succeed and how much I love to assist people in achieving their dreams, their goals, and desires. And how much I just love to celebrate people who are doing successful things, and just ensuring that people have the ability to live the life that they so desire. So I think that’s the one thing that most people may not know about me because it’s not on my resume, but my ability to celebrate others is really my superpower.

Building accountable AI for high-stakes applications 

Gosia: OK, that’s fantastic to know. And I’m sure that our audience really appreciates this positive energy and the information that you will be sharing. So let’s jump into our topic then. My first question is: what inspired you to focus on AI ethics and data justice, and how has your background in criminology influenced your work in this field?

Renée: Well, that’s the most brilliant question because it’s also the easiest question. So I entered criminology with this desire to ensure that justice was served in real time. And it is that work in criminal justice and criminal psychology and therapeutic jurisprudence, and just working with people who were dealing with so many systemic challenges. And of course, in the criminal justice system, we have been using algorithms for over two decades in the United States. So algorithms were not new to the criminal justice system. However, what I realized is that among the criminologists, among the criminal justice experts and many of the administrators working in the criminal justice system, we really did not understand the intricacy of an algorithm or this thing called a black box. So my own intellectual curiosity and a desire to do my job effectively and efficiently, and to really bring the kind of service delivery excellence that I wanted to ensure my clients in the criminal justice system received, all of that made me intellectually drill deeper into the black box. And what I was realizing is here we were using data, really not understanding data intimately, but using data to make some very high-stakes decisions in the criminal justice system. And what were we doing with these algorithms? We were creating these zombie predictions in the United States that really overestimated the risks of many individuals. We also overestimated the risk that many individuals posed to public safety. And of course, in the criminal justice system, you’re dealing with things like rearrest, you’re dealing with things like reoffending, called recidivism. And I’m using these risk assessment tools, I’m using these algorithms to make these decisions. And something in my conscience is saying to me, this is not right. This is not the best result that we can get from technology. Why are we using new technology and coming up with the same old decisions? What are we not getting right? And it’s just that intellectual curiosity, that desire to do good for people, that desire to ensure that although someone was facing a sentence or although someone was incarcerated, that individual still deserved equitable service. So it is that work as a criminologist that made me realize something wasn’t right with these algorithms. You know, the algorithms were behaving badly in the criminal justice system. And it’s always that desire to sort of right the wrong, to bring equitable justice, and to ensure people live these really equitable, fair, wholesome lives that led me to AI, led me to data activism, and that’s why I’m here today.

Navigating ethical challenges in AI development 

Gosia: Yes, that’s wonderful. And indeed, you know, I was reading a book recently and there was one explanation that struck me. It was that probably we are looking so much into using AI technology because as humans we are just lazy. We are looking for the easiest solutions to solve some of the challenges, some of the problems. And now when you apply this laziness to criminal justice, to, you know, some of the systems that are here to actually evaluate if you can get credit or if you should be incarcerated, this is, as you say, such a high-stakes area that it really comes with very pressing ethical challenges. So I wanted to ask you, what do you see as exactly the most pressing ethical challenges in the development and deployment of AI technologies that are meant to help us, meant to simplify? But in the end, are they really doing it, or are they actually making things even more complex?

Renée: So, I will say this: I’m very committed to AI. I have seen the brilliance of this technology. I’ve seen what is extraordinary about this technology. I’ve seen the ability of this technology to have a major social impact, to impact just every sector, every discipline, every industry. So I am very committed to its development, very committed to looking at the ways in which AI is evolving: from AI we’ve had Gen AI, we have agentic AI, now we’re talking about AGI. And I’m really committed to being part of the process every step of the way. But I’m also committed to ethical innovation, responsible AI, trustworthy AI, safe AI. These things are very crucial to the ways in which we build the technology, because we should ensure at the foundation of all that we do, there is equity, there is fairness, there is justice. So when it comes to thinking about AI and what is the greatest ethical question that I think we’re challenged with at the moment, I would say it’s the concept of trust. How do we trust this technology? How do we trust the individuals who are entrusted with the responsibility of building this technology? And when we’re thinking about trust when it comes to AI, we’re also thinking about questions such as accountability, transparency, explainability. All of these are linked to how we trust the technology. Whether or not the technology is biased, whether or not it discriminates, whether or not the decisions that are being made are stereotypic or unfair, questions around privacy, questions around how we protect privacy, questions around how we govern this technology, all come back to the question of trust. And of course, the question of trust extends to public trust and public confidence in this technology, if it’s making decisions that are unethical or decisions that are unsafe or decisions that are unfair. So I think our greatest challenge with AI is trust. And we’re seeing that across a spectrum that goes from its ability, at scale and speed, to deploy disinformation, to questions of deepfakes, to questions of how this technology could violate your civil rights and your human rights, to questions of how the technology can deploy trauma and impact people, when we think about nonconsensual sexual photographs and images that are deployed as well. So trust is the major ethical challenge of this technology. And if we can get trust right, then we can truly, as a society, benefit equitably from this extraordinary technology.

Shaping AI policy and governance

Gosia: Yes, exactly. And you touch on a very important point, which is shaping AI policy and AI governance. A lot of the topics that are linked to trust could probably be solved through AI policies and AI governance. What role do you believe data activists, ethicists, and people like you, who marry different domains with AI and data science, should play in shaping AI governance?

Renée: I think we have the most critical role to play because what we bring is interdisciplinary thinking. And that is so critical to the work of AI when it comes to governance, because we’re governing what is a sociotechnological system. So we’re governing a technology that’s having major societal impacts, that’s impacting every group, not only vulnerable groups. Because one of the things that I say is, we are all part of vulnerable groups when it comes to this technology, because we could become vulnerable in different ways. This technology may impact an individual looking for a job. It may impact an individual looking for a house, it may impact an individual looking for credit, it may impact an individual looking for peace, an individual who is trying to seek refugee status. You know, each of us could be impacted and become vulnerable when it comes to this technology. I think what data activism and data justice and AI governance and policy really drill into is the fact that we have got to ensure, when we are thinking about this technology and we are building this technology, that we do not suppress things like agency and autonomy and self-determination and self-actualization and democracy. These are critical concepts that AI has the ability to sort of squash in real time. So when you think about AI policy and you think about AI governance, it’s so critical to ensure you have that interdisciplinary perspective, to ensure you have people with different experiences as part of the process, as well as individuals who have been impacted by the technology to share some of those stories, to ensure that we shape the legislation in a way that is effective and impactful in real time. That is a critical aspect of AI governance. And for me, what I’m very passionate about is that while we do have a preponderance of ethicists and sociologists and civil society at the table, we need to have more data scientists there. And as a faculty member of the School of Data Science at the University of Virginia, I’m in the process right now of standing up a global AI Policy Lab in the School of Data Science, where we could ensure data scientists from undergraduate straight to the PhD process are an important part of how we are thinking about the future of AI governance. Because let’s look at this: there is no AI without data. And if we are thinking about AI governance and the future of AI governance, that is linked to the future of data, so that what happens with data impacts AI governance in real time. So we’ve got to ensure data scientists are also a critical part of that when it comes to AI governance. It’s also about awareness, which is so very important. It’s about ensuring that the general public is also informed, to be part of that dialogue. And really, it’s also about building up civil society and stakeholders to ensure that their voice is not just a voice, but a voice that is active and a voice that is also knowledgeable.

AI’s role in transforming industries 

Gosia: Yes, exactly. And you know, actually this was one of the goals of why we even launched the podcast. Our first objective was just to include more people in the conversation about what AI can do and what its limitations are. At Schneider Electric, we of course apply AI mainly in energy management and industrial automation contexts, so across our businesses, but it’s a technology that is evolving at such a fast pace. We see new innovations and new things coming almost every week. And part of the discussion, part of applying it in the business in a responsible way, is to educate people and involve them in the conversation. So I completely agree with what you were saying. And we have seen it from the very beginning with our podcast too, that this should be the goal: really include as many people as possible, make them aware, and really deploy these AI systems in a secure, responsible way. And you already started to give us some examples. I was also very curious to know, from your perspective and from your experience, what have been some examples where you have seen AI used in real life, both in positive and in negative scenarios, and what lessons can be learned from these cases?

Renée: I think I will go immediately to healthcare, because I think we’re seeing some of the most extraordinary advancements in healthcare because of this technology. In research, when we’re thinking about what researchers are able to do in real time, things that probably took 10 years are maybe taking 10 weeks. So we’re seeing a real shaving down of time when it comes to every industry. When we think of cancer research, when we think of research in the brain sciences, when we think about just every kind of scientific research being impacted in a really positive way by this technology. When we think about manufacturing and communications, and we think about the ways in which this technology is being used in architecture, which to me is really brilliant: how we can deploy drones to really fix bridges and skyscrapers, and how we can protect humans by putting this technology in the place of humans. How we could use it in places where there could be disease and bacteria and where you do not want human exposure. So there are so many ways. I think, as I said, every industry has been impacted in real time by this brilliant technology. Every business model is being reimagined and enhanced. Just think about what we can do now in online shopping, what we’re able to do in entertainment. When we think about augmented reality and virtual reality and the many realities in which we can exist because of this technology, we cannot underscore enough how brilliant this technology is. But we know there are challenges, because the technology is built on data. And we know that historically data has not really lived up to its truest potential. We know that many of the data sets, many of the models that have been deployed, are models that are steeped deep in bias and discrimination and systemic challenges and stereotypes. Criminal justice is a critical place; healthcare is another place where the data sets may not be reflective of the realities of who we truly are. We know that when we think about financial services, and when we think about home and real estate, we think about concepts such as digital redlining or algorithmic redlining, and we see that this technology still has the ability to undermine the progress of certain communities. So we know there are major challenges. What we also know is that these are very high-stakes areas. So when we think about a technology that has the ability to undermine generational wealth, or a technology that has the ability to undermine the safety and security of a particular group or particular community, or a technology that misidentifies a particular individual or particular community, or a technology that really creates a context where certain communities cannot thrive, where certain individuals cannot be themselves, we know that we have major challenges. But I think we also know what these challenges are. And if we are honest in our thinking and in our approaches, then we can do what is right to reduce and to mitigate those risks. And, of course, use that kind of knowledge and that kind of feedback to enhance the models and ensure that the models that we’re deploying, the systems that we’re creating, the policies that we are crafting are done in ways that are more fair, more equitable, more inclusive, more justice-oriented and trauma-informed. So there’s good work that we can do with AI. If we use it for good, if we use it responsibly, if we use it with a conscience, and if we add heart and humility and empathy to the technology, then we can do right by society. We can do good for humanity with AI.

Gosia: Yes. And as you said, it really comes down to data. And I think you have some ideas about how organizations can ensure that AI systems are transparent, accountable, and fair, especially when they deal with historically biased data. Are there any frameworks or approaches that you would recommend?

Renée: Of course, there are many frameworks out there. I think all major international organizations, from the UN to the EU to the World Economic Forum, just many major institutions and agencies, are deploying frameworks. Some of the frameworks that I want to stand up would be some of the indigenous frameworks, the CARE approach to data. New Zealand has been doing some really exceptional work when it comes to thinking about indigenous practices and using those indigenous practices to ensure that our approaches to data stewardship and data governance and data protection really put people at the center, bringing that human-centric approach to what we’re doing. And of course, one of the things I like to say to companies is, if you don’t want to have a discussion about diversity, equity, and inclusion, let’s have a conversation about competitive vigilance. If you want to ensure that you are deploying the most competitive large language model, or the most competitive AI system, or the most competitive GPT, or the most competitive AI business model, then you’ve got to have the vigilance that is required, the due diligence that’s required. You’ve got to ensure that you build duty of care into your models. And for you to do that, you have got to bring a sophisticated understanding of what those risks are, how you mitigate those risks, how you manage those risks, and how you set things up to detect those risks in real time. So competitive vigilance is critical to the design, development, and deployment of any AI system. So it’s really crucial for us to understand what data is, and to not only understand what data is about and how data really is shaping the rhythm of the world as it is today, but also to understand what data is not, what things we cannot do with data, and how we cannot force data to perform. And we have to understand that this thing that is so powerful is also so vulnerable, and it needs to be protected and cared for in a very particular way. So what I always call for is a very sophisticated and intimate understanding of data, understanding the challenges, and understanding the great success stories you can tell with data. Data is about intelligent decision making. Data is also about imagination, and it’s about foresight and the ability to predict and the ability to design and develop extraordinary things with data. But we’ve got to get our data right if we want to get our AI right.

Advice for aspiring AI ethicists 

Gosia: Yes, exactly. Thank you, Renée. We still have time for the last question, and the last one that I wanted to ask you is about people who are thinking about changing their careers, or maybe young professionals or students who are interested in pursuing a career in AI ethics and data justice. What advice would you give them? Where should they start?

Renée: I will just say, start wherever you are. Look at where you are, look at the industry you are in or the discipline that you are pursuing, and ask yourself: how does data impact what I am doing now? How is data impacting the future of what I am doing? How is data going to make my job relevant or irrelevant? How can I use data to enhance not only my personal performance in my job, but the broader social good? So I think it first comes with that understanding. It also comes with the understanding that data is impacting and changing everything at the moment. Data is the greatest geopolitical game changer. So know that wherever in the world you are, data is shifting the plates underneath your feet, shifting the plates of your country, reimagining what the world order looks like at the moment. The other thing that I would say when it comes to data justice: data justice is really about ensuring that we all have equitable access to the data that’s out there, but also ensuring that we have a say in how our data is being used or not used, a say in the narratives being told with our data and the narratives that are not being told with our data. So I would say, if you’re interested in this field, across the world there are so many institutions now offering degrees and certificates and courses in AI ethics and AI safety and data justice and data governance and AI policy. Just get involved. Just get involved wherever you are. Because the main thing that I want to say to you is that AI needs you. AI needs all of us. And for us to ensure that we get the most savvy and sophisticated kind of AI, it means that we’ve all got to come together to really ensure this technology is being designed and deployed in ways that could lift us up, ways that could ensure that we are empowered, and not ways that disempower and undermine our progress, our future, our success, and our legacies.

Gosia: Yes, exactly. And by the way, talking about courses, I actually found out that you have at least five courses on Coursera. So I’m definitely looking into those, and I will be following along to learn more from you, Renée.

Renée: Also the University of Virginia School of Data Science, that’s where you can find me and where you can find some of the most powerful courses in data science and AI. And I will say this without apology: you can also find the best faculty in the world there.

Gosia: That’s wonderful. Thank you so much. I really appreciated the conversation and thank you for finding time in your busy schedule to be with us. I’m sure our audience will appreciate this conversation. 

Renée: Thank you, Gosia, thank you so much. It’s been an absolute honor. Thank you. 

Like what you hear?

The AI at Scale Schneider Electric podcast series continues!

The first Schneider Electric podcast dedicated only to artificial intelligence is available on all streaming platforms. The AI at Scale podcast invites AI practitioners and AI experts to share their experiences, insights, and AI success stories. Through casual conversations, the show provides answers to questions such as: How do I implement AI successfully and sustainably? How do I make a real impact with AI? The AI at Scale podcast features real AI solutions and innovations and offers a sneak peek into the future.  
 

AI at Scale: Designing AI systems for high-stakes decisions
