I happened to sit next to the CEO of an AI company on a plane journey for 10 ...
I was one of the few people in the world he could have talked to about that, so serendipity made me the world's first chief AI ethics officer.
Wow, okay.
Always talk to people on planes, that's the moral of the story.
That was Kay Firth Butterfield, CEO of Good Tech Advisory and our guest for this week's episode of Technology Now. And the world's first AI ethics officer, that's pretty cool. Yes, yeah, absolutely. And as she said in the intro, the moral of the story is: always talk to people on planes.
Now, on one of the last flights I've been on, I was in a middle seat and two blokes fell asleep on my shoulder and they took all the armrests. Anyway, today we are going to be diving deep into the ethical world around AI. I'm Michael Bird.
And I'm Sam Gerald, and welcome to Technology Now from HPE.
So the speed at which AI has advanced has been both a blessing and, to some extent, a curse. We've seen its use in medical discoveries. We've seen it assisting organizations around the world to increase their efficiency and productivity. And we've even seen its use in robotic bees used for crop pollination. But AI is also advancing faster than regulation can keep up with.
Yes, exactly. The fifth point in the 2025 AI Index Report from the Stanford Institute for Human-Centered AI stated that the responsible AI ecosystem is simply not evolving evenly across the sector, and that despite AI-related incidents being on the rise, responsible AI evaluations remain rare.
There was, however, acknowledgement that while the area is not evolving fast enough within industry, governments are showing far more urgency when it comes to responsibility in AI, with regulation starting to catch up to new and innovative uses of AI as they emerge. So how do we keep AI use responsible, then? Great question.
And obviously we have our own moral compasses, right?
But from a legislative perspective, we have experts in the topic of AI ethics who advise governments and organizations on policy. People like Kay, who you heard at the beginning, who talked to me about the complexities of responsible AI usage. But before we talk to Kay, let's take a look at one of the more novel uses of AI which has appeared in the past few years. It's time for Technology Then. This week we aren't going that far back in time, only about three years in one case and around nine months in another, because while AI ethics in film and TV might have been around since the 1950s, it has only really been a practical topic for a couple of years.
Because today, I'm going to be talking about the use of AI in the courtroom. Now, both of these cases I want to cover made global news. Back in 2023, a lawyer admitted to using an LLM for legal research after it came to light that the filing they submitted referenced multiple legal cases which did not exist. They were all AI hallucinations.
The mistake was not malicious; however, it was described by the judge as, quote, "an unprecedented circumstance," which is quite scary, really. But while this highlighted the dangers of AI in the courtroom, I want to focus on something a little bit more recent, and something which I would firmly put at the heart of the debate around ethical uses of AI. OK, so in 2025, the family of a murdered man gave consent for an AI deep fake of the victim to appear in court and give a victim statement. The victim statement presented in court was written by the sister of the deceased, and it wasn't shown in front of a jury, just the judge during sentencing. It's also important to note it was not submitted into evidence, but even so, questions about the ethics of using an image of a deceased person to create an AI for court were immediately discussed by experts around the world, given the precedent it could set for similar uses in the future. Michael, I'd love to know your views on this sort of novel use of AI.
This sort of stuff can potentially save time, can save resources, but I think the thing that we have to be careful about is hallucinations, or bad data that's gone in so that there might be some sort of biases within the AI models. But I think if some of those challenges can be addressed, I think potentially this sort of stuff can be quite useful. What about you?
Personally, I'm honestly a little bit against it, in the sense that you can't presume to know, like in this case, you can't presume to know what a person would think or feel, even if you are their relative or if it's trained on data about them. Humans are unpredictable. They may actually feel or think differently than we expect. Yeah. Yeah, sorry.
Yeah, it would make sense to provide a statement as the sister, but it's a bit presumptuous to then present it as the actual person.
Genuinely, I never knew what true randomness was until I had children, and my goodness.
Anyway, Sam, we all know that AI has led to many paradigm shifts across society. So to find out more about how we can use it responsibly, I took the opportunity, while at Davos the other week, to talk to Kay Firth Butterfield, CEO of Good Tech Advisory LLC, all about what ethics in AI actually is. In the early days, we were calling it ethics, but now we've rather moved to responsible AI, or trustworthy AI, because when we started talking about it internationally, you get into this, well, whose ethics are we talking about, whose ethics do we want to promote? And yet, when we did a survey at the World Economic Forum of all the national AI strategies and other documents out there around ethical AI, it turned out that everybody was worried about the same things, but they were addressing them in different ways.
And so that's how, as I say, we now talk about responsible AI. I actually have moved to: let's use AI wisely, because it takes all those pejoratives of responsibility or ethics out of it and just says, well, you know, here's a tool that acts upon us, let's use it wisely. Does it sort of boil down to the fact that the AI genie is well and truly out of the bottle, I think it'd be fair to say? Is this something we now just sort of have to live with and try to figure out how we coexist with it? We do have to learn how to coexist with it. You know, AI is simply a tool; without humans, without our data and without us using it, it just wouldn't exist. So I think we have to put it into that context. We are going to be coexisting with AI from birth to death, and we are not at the moment ready to do that. We don't have anywhere near enough AI literacy amongst the population, and unless we have that, I think we might lose some of the good things about AI, whilst people really worry, and probably rightly, about losing their jobs, or teenagers falling in love with AIs, which is bad for them and humanity. Yeah. So to some extent, understanding this problem is vital to the survival of our species. I might be overreacting to the problem slightly, but... Ah, well, you could be, but actually I was in Munich at a conference, and the head of the Kinsey Institute actually said that humanity was in a crisis, so maybe you aren't over-egging the pudding. And I think what he meant by that is that we are now extraordinarily lonely. But where does AI fit into that? I think the problem with it is that AI provides us a very easy way of dealing with loneliness. So you can talk to a chatbot, and you know, one of the problems of talking to a chatbot is that the chatbot will never, never challenge you unless you ask it to, will always be nice, will always "think" that you're great, because of course it doesn't think, okay, or love, or any of those things; it's always there for you. And we humans are much more difficult than that. And so it's just easier to talk to the chatbot. Yeah. To some extent, some AI chatbots can be slightly sycophantic; they can just say yes to everything, which isn't necessarily what you need.
Yeah, absolutely. And you know, when we look at what we call smart toys or AI enabled toys for the under sixes, one of the things that we are particularly worried about is that, you know, when you're engaging at that age, where all of your beliefs and values are being created, you're engaging with an AI that your parents are not monitoring. And it's
always nice to you. And then you go to school and there's rotten humans making you cry in the playground. And so, you know, what then happens to our society if we really choose our AI companions?
Yeah. Okay. So when we talk about coexisting with AI, what are we actually talking about here?
Things that we will see are, as I say, AI-enabled toys for our children. We have to monitor and understand AI in education. We're already seeing studies that show that if we use AI too much, we actually get less educated and less able to think critically. So we have to understand how much AI to use, where to use it. Obviously, there's AI in medicine, and I often tell this story: I had breast cancer in 2023. And my oncologist, when she found out what I did, she said, oh,
wouldn't it be great if we could have an AI to walk you through your journey with cancer?
And I said, well, let's put AI into your back office where it belongs and let's leave you the
human talking to me, the human, about my journey with cancer. I guess maybe what we're talking about here is letting humans do the things that humans are good at, and letting AI do the things that AI is good at. Absolutely, but without the conversation happening amongst everybody, we have things like surgeons saying, we could let AI tell somebody that they are dying instead of us telling them, because it's just a script that we learned in college. I personally don't think that's the right way
that we should be coexisting with AI. Well, then whose job is it to lead the process of: what does AI do, what do humans do? It lies on all of us, but it obviously should start amongst the foundational model providers, and it's worrying to see that we still have hallucinations, bias, problems with data privacy, problems with accountability, explainability, all those things, because they can't correct them. And of course, you know, why can't they correct them? Well, AI is built on imperfect people, it is built by imperfect people, it is built on imperfect data from imperfect people, so it's no wonder that it's riddled with all these problems. And do you think there will ever be a perfect model? What we have is a tool built on human frailties, designed to repeat human frailties and used by humans. That's not to say that, you know, it won't do fantastic things in various areas like science. But with coexisting, we humans have to understand the problems around it and work around those problems. It isn't a magic wand, and the hype is beyond hype. Yeah. Why do you think there is a disconnect between the jobs that an everyday person wants to see AI doing and the ones
it sort of appears to be doing. I think because the tool that has been built is very good at doing things like podcasting or writing emails or writing speeches. So it's really good at doing those things, and it's also good at faking empathy, which is really worrying. But, you know, human beings want it to do different things, and they want to see it regulated; 88% of people in Britain actually want to see AI regulated. It's in the 70s in the US. I mean, you know, there's that balance the government are trying to strike, which is: if you regulate too much, then you stifle innovation and then other countries will leap ahead because they won't have regulation, versus if it's not regulated at all, then, you know, who knows what could happen? Well, first of all, I don't think that regulation stifles innovation. I drive a sports car; I would not be driving it unless somebody had regulated that it had good brakes, and those safety measures have required innovation. So I don't think it's true to say regulation kills innovation. And when we say regulation, what do we actually mean by that? What does that practically look like for organizations? Well, I think that regulation would look like safety rules for cars, for example, and that's what it would mean for organizations. I spend a lot of time, as I said, working with, you know, Fortune 100 companies, and what they need is clarity. Companies work better when they know the rules that they're working to. It sort of feels a little bit like the rise of social media back in the day. Do you feel like AI is maybe at the beginning of that curve, where at the moment AI is a bit like, this could solve everything, and I can't see a sin...
Well, undoubtedly, that's right. Tristan Harris said, I don't know, back in 2023 I think, that our first encounter with AI had been social media, and that hadn't gone very well. And you know, what did we learn to carry into this second encounter with AI? Well, not very much. How can we see all the risks? And that in itself is one of the problems that I think we see for regulation and for human understanding. As humans, we're just not good at looking risk in the face. Yeah, yeah, yeah, it's very true. And I guess to some extent, we're still in the quite early days of these AI models; like, we still don't fully understand how people could use them and what they might use them for, for good and maybe for bad. Well, absolutely. So if we take health, there's a company that has recently said, upload all your medical records and then we can help you with your medical questions. In one way, that's amazing, because if you were in Rwanda, for example, where there's one doctor for every 27,000 people, that's an amazing tool. But there are very few people in Rwanda who can use it. And it's deeply invasive and completely unprotected with your data. Yeah, interesting. Yeah. So how can AI be used along with people to assist them with work?
Well, I think the first thing that you have to do if you're a business is train your employees to understand AI. A lot of businesses have just said, you must use AI, and it's going to be one of the things in your performance evaluation, how much you're using AI. And that has led to this thing that we call workslop, where people are using AI but haven't been trained on it, so they're getting it wrong, and then somebody else is going to have to spend up to two hours making it right. I think in a lot of situations, training actually calms nerves about it taking your job. And it enables you to really say, okay, so I can have AI write my emails, but I need to check them. One of the things with hallucinations, and we've seen it in the news, is you cannot put out a report without checking that it's not got hallucinations in it. This is a tool that can really bite you back. So training, training, training. Okay, thank you so much for your time. It's been a real pleasure. It's been a real pleasure. Well, I mean, this just reinforces that AI is a tool, first and foremost, and we have to be really careful about its applications. I don't know about how you use it at work, Michael, but for me, I actually quite enjoy it. I made my own AI agent. It's aptly named "Somewhat Sam," because sometimes it does somewhat get things wrong, and it's important to just double-check all of it. Is it "Somewhat Sam" that I speak to when I send you messages in Messages?
You'll never know. You'll never know. I was curious about something though. So in this discussion,
she was talking at one point about AI toys for kids, and Michael, you're a parent, so I'm really curious about your thoughts about these AI toys. Yeah, no. I mean, this goes into the realm of, like, parenting decisions, but I am really careful with the sorts of, like, even the media that my children consume. So, like, I have a slightly more curated approach to things. That's not to say, like, they can't, and my kids are quite young, so I think that's probably what a lot of parents do. I have the same concerns about critical thinking, and, like, I'm sure you've experienced spending time with chatbots and LLMs, and I think I said something to you: they can be quite sycophantic. We have to sort of build up that literacy and understand how AIs work, what they're great at and what they're not so good at. Yeah, I agree with you. People who don't have a lot of experience with some of these tools, and maybe who also don't have as much sense of identity or positive self-talk. I worry about the sycophantic nature of AI, and the negative impacts that lead to, like, AI psychosis. And I worry about people who have mental health issues potentially being drawn down these kinds of rabbit holes, becoming convinced that this stuff is real, right? I think it comes back to, I guess, the two things that Kay mentioned, which is understanding what AI is really good at, and understanding what AI is not so good at. Like the example of the surgeon or the doctor saying, "Oh yeah, we just use AI to tell a patient that they've got a terminal illness." And Kay's point being like, "No, that's what humans are great at." Like, that's the thing patients need, whereas AI is really good at spotting patterns, so analyzing, you know, scan data or a mass or whatever that would be, because, yeah, she talked about hallucinations, bias, accountability, responsibility.
Yeah, I think there's an element of, like, really, because it still feels like we're at the cusp of the AI revolution. If AI is going to accelerate, then the safeguards have to accelerate too. I get not wanting to stifle innovation, but we can't cut corners with regards to safety, because the results could be pretty catastrophic. The safeguards are important. Yeah, and actually, like, I mean, I'm a big fan of motorsport. And actually, like, a lot of motorsport is built on regulations, and actually innovation comes from those regulations, because you have to then be creative. Now, Sam, in a full-circle moment, I actually asked Kay about
whether there is anything we should be worried about regarding trustworthy or ethical AI. And I think you might find her answer a little familiar. I think one of the things that really worries me is the extent of hallucinations and deep fake evidence that's coming into court. So, for example, we're seeing deep-faked medical reports in personal injury cases. We're seeing deep-faked pictures of car crashes for claims, and it's really difficult to control. And at the moment, the onus is on us lawyers to go through the whole set of legal documents from our side and the other side to seek out these hallucinations. You have to have your spidey senses out for fakes, and we humans are not really built to see that. Okay, that brings us to the end of Technology Now for this week. Thank you to our guest,
Kay Firth Butterfield, and of course to our listeners. Thank you so much for joining us.
Our social editorial team is Rebecca Wissinger, Judy-Anne Goldman, and Jacqueline Green, and our social media designers are Alejandra Garcia and Amber Maldonado. Technology Now is a Fresh Air production for Hewlett Packard Enterprise, and we'll see you at the same time, same place, next week. Cheers! Bye, y'all! [Music]



