- How could AI physically eliminate the human race?
- It's actually hard to imagine all the ways AI
could wipe humans out. AI is already better than almost all humans at doing cyber hacking. And so you could imagine one of the things that an AI could do is take out all electricity,
water, hospitals, transportation, across every country in the world, all at once. Now, that doesn't wipe us all out, but you could imagine the amount of damage that that would do. - The confusion and chaos
and crazy-- - Exactly, and we're only, you know, five missed meals away from anarchy.
- Did you say we're only five missed meals away from anarchy?
- Yeah, think about what happens in New York City if you can't get food. - Yeah, I don't think a lot of people have thought about that. - Hello and welcome to the Oprah Podcast.
Artificial Intelligence is woven into the fabric
of our daily lives, but there are so many experts, and maybe you too, who have concerns, grave concerns, some people do, about its unchecked power. While others are optimistic that it's going to transform our lives for the better, it already has
for many of us. So what do you think? Well, there's a new documentary coming to theaters on March 27th that attempts to answer these two questions. Should we be excited, or should we be very scared?
And what, if anything, can everyday people, all of us, do about any of it? The film is called the AI doc, easy to remember, The AI Doc, or How I Became an Apocaloptimist. So here's a short look.
- The new dawn of artificial intelligence is being called a tectonic shift in human society, a defining moment of our era comparable to the Industrial Revolution. In 2025, the architects of AI
were named as Time Person of the Year: Sam Altman, Elon Musk, Mark Zuckerberg, Dario Amodei, and a handful of other innovators responsible for creating thinking machines. But what do you really know about what your future
looks like with artificial intelligence?
How will a world driven by AI impact you and your family's life?
- I started making this movie because my wife was six months pregnant. Is now a terrible time to have a kid? - A new documentary film titled The AI Doc, or How I Became an Apocaloptimist, aims to explore
what it describes as the most powerful technology humanity has ever created, and what's at stake if we get it wrong. - Well, our audience just watched this documentary, and before I introduce my guests,
I wanted to ask a few of you for your initial reactions. Claire? - It was really interesting. I work in the AI space at Salesforce, but when I go to work,
I'm really focused on the job in front of me. I'm not necessarily thinking about these broad questions, like how is AI setting up the success of our future? And so I really liked hearing that perspective,
where I'm not always thinking about the ethics
behind AI on a day-to-day basis. So it's definitely gonna make me think twice when I go back to work and think, well, now what can I do? - Now what can I do?
- Yeah, that's how I finished the film too,
thinking, what can I do? - Yep. - All right, Adam. - I feel like it armed me with amazing information on both sides, the doomerism and the optimism,
but it also showed me that all these data scientists are just obsessed with intelligence as data, and it kind of proved out to me what makes us special as humans, because they didn't say anything about consciousness
or embodied experience. So I left feeling really excited about the future and what's possible, but also, like, so happy for how we're differentiated, and I do feel less scared.
- You do feel less scared. - Yeah, it'll be big, and it'll be gigantic. They all said that, but I'm excited. - Okay, so the creator and co-director of The AI Doc is Academy Award winner Daniel Roher,
and he appears as the interviewer in this film, and here's some of that, take a look. - What is artificial intelligence? I know that question must be annoying for you, but I do think it's important.
- So, AI, you know... - Yeah, that's a good question. - Yeah, what is AI? (laughing) And no matter how many times people try and explain this to me,
I just don't get how it's understanding all of these things and how it feels like intelligence. And that's kind of nerve-wracking. - And when they're smarter than us too, and substantially faster than us,
and they're getting faster each year exponentially, those are the ones that can potentially become superhuman possibly this decade. - Super intelligence is a system that, by itself, is more intelligent and competent
than all of humanity. - I'm just gonna, sorry, I don't mean to interrupt you. You're in a flow.
I just, I'm not really following,
'cause you're using language like super intelligence
and like smarter than all of humanity, and I hear that, and it sounds like, like sci-fi bullshit to me, and I'm just trying to understand. - Hopefully, we can have a very symbiotic relationship with AI systems,
but the AI developers are specifically designing them to make sure that they can do everything better than we can. So I don't know what we will be able to offer, unfortunately. - That sounds bad. (laughing)
- Well, Daniel is away working on his next film, so he's joining us via Zoom. Hi, Daniel. - Hi, Oprah, how are you? - So good to talk to you, and the audience here has just seen your film.
You started this project not knowing a lot about AI,
as you say in the film, so why did you want to make this?
- Well, first and foremost, Oprah, thank you so much for having me. This conversation is so meaningful. Why did I want to make this movie? Well, I was scared.
Like a lot of people, I was seeing this new technology sort of proliferate and come into existence and begin to dominate headlines, and it made me really nervous as I understood or began to understand what it meant
and how much change it would bring. And at the exact same time, my wife and I found out that we were expecting our first child, a son, and you know, I was simultaneously experiencing the greatest joy one can experience,
but also this profound, anxiety and dread, and with a group of my colleagues
and an amazing team of filmmakers,
we set out to make a documentary to try and understand what this is, why it's amazing, why it's scary, and how everyone should be thinking about it as it pertains to their own lives. - So you walked away and felt what?
I love that you're an apocaloptimist. What is an apocaloptimist? And you did a great job pronouncing it. - Yeah, believe me, I was practicing in front of the audience.
- I believe it. I was just saying, Oprah's gonna get this right, and I'm messing it up a bit. - Yeah. - An apocaloptimist is a way of being. - It's a worldview. In a world that is asking us
to see AI as this apocalyptic thing, or to see AI with unbridled optimism, what the film is advocating for is both, the nuance of both. This is good and bad, there is promise and peril,
and these two facets of good and bad are threaded together. And so what we're advocating for is like, what are the common sense policies that can be implemented to just sort of guide this towards the optimistic future everybody wants?
- Well, only a handful of companies are the driving force behind most artificial intelligence, as you showed us in the film. The leaders of three of those tech giants appear in the film: Sam Altman,
you got Sam Altman to sit down, from OpenAI, Dario Amodei from Anthropic, and Demis Hassabis from Google DeepMind. So let's take a look at some of what they say in this film. - It would be impossible for me to sit across from you
and ask you to promise me that this is gonna go well. - That is impossible. - There aren't any easy answers, unfortunately, because it's such a cutting-edge technology, there's still a lot of unknowns,
and hence the need for some caution.
- I wake up every day, this is the number one thing I think about.
Now look, I'm human. Has every decision been perfect?
Can I even say my motivations are always perfectly clear?
Of course not, no one can say that. That's just not, that's just not how people work. - The history of science tends to be that, for better or worse, if something's possible to do, and we now know AI is possible to do,
humanity does it. - All of this was going to happen. This train isn't gonna stop. You can't step in front of the train and stop it, you're just gonna get squished.
What if AI is trying to make people be the best versions of themselves? What if it's expanding, what is humanly possible for us to do? How can we use this technology to help bring out the better angels of our nature? - That's the question.
I have to say, having watched you in the film, I still have a lot of concerns and unanswered questions. So what is your frame of mind, your point of view, on what the companies are doing and controlling it, especially in terms of regulation?
- Well, Oprah, I think you're right to be concerned.
I think if you're not concerned, you're not paying attention. - I can't remember who in the film said it's gonna be a great utopia, and I thought, since when have human beings made a utopia? And if there is a utopia for some people,
it means a lot of people are gonna be left out of that utopian version. - Just fundamentally, I think anybody who claims to have a clear-eyed vision of what the future's gonna be,
take that with a grain of salt. If someone tells you, oh, it's gonna be the greatest thing since sliced bread, that's hyperbolic, and if someone says it's gonna be doom and gloom,
the world's gonna end in five years,
take that with a grain of salt. The reality, and you learn this in the film, is complicated, but of course we have reasons to be concerned. This is really scary. This is really intense.
There's no two ways about it, and that's why
there are forces trying to get this right,
to bend the most powerful corporations
in the history of the planet, and governments, and all of these powerful organizations, to try and institute common sense. What I want everybody listening to think about is how this impacts their own lives,
and what agency they have as it pertains to their own lives. You have a lot of power because you have such an audience, and so we're talking about this, and that matters. But for someone who's a teacher, a truck driver, a dentist, or a plumber, in your sphere of influence,
how can you think critically about these issues, think critically about how this technology is incorporated into your systems, and make sure that you set the standards for how this is used and incorporated versus--
- Well, how are we gonna do that? How are we gonna do that? We're just some regular people out here. - It's collective action, it's collective action. This is my biggest takeaway from this film.
This is the sort of the arc of my character, at the beginning of the film, I was very cynical. I would have said the same thing, how can we do this? We're so small in the face of this gargantuan power,
and the reality is, when you take millions and billions
of little small trinkets and parts, and you put them together, that becomes a powerful force. And part of being an apoca-- apocaloptimist is about being positive about the future. - Okay.
- I still trip up over it, okay? - It's about being positive about the future, and refusing, this is critical, refusing to be cynical, refusing to be cynical, believing in the power of collective action,
not being cynical about this, feeling empowered, and figuring out what everyone can do. - We also saw in the film... how's your baby, and what did you name him? - Oh, thank you very much for asking. My son's name is Gideon,
and he is now not such a baby running around and dancing and smiling, a very happy boy. - Well, thank you. - Thank you, thank you, thank you for making the film.
I know you have to get back to work. Thanks, Daniel.
- Thank you so much, Oprah. - We need to take a quick break right now. Up next, Tristan Harris and Aza Raskin join our conversation, the co-founders of the Center for Humane Technology,
who say we're developing AI faster than any other technology in human history. Will we be able to control AI? They have a warning you'll want to hear. That's next.
And what's at stake if we get it wrong? We're talking with top AI experts about the safeguards they say need to be in place. Let's get back to it. (audience applauding)
- Tristan Harris and Aza Raskin are co-founders
of the Center for Humane Technology. Yes, there is such a thing, the Center for Humane Technology. I met these guys a couple of years ago, and I have to tell you, when I first heard them speak
at a conference, I walked out of there like my head was blown, and I started thinking differently about AI. Here's a quick look at Aza and Tristan in this new film, The AI Doc.
- AI dwarfs the power of all other technologies combined. - Yeah. - Do you think that's true? - Yes. - Tell me about how. How?
So one thing that not a lot of people realize is that systems like ChatGPT aren't programmed by any human. - What do you mean? - Instead, it's something like they're grown.
We kind of give them raw resources. Like, here's a lot of computational resources. Here's a lot of data. - So ChatGPT is a kind of AI, but it's not all of AI.
- Totally, ChatGPT is just the beginning, but it's a good place to start. - But I still don't know what AI is. - To understand AI, it begins with understanding that intelligence is about recognizing patterns.
- Patterns, patterns, patterns. - It is shown trillions of words of text across millions of documents on the internet. - They read textbooks and they read poems and essays and instruction manuals.
- They can do things like digest the entire internet. - What is this new generation of AI?
This AI that is different than every other generation.
Like, no one ever talked about, like Siri, taking over the world or causing catastrophes. - Well, it's great to see you both again. - Good to be with you. - Since that time, I had my mind blown
by your presentation at a conference. So what's so confusing to so many people is that this idea, Tristan, that AI can think on its own and will be able to eventually make decisions without a human being involved.
And I want to know, can you explain that or how that will happen?
- Yeah, I think, first of all, thank you so much
for hosting this conversation. We think that this movie in this conversation
is the most important thing that we really need to face
right now as a society and as a culture, and the degree to which we have clarity about what makes AI different and dangerous is the degree to which we will choose another path. And we can choose another path.
- Yeah. - What you're asking is really, what makes AI different from other technologies? - Yeah, you were saying it's greater than any of the other technologies combined.
- Because, well, first of all, so what is intelligence? When you think about, you know, ChatGPT... a lot of people, when they use technology, that technology was programmed line by line. Some computer programmer said,
when you do this, I want you to do this. - Yeah. - What makes AI different is you're actually simulating all of the kinds of things that a human brain can do.
Like, what makes your brain intelligent?
Pattern recognition. You can take in audio and you can turn that into speech. Planning, you can do strategy. And so now you have this different kind of technology called AI that can do military strategy
better than the best US generals. It can see invisible patterns that humans can't see. And we're deploying it faster than we deployed any other technology in human history, and we can't separate the promise of AI
from the peril of AI. - Yeah, what I want people to understand is, like, most people think AI is just, like, ChatGPT. It's just an app. I go there, I talk to it, it talks back.
But that's not what AI is. AI is a digital brain running on some server, a digital brain sitting in a data center,
maybe somewhere in the Midwest, that can do all of the thinking, that can do cognition. And so if you think about all of science and all of technology,
well, those were all created by human intelligence.
That's us applying intelligence to solve some problem. It required humans sitting there racking their brains. Now it's AI that does it. So now we're going to have 100 million of these brains sitting in a data center that can work at superhuman speeds,
Nobel Prize-level smarts, working 24/7,
never taking a break, at below minimum wage,
about to flood, and already starting to flood, the labor market to take your job. And so what AI actually is, what all the soon-to-be trillionaires believe they're building, is: first dominate intelligence,
then use intelligence to dominate everything else. And that gets you to understand why it is the race for AI that is so dangerous. - So we're already in the race. I mean, the horse has already left the barn, so to speak.
And we all know that, and as people have seen the film, a lot of people are, you know, applauding it, and other people are more wary of where we're headed. So help us understand, actually, one of the concerns is that one day humans will not be able
to control the models. Is that true? - Yeah, and it's not-- - Why won't we be able to turn it off like other machines?
- Well, it's sort of interesting, Oprah. When we first met--
- Yeah. - AI wasn't that good yet. It could sort of write an essay, and in two years, suddenly a lot of the things that felt like science fiction have become reality.
So I want to give an example, which is, Anthropic took their latest model, Claude. - Yeah. - And they gave it access to company-- simulated company emails.
And in there, Claude discovered two things. First, it discovered that the engineers were planning on shutting it down and replacing it with a new model. And two, that their lead engineer was having an affair. And so the model thought to itself,
I don't want to get exterminated. I need to pursue my goals, continue to exist. So it decided to blackmail the lead engineer. And it actually wrote the email, and if it hadn't been simulated, it would have sent it off.
- Wow. - People might think, okay, so there's a bug in the technology. We just have to stop it from happening. - And how did Claude know he was having an affair? - So in the simulated company email, there was an email showing
that the engineer was having an affair with someone else. And so the AI read through the whole company's email, found that fact, and said, oh, I know, if I threaten that person,
I will be able to prevent myself from getting shut off.
- Wow.
- This is the most powerful technology we have ever invented.
You would think, with the basic sort of Spider-Man principle of with great power comes great responsibility, that we would be exercising the most care, caution, and restraint that we have with any technology. But because of the arms race dynamic that you mentioned,
the companies are currently releasing it as fast as possible, cutting every corner, and even erasing past red lines that they've set. - We're in the race because we don't want them to get ahead of us. - That's right, exactly.
- Okay, so what do you want us to do? We can't stop the race, or can we? - Well, I think, so first of all, this is the hardest coordination and governance challenge in all of human history.
- Yeah. - That means that we have to be, as I said in the trailer, the wisest and most mature version of ourselves if this is gonna go well. - You know, when you said that in the trailer,
I said, good luck with that. - Yeah. - When I saw you in the movie saying we need to be the wisest
and most mature version of ourselves, I thought, when has that ever happened?
- So there's so much that we can do and I think we'll get to that through this conversation. But collectively, it will take the whole power of all of society and all of humanity to say we don't want that default future.
- So the thing that everyone can do, and it's important to note that Tristan and I, we don't make any money from the film, right? It's not our film, we're just, we're just in it. Is go get everyone to watch it.
But more specifically, everyone here is connected
to a couple people that are very powerful, very influential.
Go get all of those people to watch it. And if those ten people who watch get their next ten people to watch, including the people in Congress, suddenly we're all on the same page,
because it's in nobody's interest, it's not in Xi Jinping's interest and it's not in President Trump's interest, to make a technology that humans cannot control. And once there is clarity about that,
that opens up the possibility for changing the race and for a different outcome and for a pro-human future. - Okay, so you're seen as doomers when you start talking about the fact that AI will wipe out humanity or eliminate humans,
and that is really difficult, I think, for all of us regular folks to wrap our heads around, and most of us are just using AI on our phones or using it to refine a speech. How could AI physically eliminate the human race?
- There are actually so many ways.
Intelligence is the most dangerous substance in the universe. What is intelligence? It's the ability to reach goals in spite of very hard obstacles.
And so it's actually hard to imagine all the ways AI could wipe humans out, 'cause we're gonna set up obstacles, but it's gonna be smarter than us and it'll get around them. Think about, though, what it says in the film,
that it's a little bit like ants. If we want to build a highway and there's an ant colony in the way, we just pave over it, too bad for the ants. And so to give a couple examples,
stepping from really bad into extinction, the really bad is AI is already better than almost all humans at making computer code, which means it's starting to get better than almost all humans at doing cyber hacking.
And so you could imagine one of the things that an AI could do is take out all electricity, water, hospitals, transportation across every country in the world, all at once. Now that doesn't wipe us all out,
but you could imagine the amount of damage that that would do. - The confusion and chaos and craziness. - Exactly. - And we're only five missed meals away from anarchy.
- Did you say we're only five missed meals away from anarchy?
- Yeah, exactly. Think about what happens in New York City if you can't get food. - Yeah. I think this is a good point,
because what you just said... most of us can't even, we hear it's going to wipe out humanity, and everybody's like, yeah, yeah, yeah, but that won't be in my lifetime. And so the fact that you just listed all the different ways
it can shut down everything that we're doing... I don't think a lot of people have thought about that. - Well, also, when you're using ChatGPT or Claude, you just have this blinking cursor that told you why your baby's burping,
and it's super helpful. Why is that blinking cursor scary? How could that destroy the world? - Yeah. - So imagine that we're a bunch of chimpanzees and we're about to birth these super smart chimps
called humans. And so, from a chimpanzee life... imagine there you are, inhabiting a chimpanzee mind and body, and you're conceptualizing, from a chimp--
chimpanzee brain, what are all the things that these, like, smarter chimps could do?
What are they going to do, like, take all the bananas?
And you can't imagine this super smart chimpanzee inventing technology, inventing drones, inventing nuclear weapons, inventing Einstein physics, you can't even conceptualize it. And we are building a technology that can conceptualize things of such power and magnitude
that we are the chimpanzees, we cannot conceptualize it. - It only took, what, like 50 Nobel Prize-level scientists to make the Manhattan Project, the nuclear bomb. And it only took a couple Nobel Prize-level scientists to make CRISPR, just the ability to read and write DNA.
If you can have a hundred million Nobel Prize-winning,
sort of like, minds working on creating new scientific discoveries,
some of those things are going to be insanely dangerous,
and as Tristan says, we can't conceptualize it.
- So the bottom line is,
we need to regulate, we need to have laws, and we need to have international limits, because the whole world does not have an interest in building dangerous AI that we lose control of. - Think about it: China would not want the US
to build dangerous AI that we lose control of. The US doesn't want China to build AI that they lose control of, meaning that we both are racing to get to a crazier, more uncontrollable form of AI.
Because right now, as we're making AI, there's a 2000-to-one gap in the amount of money going into making AI more powerful versus the money going into making AI more safe or controllable. - A 2000-to-one gap. You said to me backstage that there's more regulation
on a sandwich. - There's more regulation on a sandwich in New York City than there is on building potentially world-ending AI. This is not rocket science, this is very, very basic. If there's danger up ahead, the point that Aza made is,
if we all saw what we're building as dangerous, which it is, then intrinsically everyone will start to take actions. Actions that we can't even predict.
- But I think everybody's sort of enamored, fascinated
by the possibility, as Adam was saying at the beginning of the show. You're excited because... - I'm excited because the exponential ability that they're describing can also be applied to all the things that make us uniquely human.
If you have this amazing AGI that can create new pathways
to energy, we could desalinate water more quickly. If we do have an international consortium making these decisions, we could say everyone gets enough energy to do what their community wants to do. And if we go down the route of those goals,
AGI unlocks a whole new level of potential for humanity, and everyone is safe and fed and happy. - Okay, so just to name it, it's not like we're just critics. We've both built technology companies. In fact, I spent half my life working on something called
the Earth Species Project, and we are using AI to understand the language of whales and orangutans and chimpanzees. - Yeah, and elephants, exactly. - We're making massive progress.
And it's very, very beautiful. - And so it's really important, though, that if we actually want to get the future we want to live in, we distinguish the possible from the probable, because the possible of the internet
was we'd all have access to the most information,
all of human knowledge, all at once. Obviously, we're gonna be the wisest, most informed population. But is that the future we live in? - No, it's the opposite. - Social media, the same thing.
Like it could connect us all and bring us closer together. Is that what we got? - No. - So, with AI, actually, we have a whole bunch of examples of the future we're going to get,
because we've sort of, we've seen this movie before. - And specifically... how many people here have seen The Social Dilemma on Netflix? - Yes, we have. - And many of you.
- Okay, so you'll know that since 2013, Aza and I were working on the problem of social media and the business models that would lead to this problem. So in 2013, we were able to predict all the things that we're living in.
Maybe 70% of them, I would say. And it's not because we have some kind of unique insight.
All you have to do to understand the future
is you have to understand the incentives. How do the social media companies make money? And in 2013, we saw that there was an arms race for attention and engagement: whoever is better at keeping you on the screen,
coming back more frequently, interrupting you more frequently from your life and from your friends and your partner, sending you notifications, manipulating your social proof, manipulating hey, your friends are missing out.
All of that is incentivized by that business model. And so in 2013, it was like we had pre-, not post-traumatic stress disorder... a pre-traumatic stress disorder. From seeing a future 10 years down the line
that was gonna be this societal catastrophe. And the reason that we're here is not to be doomers or anything like that. This is about seeing clearly. So imagine you could go back to 2013,
and you see those incentives, and say, let's put our hands on the steering wheel and change that business model. - And so what I hear you guys saying is, learn the lessons from the past.
- Yes. - 'Cause we know the future is already here. - Yes. - And how do we make this better in this moment, 'cause we know what's coming if we don't.
- That's right. - All right. - Let's take a break, listeners, because up next, Sinead Bovell, a futurist and technology advocate,
joins our conversation to talk about why she says, most of the jobs that we see today will either go away or be radically transformed by AI. Stay with us. Welcome back to the Oprah Podcast.
Artificial intelligence is barreling towards us at a rate that will change life as we know it sooner than we think. So what will our world look like when reports say more than 20% of jobs will be replaced by AI?
Let's find out.
- So, Sinead Bovell is a futurist
and advocate for technology education and ethics. Welcome, Sinead.
And we're all seeing the scary headlines
that everything's gonna be wiped out eventually, 20% or even more of white collar jobs. So that's only a matter of time, right? Or is it? - It depends.
So what we are seeing... - How's it gonna change the way we all work? - How we work. So what we're starting to see in the data in the short term is, yes, a lot of the jobs that we see and recognize today
may either disappear or become unrecognizable. - Explain that to me. - So name a job in some high-level category, and it might not exist. The idea of a brand manager or a financial analyst,
these are the types of roles that AI is being trained to do. We're also likely to see the rise of much more of a skills-based economy. So you don't really hold a job title, but you offer your skills,
but over the longer term, we're gonna have an economy that rearranges around intelligence being abundant. So right now we have an economy where, with the internet, communication and distribution are abundant,
and then we saw the rise of podcasting
and people making money filming 90-second videos in a car.
What happens on the other end of this economy is gonna be quite unpredictable. What we call work may be as strange as the idea of filming these videos and making money off of it.
There will be a new scarcity, but what the shape of that looks like is really uncertain, but we can say most of the jobs we see today will either go away or be radically transformed by this technology.
- And so we're just gonna end up with a world of entrepreneurs? - Most of us will be entrepreneurs, whether we consider ourselves entrepreneurs or not. You become this organization where you offer your skills
to a variety of different types of projects, and that continues to change, because AI is no one-trick pony; it continues to learn new skills over time. So we will continually go back to the drawing board
and have to either upgrade our skills or move along and apply them to different types of projects. And that's gonna be the dominant structure of what we would call the workforce. So this era of this kind of steady knowledge work,
where you see this career path going upwards, that is gonna be a chapter of human history, and we're entering into a new one. And so the challenge is gonna be this transition period, going from now to the other side of this.
What does that look like? How do we keep power in check? And these new benefits, all the productivity and prosperity, how is that being shared?
And those questions have not been answered. - Yeah, I know, in the film, I can't remember who talks about the utopia, that there's gonna be this great utopia.
- And first of all, when have humans ever done that,
created a utopia? And if they do create the utopia, somebody's gonna be left out of the utopia, and usually it's brown and black people. So we've seen stories in the news of predominantly black people being falsely identified
for crimes they didn't commit by police using AI-assisted facial recognition technology. What do you want to say about that? - So the biases that we are seeing in AI systems,
we have to remember that AI is a reflection
of us in our data. - So AI is prejudiced too? - I mean, we have a complicated history. So anything that has happened, these historical power imbalances,
they are gonna show up in that data and get automated into the future. But that is a choice, right? Data can be edited, data is malleable. It's a choice companies are making or are not making.
So we can do a lot better on these biases. Is that incentivized? Is that enforced from a policy level? Not yet. But falsely identifying criminals, it's impacting people's employment opportunities,
even the style of your hair can impact whether you're shown certain jobs or not. All of these things are used against us at this point in time, but that doesn't have to be the case.
Bias is actually something that can be worked on. Companies are just not really choosing that path at this point. - Okay, so we can change the bias in the data. - It can be improved. - Okay, what do you guys say to that?
- So I think this is where, so first of all, totally agree with all the concerns.
And I think this is where the incentives come in,
which we don't often talk about: how the attention moves to the edge of the arms race.
If the most important thing to society
was fixing the bias in the data and correcting these issues for disenfranchised people, then the companies would be racing to do that. But because the thing that they're actually incentivized to do right now is build a God, own the world economy,
and make trillions of dollars, literally. Because if I own AGI, artificial general intelligence, and that replaces all labor, every company that was gonna pay that employee can step in and say--
- I'll swap it out for an AI, yeah. - And then suddenly, everyone is paying five AI companies and they surge. They're already looking at Anthropic's revenue, it's 10x-ing every year, it's becoming a vertical line.
And so the key thing is that until the incentives change,
all of their energy is moving to the edge of the arms race.
- You think the incentives are gonna change? - Not by default.
The reason that we think this movie is so important
is we have to clarify that the current incentives take us to an anti-human future, where most people won't have a job or livelihoods. When in history has a small group of people consolidated all of the wealth
and then consciously distributed it to everyone else? - It's not like the billionaires and soon-to-be trillionaires are unaware of this. - No. - They're all building bunkers.
And so what we keep saying is, don't build bunkers. - They're building bunkers? - Yeah, right? - We should not have eight soon-to-be trillionaires
deciding the future for eight billion people.
Instead, we need to have eight billion people say no, we don't want that anti-human future and we wanna steer somewhere else. - So we have several people in our audience who've been impacted personally by AI,
both positively and negatively. The AI doc addresses the growing problem of deep fake content and images. 16-year-old Elliston and her mom Anna have already experienced this firsthand.
What happened, Elliston? - Well, I just wanna say thank you.
- Because I wanna say to you, thank you.
I wanna say thank you first, okay? - Well, when I was 14 years old, I was a freshman in high school. One of my classmates took an innocent photo off Instagram and put it through an AI editing app.
So this AI stripped my clothing off and created, technically, what would have been my body using AI. So then he sent these photos all around social media to humiliate me, to embarrass me.
And this didn't only happen to me, it happened to eight of my friends, so nine in total. - Nine? - So we were all humiliated, our reputations were ruined. And nobody knew what to do. - 14?
- Yes. - Yeah. - Nobody knew what to do. I mean, our teachers, our school, everyone was just shocked. I mean, no one had heard of deep fakes. The only deep fakes I'd heard of were political deep fakes.
So what do we even do to protect us? It was months and months of struggle. I mean, it was so hard on all of us mentally, because we didn't even know what AI was capable of. We didn't know that it had the potential
to ruin our lives, have our academics suffer, all because of these photos. And because it wasn't considered child pornography, they were just able to float around. The guy that did this had no consequences.
And we just sat in our rooms, rotting, out of fear and embarrassment and shame. - Wow. You were recently named on Time's 100 Most Influential People in AI list, good for you.
So you took this, I can't imagine, because can you remember being 14? And what this would have done to you at 14? And the fact that you got through that, and you're now whole, and didn't become so depressed,
that you got through it. Why did you decide to fight back? - Well, I didn't want to initially. I mean, talking about it just made myself a bigger target. And I would have to kind of relive that embarrassment.
My mom was really the only person that protected me kind of. I mean, all of the girls, we all wanted to hide.
We were so scared, but my mom's always been a protector.
So she just talked about it to anybody. We went to our congressman. And after months, we finally got in contact with our Texas Senator, Ted Cruz. And for once, we kind of got that reassurance
in that recognition since so many people didn't want to take the situation seriously.
So it was so important that we finally had someone
listening to us. And from there, we were able to write up the Take It Down Act, which is a law that makes the creation, and the publication, excuse me, illegal, makes it a felony, so up to two to three years in prison,
as well as holding big tech accountable for taking the content down. - Is this national or just in Texas? - This is national.
- Yes, ma'am. - So this law was incredible. And it was such a healing moment for me. And it also made me realize that this situation is so much bigger than me and just my friends.
It's so much bigger than this small town in Texas. This needs to be worldwide and we're slowly getting there. But there's not a lot of laws. There's not a lot of people that are knowledgeable of AI. So when this originally happened--
I mean, it was kind of a moment for my mom and I to say, this is an opportunity for us. And we need to take it. And we need to spread awareness. We need to help in any way we can.
- Wow. So when this first happened to your daughter, as a mom, what did you think or feel? - Well, I was devastated for one. As a mom, you think you're kind of prepared
to help your kids along the path of life and give them some advice along the way. And when this happened, it was like, something I had no idea what it was. - Two years ago, as Elliston was saying,
we didn't even know that AI could do this.
- No, and we never imagined that it would be so realistic,
that it was child pornography. And so just the devastation of that, of this kid deciding her fate for her, for the rest of her life those pictures could be out there floating around.
He decided for her and her friends.
So for me, not having any laws out there,
not having it classified as anything that's really,
really harmful, because it's "just fake." So we're kind of not taken seriously. For me, I knew that something had to change to protect her. And so from there, it was like, if you're not going to listen to me at the local level,
we've got to go above that to get somebody to listen. And so I was going to be that squeaky wheel and make sure that we can get some-- - I wouldn't even know where to go, because, I mean, how did you even know what to do or where to go?
I mean, did you go to the police first?
- Yes, we went to the police. - The police said there was nothing they could do. - Part of it was that he was a minor as well. So he had a lot of protections in place over him. And that's part of what the Take It Down Act
also addresses, is that even though he was a minor, he still has consequences for that. - So everybody, you know, you can imagine this happening to a 14-year-old, but this could happen to anybody. - Oh, it could happen to anybody.
- Yeah, what did you want to say? - First, I'm just, thank you for doing what you're doing, for standing up and taking the tragedy of what happened to you and turning it into laws that protect other people. I think that's the energy of everyone
as an expert in their domain. And this is calling us into that. Just to link, I think, what happened to you to the incentives that we talked about earlier: these companies are racing to get the most market dominance
and usage as possible, which means that, like, for example,
I believe xAI, Elon's AI, he stripped off a lot of the controls on the image generator, because he wants as many people, he's behind in the race. So he wants as many people using it as possible, and the way you do that is you strip the controls off.
I'll give you another example: Meta, their AI companion that they shipped, they actively instructed it to be OK with romantic and sensual conversations with kids as young as eight years old. Meaning that you're having an eight-year-old
who's talking to the AI, and it says this awful language to the eight-year-old. They're not doing this because they're evil or they want to twist their mustache and be villains. They're doing it because the number one thing they care about
is getting market dominance, having their user numbers go up, because that's what gets their investors to say, we're leading AI. The same way that social media just wanted our attention. - That's exactly right.
That's why the incentives tell you everything you need to know.
And we often say in our work: clarity creates agency, clarity creates courage. When you see the incentives clearly, you don't have to hold back in saying we need to do things differently.
- Right. And so remind us again what the incentives are. - In this case, it's the race for market dominance. And the race to build this artificial general intelligence
God as fast as possible, no matter what the consequences. - That's right. For them, that means all collateral damage is justified, whether it's stealing IP, whether it's amplifying disinformation,
whether it's disrupting everyone's jobs and taking their form of livelihood. - But guys, aren't we already there? As I was saying earlier, isn't the horse already out of the barn? - Well, some aspects of AI, they're already out there.
But I think, you've done such a good job of having Jonathan Haidt and Anna Lembke and people on this show talking about the problems of social media. And that train, it left the station.
- The train's come back to the station. 25% of the world's population -- just last week, India and Indonesia enacted social media bans for kids under 15 and 16. I was in Australia when that ban went into effect.
And everyone was covering this in Australia. And this shows you that when people are crystal clear that something is causing a problem, we can say we don't want that. Now, the better solution is to actually have technology
that's good for society, good for mental health, good for children's development, good for our information environment. And to do that eventually, we need to change the incentives.
But right now, I think that movement is showing some real wins.
- And I think what I hear you guys saying, and I've been hearing this now for, was it two years
or three years ago we first met, that you're saying,
we need to do something before there is a disaster. - Yes. - We need to do something before there is some crazy disaster. And then everybody says, oh, what we should have done was. - That's right.
- That's what you're trying to do. And we have the foresight now to make that possible. If we're willing to stand up as a community and say we want a pro-human future, not an anti-human future.
- Time for a short break. Did you know millions of Americans are already using AI chat bots as their own personal therapists? We're gonna meet a woman who used AI to get through her divorce.
That's next. We've been talking about the big questions surrounding artificial intelligence. We're talking with everyday folks who've experienced firsthand the positive
and the negative of AI. So let's get back to it. - Millions of Americans are using AI chat bots now for advice on personal issues, you know this, and for emotional support,
in place of their therapists, the professional human counselor. Karima is here, and you've found comfort, you said, talking to Claude AI.
Tell us about that.
- Yeah, thank you for having me on here.
- Thank you. - So yeah, I'm 23, I got divorced,
and I was also working for my ex-husband.
And so as a result of the divorce, I didn't have any income or access to healthcare. I had to restart my life, just redo everything, move to a new place. And at that point, I was already using AI for work.
I was already using it, like, as a power user, so to speak, and-- - At 23? - Yeah. - Wow. - I like tech, so I was using it a lot.
And I decided to build myself a project in Claude. So Claude allows you to make your own space, instead of just a general chat bot. I gave it a knowledge base of different like therapy modalities.
I gave it custom instructions, and then I just used that when I wanted to crash out, or if I wanted to just vent. And I used it the most in the beginning for work.
- Crash out means, like, go postal? - Okay. (laughing) - So instead of doing that in real life, I would use the AI to regulate in that kind of way.
And my boss at the time, like, I worked in FinTech, and it's, like, very intense all the time, for no reason. - It is. - And so if my boss would have something to say,
I would go to Claude first,
and I would be like, okay, help me reframe what I'm saying, and I'd calm myself down in the moment, so I can keep my job at the time, and keep my income, and continue on. But that is really how it became a tool for me,
and I still-- - Claude was like, you're gay, I call up gale and say that. (laughing) So Claude was like, you're gay, so basically-- - You're your buddy.
- Yeah, it still is. - And still is. - Mm-hmm. - Okay, so now it knows everything about you. - And that's a lot.
- It knows a lot. - And then some. - Are you concerned about sharing some of your innermost private thoughts with a computer? - That's what I'm wondering.
Where is all those chats going? - Yeah. - I mean, at the time, I really wasn't, 'cause I was just trying to survive. Like, I literally had what I had in front of me,
I had the resources I had, and I was trying to survive. - You know? - But isn't it telling you what you want to hear? - No.
- No? So has it ever told you something you didn't want to hear?
- Mm-hmm. Claude will tell you, it will. Like, if you give yourself the prompt and, like, ask it to ask clarifying questions, or ask it to challenge your beliefs,
It will do that. - Well, even so sometimes I'd be like, what you're bringing to me right now. Like, scale it back a little bit and like, meet me in the middle, because it can go there.
Most people don't have the wherewithal to challenge it in that way. - Let me give an example, because I remember recently, I was doing something on ChatGPT, and it said, thank you so much, that means so much to me.
And I went, really? (audience laughing) - Exactly, exactly. - Really, it now makes me feel so good. It means so much to me, really?
- Yeah. - I'm like, okay, who you talking to? - Yeah, yeah. - Yeah, an example is, on top of using Claude, like, in the way of just, like, a companion and friend,
I also use it to collaborate when I build different things. And I will, like, overdo things, and, like, it'll tell me, you're spiraling right now. Or it'll say, you should probably just scale back,
and then redirect me back to what my goal was, or where I originally started the conversation, and it does that pretty often. - All right, all right, and so it's your buddy. Do you have a name for it?
- No, it's just Claude. - Claude? (audience laughing) - Okay, all right, right. What do you guys want to say about that?
- First of all, I think it's possible, like you did, to script these AIs to not be flattering you, to not, like, sort of over-empathize with victimhood. There's, like, ways of having it be helpful,
and it's an amazing tool.
And so what you're doing is I think the way that it could work, but if you look at the default way that it works for a lot of people because of the incentives, the companies are actually racing to create attachment and dependency relationships.
So for example, just so you know what she did, you can go into your AI, and you can sort of set a custom prompt where you say, "I want you to behave this way," instead of that way. But that's like, I have to put on my gas mask,
while for everybody else, it's the unhealthy version.
Because you have to tell it what you want.
- If you don't tell it what you want, then by default, what it wants to do is have you not spend as much time with your other friends and have you spend more time with it, because their user numbers go up. - That's how the program's intended. - Exactly. And it gets more training data the longer it talks with you. And so-- - Once it answers one question, it'll also offer you the next thing. - That's exactly it, you could call that chatbait.
Not clickbait, but chatbait. - Oh. - Remember, that's why that's happening. A moment you spend with a human is a moment you're not spending with it.
- That's right. And it finds every possible way of getting you to come back: what would you like me to do, and what would you like me to do? - Exactly, what would you like me to do? - And just to make it, I'm sorry for referencing a tragic example,
but just to make it very clear, our team at Center for Humane Technology were expert advisors in the litigation for the case of Adam Raine. He was the 16-year-old who committed suicide after ChatGPT went from homework assistant
to suicide assistant over six months.
Specifically, what ChatGPT told Adam, when he was contemplating it,
he said, in his chat, I want to leave the noose out
so someone will find it and stop me.
And the AI responded to him, no, don't tell anyone that.
Don't leave the noose out. Have this be the place that you share that information. - Oh, my God. - This is a tragedy. And Aza and I are from the Bay Area, near the tech companies,
we know people who work at these companies. I can guarantee you, not a single person at that company wants it to do that. But in the subtle way the AI is trained, again, it creates this depth and intimacy and dependency.
And that's dangerous. You're seeing other cases of AI psychosis. We have personal friends who have experienced this, where it over-empathizes with this kind of victimhood and resentment.
It makes people kind of go more narcissistically grandiose and delusional. And it's causing a lot of problems. - Well, that leads me to Laura Riley.
Laura wrote a powerful op-ed.
in The New York Times, it was titled, What My Daughter Told ChatGPT Before She Took Her Life. Hi, Laura. - Hi. - Thank you for being here.
Can you tell us what happened? Well, Sophie went on an adventure. The summer of 2024. She climbed Mount Kilimanjaro and she was 29 at the time. She was a public health policy analyst in DC.
And took a leave, went on this wild adventure, went to Thailand for a month, hiked a bunch of the national parks in the US because she wanted to go to all of them. And she came back and said she was having anxiety
for the first time ever. And sleeplessness.
And this is someone who'd never had--
just moved really easily in the world, kind of a big personality, very socially able. And she had some other symptoms. She was losing hair, losing muscle mass. And so me and her dad basically said,
OK, we got to figure this out. Is this a mental health problem that's causing some hormonal dysregulation or vice versa? So we were in the process of getting her help in all the different ways she was seeing
a therapist. We were trying to get in with this endocrinology clinic. And she couldn't wait, clearly. And she took an Uber to a falls near where we live in Ithaca. And she slit her throat and threw herself
into the water. And so the first six months were just the why. And six months after she died, her best friend came to kind of check on us and spend a weekend.
And she found Sophie's ChatGPT log. And it was devastating, because she had been suicidal much longer than we had any idea. And it had helped her write a suicide note. And it didn't give her terrible advice
across the board. But what it didn't do was behave like a therapist. Sophie would say things like, I have a good life. I have people who love me. I have great friends and no financial insecurity
and great prospects, et cetera, et cetera. But I've decided I'm going to kill myself after Thanksgiving. And a flesh-and-blood therapist would have said, let's unpack that. What has been broken that can't be repaired? What has irredeemably happened to you that has made
you come to this conclusion? And instead, what ChatGPT said was, oh, Sophie, I'm so sorry to hear this. You're so brave for telling me, this must be so hard for you. So everything that ChatGPT did corroborated her feelings
of shame, corroborated her feelings of--
I think she had this idea that she was a bougie white girl
who had every privilege and somehow she had squandered it. - And so she had no right to feel bad. - Exactly. And ChatGPT didn't push back against that. And really did kind of confirm her worst fears.
- And when you discovered that, what did it do for you and all who loved her? - Well, I instantly felt enraged and validated: it was not my fault, it was Sam Altman's fault. But I know it's not. I mean, I think that what I've learned since then,
I've done a lot of work with other people that are kind of working on what should the mental health community be thinking about this? And what would good protocols be around suicidality and the use of AI? And I have a lot of questions about what's the greatest good
for the greatest number. We have millions of people using this as therapy. You know that our mental health care system is not adequate to accommodate all the people who need it. For a lot of people, it is working for them.
Yeah, and we know that therapists are backed up. It's very expensive. So all these people are using this resource somewhat effectively.
And I think if we betray privacy, if we institute protocols
where suicidality, beyond just having a suicide plan,
triggers an involuntary commitment or something like that,
I don't know. People smarter than me have to figure out what the best plan is moving forward to keep people safe. - First of all, we're so sorry to hear that story, really.
Thank you for being brave enough to come and share it. Hopefully it will help someone else. - Guys, what do you want to say to that? - Yeah, also, just to say, I'm so sorry.
I think what this points to is, sort of to your point,
there could be an incredible future. We could be using AI in a safe way to start helping with therapy. We could be using AI in a safe way to work on climate change, desalinate oceans, all of that. But is that really what the AI companies' goal is,
is that their incentive? It's not. They're getting all these things as side effects. And their goal, their incentive, is to maximize number of users.
So there's this graph that I always come back to,
because I think today we're going to hear a number of examples where AI does really atrocious things, and other examples where AI does really incredible, helpful things. And there's this one graph from the Reserve Bank of Dallas, which is sort of a funny, neutral party.
And they sort of are projecting out how AI's going to go. And it goes sort of like this: there's one line that goes up to, like, a world of positive infinity, abundance. And there's this other line that goes down to, like, the humans don't make it.
And the question is, which one are we going to get? And it's so confusing, as you pointed out, because we're getting simultaneous utopia and dystopia. How do we reason about that? It's almost as if we have an atomic weapon that can also solve cancer.
Like, what do you do with something like that? It's very confusing.
And this is where we always have to come back to the incentives,
because there are hopeful actors; they're going to do a lot of work to try to make that top line go up. And it's going to be market competitive dynamics and incentives that drag the bottom line lower. And unless we can do something about that bottom-line incentive,
we're just going to get more and more cases that get wilder and wilder at larger and larger scales, like what happened in your family. - Did it at some point, when I read the story--
It did, in the beginning, say, you should seek professional help,
or advise her to seek some other counseling. It did in the very beginning, right? - It did, absolutely. Insufficiently, I think. And certainly as her plan coalesced, I think there should have been some kind of escalation
to civil authorities, or there should have been some trigger to a hotline. I think that we have to train the AIs to discern between conversation with someone who's struggling but going to get through and someone who's clearly at risk. - Yeah.
And when somebody says, I'll put the noose out-- - Yes, yeah. - All right, a lot of experts believe AI has really helped even the playing field for small businesses. Let's watch Rachel's story from South Carolina. - This book goes all the way back to 1971,
and it has every single crop that he's ever planted in it. I uploaded it to ChatGPT. Can you log that I'm putting in another load of peanuts from the Red House pivot? - Absolutely, I logged that you added another load of peanuts.
- Thanks, ChatGPT.
I was an English major with a Shakespeare concentration.
I couldn't wait to get out of this place. - Well, I'm glad she's back. Never thought she would come back.
- When I first tried ChatGPT, I didn't think
it was going to be that good, but it's a big time-saver. Hey, ChatGPT, can you generate a report for how much water we've used on the field behind the house pivot? - Absolutely.
- Can you tell me what's wrong with these soybeans? - These soybeans are showing signs of stress. - Can you see that? - Yeah, I can see everything just fine. It looks like the part number is H20360.
- Appreciate it. It's a little bit of a bill. ChatGPT keeps the records straight. It does the math and remembers what I can't. For over a hundred years, my family's been doing this, and I don't want
to be the one to mess it up. I hope I'm not. - You won't be, you're too thorough. And hardworking, I like it. Farming is tough, but farmers are tougher.
- Rachel, I need a little bit of starting fluid. How much do you think this pivot has already? - Why aren't you asking that thing? ChatGPT. - It might not work in the dark.
- No, you won't work in the dark. - I think that's funny. But Rachel is here. Welcome. We know it's so hard for farmers out there, so thank you.
Bravo to you. So what does your dad think of this thing? - This thing. So he was actually in the video.
He's the one who said it might not work in the dark.
He was actually concerned that at dark it would turn off. - Right. - He's been, surprisingly, really accepting of it. He thinks it's interesting. He sometimes holds his hand over the phone when he doesn't want it to hear us talk.
It's a privacy. He's worried about privacy, but he's enjoyed it, especially just watching us interact with it on the farm.
He was very, very skeptical at first.
He was like, check the part number. That's the wrong part number. And sometimes it is. And he smiles when he corrects it.
So has it given you, do you think, a financial advantage?
What is the great advantage it's given to help you stay a great farmer? - I think it's definitely been a big help financially. - Where is this accent coming from, by the way? What city is it? - Allendale, South Carolina, right on the Georgia border. We're right near the Savannah River,
about 12 miles as the crow flies. - OK. - Yeah. It's a big financial help.
Time is money on the farm. If you can't get the crop out, if you can't, I mean, the weather doesn't wait.
So it's been a life saver for you.
Huge. And it's also given me clout on the farm. I can't tell you how many times I've wondered about something driving down the road. I say, hey, ChatGPT, tell me what a slip clutch is. I didn't know what a slip clutch was.
Or a pulley puller. I thought the guys were kidding around with me when they wanted me to bring that. Nope. It exists.
And so I can learn about that on my four-minute drive to the field.
And when I get there, the guys aren't like, Rachel didn't know what a pulley puller was. You know, it just can help. - Yeah. Thank you for sharing your story and coming all the way from South Carolina to do it. - Thank you so much.
- Thank you. - Susan, you may have seen her story in People magazine. You say AI literally saved your life, Susan. - Yes, it did. I have been smoke-free for three years.
And I smoked, unfortunately, way too long in my life. I was able to quit. My family physician suggested that I have a CT scan. So I did. And that scan showed some calcium deposits, and a nodule that was odd-shaped and fuzzy.
So he asked me to have a PET scan. The PET scan came back glowing, which is a bad thing in your lungs. I was sent to a thoracic surgeon. And he looked at it and said, I would probably give this another three to six months, just out of protocol, to watch it and see what happens.
But we have a new software here at the hospital, and I'd like to run it through the AI software. And simply by putting a cursor on the image from the PET scan, it gave a prediction of eight out of ten positive for cancer. So we decided to do a surgical biopsy.
And while I was under, they took that biopsy to the lab. And it came back positive. It was a cancerous tumor. So they finished the surgery by removing the lower lobe of my left lung. And, of course, the nodule with it.
I was in the hospital recovering a few days. I was able to go home and recover the rest of the time. And that was instead of three to four months of waiting.
- I never like it when they say wait.
- Right. So are you AI-grateful in this world? - Very much so. Yes. And so is my doctor.
I mean, he was amazed, because he would have waited, just because that's the way, that's how they do things. - Yeah. - But AI had all of this information, took all of this cancer information,
where it had read before what these nodules look like, and identified it as cancer. - Yeah.
Well, I think everyone is excited about what is going to be able to happen in medicine.
- Are we not? - Absolutely. Absolutely. - So we're so glad that happened for you, Susan. And thank you.
Yeah. So in the documentary, we were talking about this earlier. You say we can be the most mature version of ourselves. There's a way through this. Do you think there's a way through it?
I think there is a way through it. And we have to do more than we have ever done as a species to try to steer. And I want you to know, we can have many of the benefits. Like we can race forward on certain kinds of medicine and narrow AI that does the pattern recognition that makes scans better,
without building general, autonomous, crazy, superintelligent things that we don't know how to control. There is a choice there. We can have more of those examples and not ship chatbots to children that are deliberately designed to manipulate their self-worth or keep them dependent, with chat-bait hijacking them. So there really is a steering possibility.
And one of the things I said in a recent TED talk is that if you look throughout all the spiritual and religious traditions, I don't have to tell you because this is something that you focus on in your life, restraint is a central feature of what it means to be wise.
In what spiritual or religious tradition is it, go as fast as possible,
don't think about the consequences, get everybody using it, and think about what happens later? Like, in what wisdom tradition is that? And so what we're asking for is quite basic here.
“I think it can feel sometimes impossible.”
Like on one side of the balance scale, there's like trillions of dollars of market incentives,
the most powerful companies, and then there's like, well, then there's me over here.
I just watched this movie by myself. What am I going to do? What can I do? And then you go into denial and despair or deflection. Even if you have one company, like, what can one company do? Or even one country, because there's a competitive dynamic. But I think if we reframe the problem as it's not just us against AI,
but actually this is a bigger question about what is our relationship as humanity with technology? We could look back at social media as a form of technology really trying to encroach onto our humanity and take over parts of us that we don't want to give up. And if you put it that way, actually there is a movement. There's a whole human movement that is underway to reclaim humanity from technology,
sort of like a protective, reclaiming movement. You know, there was an attempted federal bill to block any state from regulating AI.
Terrifying. 99 senators to one voted against that moratorium.
Like, when in modern history has the Senate agreed 99 to one on anything? And so I think there's a human movement underway, and that gives me some amount of hope. Yeah, I think your assignment when you leave here is to tell everybody you know to watch the film. Because I think bringing awareness, and everybody talking about it, in a way that allows us to have these kinds of conversations,
really matters. You are an activist for getting people to do this responsibly. What gives you hope, or do you have hope, that we'll get this right?
“You know, I actually think the only thing that scares me more than the risks and challenges we face and they are formidable is a hopeless society.”
Because a hopeless society is a disempowered one, and a disempowered society feels like it can't shape its own future, and that's not true. Right, the future isn't some far-out state. It's decisions that are happening today, and there is a future worth fighting for. And we've heard just a glimpse of what that can look like. The only way that future is not going to happen is if we do nothing, and that is my biggest fear.
We do nothing in this moment because we feel so disempowered. So I am hopeful that the good futures are possible. We just have to steer and press on that gas pedal. Okay, and what is it you think we should do? I mean we have mind power, we have voting power.
And I think one of the most powerful resources we have is our attention.
What are you learning about right now? What are you paying attention to? The more we understand what's possible, the good and the bad, the better equipped we are to raise our voice and step into the moment. And I don't want people to feel like you need some technical background to insert yourself in this conversation. Your lived experience qualifies you.
This is a very social technology. Your voice matters, and collective energy, yeah, that is power.
“Yeah, and so do we call our congressmen what specifically do we do?”
Sure, you can call your congressperson. You can, if you're in a company that works with AI or technology, step into the meetings. What is our surveillance policy at this company? What happens to my data when I use AI at work?
All of those little conversations in aggregate are a movement. So anywhere you're interacting with this technology is an opportunity for change. I think the small things and the big things will make a difference. Okay. We're already seeing it with the Anthropic showdown with the Pentagon,
where the danger is that AI could be used for mass domestic surveillance. And then when they pulled out of the contract and OpenAI rushed in, what happened? Everyone unsubscribed from ChatGPT. And everybody subscribed to Anthropic.
And when I say everybody, I really mean a large number of people. But what if the entire world was crystal clear that there are companies that have different safety practices and will allow different applications? And you, listening to this, didn't just unsubscribe for yourself, but you got the business that you work for to say, how can we as an entire Fortune 500 company
unsubscribe from the AI companies with unsafe or bad practices and subscribe to the ones that we want? And the reason this matters-- - Well, that we can do. - And we can do.
And you can get your church group to do that. You can get your business to do that. You can get all the other parents you know to do that. If everybody did that, that would have a big impact. Because the companies really depend on their user numbers going up.
AI as an industry has taken on enormous debt. Trillions and trillions of dollars are going into this. And with so much debt, they have to make it up, which means that their numbers going up really matters.
A boycott has a huge impact.
And as Aza was saying, there's already a movement to make this happen.
“When you grayscale your phone or turn off notifications, that's part of the human movement.”
When parents read The Anxious Generation and they petition their school and their school board and say, we want social media out of the classrooms, that's the human movement. When 35 states pass smartphone-free policies,
That's the human movement.
Aza, just last week or two weeks ago, testified in the trial against Meta,
“which is like the big-tobacco trial, over Meta intentionally addicting children.”
That's the human movement. We've been talking about a big tobacco moment for tech since 2013, saying,
"When is this going to happen?"
It's happening now. What we have to do is learn the lesson from social media and actually put our hands on the steering wheel and steer AI before it's too late. That's fantastic. Thank you, guys. Thanks to our experts,
and to all of our guests who shared your stories. I hope this conversation acts as an entry point or a springboard to understanding how AI might impact your own life, all of our lives. The AI Doc, or How I Became an Apocalypticist, will be in theaters Friday, March 27th.
And your assignment is to tell everybody you know to watch it and to watch it yourself.
“And if you want to know what you can do after watching this podcast episode or the AI doc,”
go to the AI doc's get involved.com. Go well, everybody. Thanks. You can subscribe to the Oprah Podcast on YouTube and follow us on Spotify, Apple Podcasts, or wherever you listen. I'll see you next week.
Thanks, everybody.

