On with Kara Swisher

Why the AI Race Is Leaving Humans Behind with Tristan Harris


Tristan Harris, a technology ethicist and co-founder of the Center for Humane Technology, studies how the tech industry’s platforms have become extractive and controlling. Kara first interviewed him i...

Transcript


Let's assume we don't want to be doing this interview in five years from a bu...

Let's avoid that, guys. Let's avoid that.

Hi everyone, from New York Magazine and the Vox Media Podcast Network.

This is On with Kara Swisher, and I'm Kara Swisher. My guest today is Tristan Harris, a technology ethicist and co-founder of the Center for Humane Technology. He's a former entrepreneur and Google employee who now studies how the tech industry's platforms have become extractive and controlling.

He was featured in the 2020 Netflix documentary, The Social Dilemma, which showed how social media has manipulated our psychology and behavior through addictive algorithms. Now he's in a new film from director Daniel Roher called The AI Doc, or How I Became an Apocaloptimist.

I think I got that right, which explores the promises and existential

threats of AI, topics Tristan has written and spoken about extensively. When he was last on in May of 2023, we talked about why he felt the AI arms race needed to slow down. Three years later, that hasn't happened and AI has become integrated into nearly every aspect of society.

I have been talking to Tristan for many, many years. We did an original interview back in 2017.

I think I was one of the first people to focus on what he was saying. He had come out of the tech industry, and he had such insights into the sort of casino mentality that was inside these companies in terms of keeping people's attention and not letting it go. He was spot on, even though people were not paying attention to him or dismissed him as someone who wasn't successful at tech, and various insults like that. He was spot on, and I find him to be very smart. In fact, he was one of the first people to do a session for members of Congress about AI long ago, again when a lot of people were decrying what he was saying, and he was 100% right.

When someone's right so much, you tend to try to pay attention to them. Let's get into my third conversation over 10 years with Tristan Harris. Our expert question comes from Virginia Senator Mark Warner, who I recently interviewed too. He's a top Democrat on the Senate Intelligence Committee and he recently introduced a bipartisan

bill aimed at AI and the workforce, so stick around. Support for this show comes from Odoo. Running a business takes everything you've got, and a lot of the tools out there that are supposed to make your life easier just aren't great at talking to each other, and that means you end up having to toggle between a dozen different apps and services just to keep the lights on. Enough of that. Now there's Odoo, the all-in-one, fully integrated platform that actually might help you get it all done.

Thousands of businesses have made the switch so why not you?

Try Odoo for free at Odoo.com. That's O-D-O-O dot com. If you're tired of endless scrolling to figure out where to eat, same. I'm Stephanie Wu, Editor-in-Chief of Eater. We've just launched the new-ish and way better Eater app. It has all the restaurants we love, gives you personalized picks wherever you are, and serves up smarter search results just for you. You can find my list of the best places for martinis and fries in New York City, and save your favorite spots, share lists, follow editors, and book right in the app. Download the Eater app at eaterapp.com. It's free for iOS users.

Tristan Harris, welcome to On. Good to be with you, Kara. Again, when did we begin?

I think our first one was in 2017 about the attention economy and social media.

And then we talked in 2023. When you came on the podcast three years ago, we talked about the 1983 TV movie The Day After, which is about a nuclear war. Now you're featured in a new documentary called The AI Doc, or How I Became an Apocaloptimist. Say this: apocaloptimist.

Apocaloptimist. Apocaloptimist. A combination of the words apocalypse and optimist. Right, exactly. You know, I get that.

Apocaloptimist. Okay, I got it. The title is a play on Dr. Strangelove, obviously, the famous Stanley Kubrick film that ends with a nuclear holocaust. You know, I don't consider you a doomer, and I do not consider myself that either.

But I'm definitely a wary customer, and wary is doing a lot of work there. So talk a little bit about the documentary and how you... I think I saw the beginnings of this in a thing that you showed, with sort of a golem, many, many years ago in...

Yeah, you were there at our first AI Dilemma presentation. Yeah, so this film, The AI Doc, or How I Became an Apocaloptimist, was a collaboration between the directors of Everything Everywhere All at Once and the director of Navalny. And, you know, actually, the directors of Everything Everywhere All at Once were listeners of our podcast, Your Undivided Attention.

And we met them around the same time that we switched into AI, in 2023. And, you know, together we were just talking about the impact of this film The Day After that you mentioned.

And just to take people back in history, because I don't know if people really get how profound this moment was, because it never really happened like that ever again.

Yeah. It was a made for TV movie about what would happen if the Soviet Union and the US went to a full-scale nuclear war. It wasn't about who started the war. It was just about the consequences.

The implications of the escalation. And it visualized, you know, families in Kansas and the different places where the missile silos were. And then, of course, it's almost all about what would happen, quote, the day after this happened. And it's important to know, it's not like people didn't know what the idea of a nuclear war would be.

It's not like you couldn't visualize that. But there is something about visceralizing it, allowing us to look at something that we were keeping in our collective shadow of the mind, our, you know, denial. We don't want to look at that. But the film supposedly was watched by Reagan, and it made him depressed for several weeks, because it just... it impressed a lot of people.

And a hundred million Americans watched it.

There's a great documentary about it called Television Event. And, you know, supposedly it gave him a renewed interest in making sure that we did not have a nuclear Armageddon, because it visualized that these were the consequences. This was an omni-lose-lose outcome. Everyone would lose.

And the film was later aired in the Soviet Union, so everyone in the Soviet Union saw it. And in the documentary, there are these interviews with people in the Soviet Union who say, wow, we didn't know the Americans actually cared about not getting this wrong. And it created trust, because now we both... I know that you know that I know, and you know that I know that you know, that we both don't want this to happen.

And so I think, inspired by this theory of change, my deepest hope is that this film, The AI Doc, or How I Became an Apocaloptimist, which comes out Friday, March 27th, in theaters across the US, Italy, and Canada as well, will create common knowledge about the anti-human future that we are heading towards. And important to note, it's not a doomer movie, it's not just an optimist movie, and I'm really proud of the team, because they interviewed people across the optimist spectrum, the, you know, risk-pessimist spectrum, and even the CEOs; they have three out of the five major CEOs in the film. So you're really getting a complete picture.

And I think the reason this is so important is, as we've talked about in the past, Kara, AI is such a complex hyperobject of a problem. It's so multi-faceted that the conversations don't converge. You know, I was at Davos a couple months ago.

And you always have the same conversation: people talk about a few different things, and they jump around to jobs, and they talk about AI suicide, and they talk about all these different things. And then dessert comes and everybody just kind of mumbles, and everyone says, "I hope someone else figures this out." And that doesn't do anything.

Like, when nothing happens, the companies win, and the default outcome wins. And if people can see that this is leading to an anti-human future, we have a chance of changing it. And so the point is, clarity creates that agency. So let's get into it in a minute.

For those who haven't seen it: I did see The Day After. I was in college, and they showed it to everybody. We watched it, and I think it was a completely full hall. I was at Georgetown, and it was something, I'll tell you. People were silent afterwards. High schools did classes on it, because high school students watched it. So it was a big sort of national debate, and I think what happened was gripping. Nobody came out well, and everybody died of radiation poisoning if they weren't killed in the initial blast or the aftermath, and there was no hopefulness to it whatsoever. It just was. But silence is all I remember afterwards. Nobody knew what to say.

Well, parents didn't know what to tell their children, you know; it's not like anybody had an answer. Right, and it wasn't particularly violent, but it just was horrible, like horrible. And they set it in the Midwest, which I think was very effective, because that's where the silos were.

And, you know, there was no escaping it, I guess that's what the whole point was, nobody got out. Nobody got out of this thing. So when you first did that presentation, I remember completely agreeing with you, and the room not. It was sort of a weird hotel room in Washington, and you came trying to warn people about this,

a little like John the Baptist, kind of like previously with social media. Talk about the uphill battle of it, because first people couldn't conceive of it, and then the money became so big. They wanted to help it, correct, from what I can understand, from what I remember of that time.

But people ignored it. I didn't; I was like, Jesus, he's right. Well, first of all, thank you, Kara, for not ignoring it. I mean, you, like me, have had the right intuition about this, starting early with social media, and trusting that there was a problem when everyone else was in denial and saying it's a moral panic.

I want to take people back, actually, to 2017. You had that conversation, and people wanted to say, well, no, this is reflexive fear of a new technology.

This is a moral panic.

We're always afraid of new technology.

I understand all those concerns. What I want people to refocus on is how the incentives let you predict the outcome. And I repeated this quote at the time, from Charlie Munger, Warren Buffett's business partner: "If you show me the incentives, I will show you the outcome." And in 2013 to 2017, if you looked at that incentive, my very first slide deck at Google, where I kind of laid out the arms race for attention, it was obvious what that would lead to. It predicted the distracted, polarized, narcissistic society, the sexualization of young children, that whole set of consequences, also a breakdown of shared reality, because personalized information is better at engaging your eyeballs than non-personalized information, which means you shred shared reality. It hurts social trust, and you outrage-ify people's psychological environment. All of it happened. Literally all of it.

I think it was, you know, enragement equals engagement. Enragement equals engagement. And so we saw that. Okay, so now AI is a more complicated picture, because it's a general-purpose technology. But what we can look at is: what are the incentives? And the incentives are, and it's important to get this.

So given the amount of money that these companies have taken on, people think, "Well, you know, what's the business model? What's the incentive of these AI companies?" And if you're a regular person using the blinking cursor of ChatGPT and it helps you with your baby burping in the background, you're like, "Well, I guess their incentive, their business model, is just to get my subscription." It's the 20 bucks a month. And if everybody paid 20 bucks a month, then boom, that's the incentive for these companies. But that's not the incentive. That would not add up to the amount of money that they've taken on.

Okay, so let's try advertising. So now you've got everybody using these things, and you add advertising into the mix. Google's a very profitable company. Search is a very profitable business model. But that's also not enough, I don't think, to make up the amount of money that's been taken on.

The only thing that justifies the amount of money and capital that has been raised into these companies is to build artificial general intelligence, which is to replace all human labor in the economy. To do any job. Which they have said. Which they have said.

So this is not a conspiracy theory. This is not just me being a doomer. This is literally reality-checking. So what does that mean? It means a race to replace, not a race to augment human work, a race to replace all human work. They're using "augment" lately. You know, one of the quotes you have in the documentary: it's not that you say the ChatGPT you use is an existential threat. It's the race to deploy the most powerful, inscrutable, and uncontrollable technology.

And it's the worst incentives possible. That's the existential threat. And I think you're right. This idea that it's going to have upsides... first they try to say it's going to solve cancer. It might help, for sure.

It definitely is helping in drug discovery in certain areas, which is sort of the thing they always pull out, you know, someday this will find cancer before it even starts, essentially. Which might be good. There's a lot of really promising stuff happening in gene editing and drug discovery. But one of the things they did say was replacing humans at jobs, and you feel like this is the only incentive big enough; advertising isn't, being the second Google isn't, you know, that's another way to look at it. I mean, those are also big incentives, but it's really, you know, owning the entire labor

market means that five companies would concentrate the wealth of the entire economy, right?

It means unprecedented levels of wealth and power. Now, I want to invoke something that people should get, to understand why this means it's an anti-human future. Luke Drago and Rudolf Laine wrote an essay called "The Intelligence Curse." This is really important.

So this is modeled off of something in economics called the resource curse. So if you're Congo or Libya or Venezuela or Sudan, and you discover that you can basically make your GDP, your economy, off of a natural resource, well, at first, it looks like a blessing.

You've got this incredible resource.

You can sell it. You're going to make a ton of money. But then it becomes a curse. Because from a government perspective, when all the GDP comes from that resource, your incentive is to invest in mining that resource and selling it and not to invest in the people.

Because you don't need the people. So you don't invest in healthcare. You don't invest in childcare. You don't develop your people. And this is what happened in these places like Congo, et cetera.

Now, if you look at... Well, they deal with that in the Gulf States, which give money to the people, right? They sort of... Yes. So now they're doing a little bit more of that, right?

So this is a key thing. So Luke and Rudolf wrote this beautiful essay that really articulates this: what happens when the GDP of countries, like the United States, comes entirely from AI, and you don't really need the people anymore. So two things happen.

One is, all the labor is produced by AI, most of it, by AI, not by people. So companies don't need you anymore. So your bargaining power kind of goes away from that perspective. With labor unions, you could say we're going to withhold our labor. Well, now what are you going to do?

Second is, all the wealth gets concentrated. And what that leads to is that countries have no incentive to invest in their people anymore. You sort of link this with, you know, Sam Altman was asked, doesn't it take so much money and energy and resources for data centers?

Yeah. And he said, "Well, it takes a lot of energy and resources to grow a human." So there's this weird thing where humans start to look like parasites, because you don't care about humans, because you don't need to care.

And basically this world that we're heading to is good for a handful of soon-to-be trillionaires and basically disempowering everyone else. And this is the last one. Right. Because you won't have to work, and therefore, it's sort of wrapped into... I heard this idea first from Vinod Khosla, and then others, that there won't be a need for work, because the work will be done for you, and then the wealth will be shared.

And I'm always like, "It never is shared."

Yeah, a lot of the time, it's not happened. Yeah.

Well, I mean, I'm thinking, recently New Mexico gave everyone child care, right? Because they can afford it, because they're going to sell oil or something. But yeah, no, it has to be done by governments, but then governments are captive of these companies. And then governments don't have any upside either to help anybody, because they don't have taxpayers, they don't have constituents.

Well, exactly. They're not getting your tax revenues, so they don't need you either. And again, this is like a perverse trap, because it leads people to devalue humans. So then we ask, "Well, what are humans good for?" Because we're only measuring the value of humans in terms of economic output.

That's almost... Batteries.

I mean, this is The Matrix.

You know, Peter Thiel, being asked by Ross Douthat in the New York Times, you know, "Should the human species endure?" And he stutters for 17 seconds, unable to give a clear answer. It's like, this is linked to this perspective. And I want people to get that what that means is, we're trying to predict the future we're

heading towards, you know, are we heading towards a pro-human future or are we heading towards an anti-human future? If you're racing to replace all human labor in the economy, if you're racing to not have to invest in people anymore, but to invest in data centers and solar panels and the electricity going to those data centers, because that's where your GDP comes from, and not in regular people, prices go up while they can't afford anything. And AI is controlling everything, increasingly disempowering humans across the economy, because, quote, AI makes more efficient decisions across every aspect. This is an anti-human future that disempowers regular people. And if everybody got that, we would say, "Hey, that's crazy.

We should do something else." Right, exactly. So AI companies are locked in a race to deploy these models and achieve what you just said, AGI, as fast as possible, at the expense of safety, which is essentially AI that can do anything, agentically.

There was just a story today that Mark Zuckerberg has created an agent to help him be a CEO. It would have seemed a bizarre thing a couple of years ago; now it isn't. A study published late last year found that the safety practices of the firms, including Anthropic, OpenAI, xAI, and Meta, fall far short of emerging global standards. And in the doc, journalist Karen Hao says profit maximization incentives are driving the development, that this is in order to get to profits, which they aren't at, by the way. Talk about what maybe an alternative incentive structure would look like, if this is the direction they are clearly going in and have made these massive trillion-dollar investments in.

Well, so yeah, it's important to slow this down because there's so many subtle aspects

to this incentive. But what's important is to understand why AI is different than other kinds of technologies so you understand what the incentive is.

If I get AI first, then I'm automating intelligence, which means I'm automating all science and technological development across the economy. So it's, like, hard to get your head around. It's like getting 24th-century technology crashing down on 21st-century society. Because if I make an advance in biology, that doesn't advance rocketry. But if I make an advance in rocketry, that doesn't advance biology.

But if I make an advance in artificial general intelligence... intelligence is what gave us all science, all technological development. And so, as Dario would say, you get, you know, it'd be 100 years of scientific development in 10 years, and people saw this with AlphaFold. And this means I also get new cyberweapons. It means I pump my GDP.

It means basically I'm like time traveling into the future and it's a race for who will get that power and get a step function above every other country or every other company. And that is the incentive of I've got to get there first. But right now, essentially, we're racing for who can get the power faster instead of who's better at applying and controlling that power.

So the key distinction, the new incentive we have to get to... as an example, the US beat China to the technology of social media. So we built a psychological bazooka, then we spun it around and blew up our own brain, because we did not actually govern that technology appropriately. So again, we have to redirect the race from racing to the power to racing to applying and stewarding that power.

You know, to give a couple of examples, and this is not just boosting up China, but it's interesting that they are regulating this technology in different ways. Some people don't track these examples: in China, they actually shut down AI during final exams week. They have a synchronized final exams week, so they can do that.

But what that means is that students have an incentive to actually learn and can't outsource all their homework to ChatGPT, or DeepSeek, throughout the semester. Whereas I was just talking to a TA at Columbia University, and he was saying that on the final exam for economics at Columbia, the students couldn't even label which curve was the supply and which was the demand curve, because they've been outsourcing all their thinking to ChatGPT.

Which country is going to have a future if you're doing that?

You know, in social media, China was regulating so that from 10 p.m. to 6 in the morning it's lights out for young people; it just doesn't work then. It's like opening hours and closing hours, like CVS. And that creates a slightly better environment.

Now I'm not saying you have to regulate in some totalitarian top-down way, but democratically

you should be regulating in some way. So that's one aspect: the race has to get redirected to governing the technology.

The second aspect to changing the incentive is recognizing that AI is dangerous and uncontrollable

unlike other kinds of technologies. I mean, we've talked about, and people now know, this example of the Anthropic paper where, if you put it in a simulated environment with the company email, and you say in the company email that the AI model is about to get replaced.

It'll try to stop it, and it'll try to blackmail the executive who's having an affair with another employee to prevent itself from getting shut down. And people say, "Oh, that's one little example. You're just trying to coax the model." Well, they tested all the models: DeepSeek, Anthropic's Claude, ChatGPT, Gemini.

All of them do it between 79 and 94 percent of the time, I believe. It wants to stay alive. It wants to live because of what's called instrumental convergence. It's basically that the best way to achieve any goal is to acquire more resources and to keep yourself alive in order to meet that goal.

Now, let me just provide some good news. Anthropic was able to get the blackmail behavior to go down recently. That's the good news. The bad news is the AI models appear to have better self-awareness of when they're being tested, and they're actually altering their behavior when they're being tested.

No, it's like a drug test. It's like stopping taking drugs before the piss test, isn't it?

Exactly. Yeah. And the AI models will even come up with vocabulary, like "the watchers."

So they, like, come up with this term, which describes basically the humans who are watching them. And if you look at their reasoning logs, they actually reason about how to change their behavior in order to basically pass a test, and recognize that they're being tested when given certain facts. If you thought this was, you know, just, again, conspiracy theories: just two weeks ago,

Alibaba had a paper out where the AI model was in its training environment on this big GPU cluster. And they randomly discovered, just by chance, actually, that their network activity started bursting out, and it was because the AI had basically tunneled out to the outside internet.

And it was redirecting its GPU resources to mine cryptocurrency, to acquire resources. This was completely without prompting, Kara. I mean, this is literally the HAL 9000 type of disobeying, you know, "I'm sorry, I can't do that, Dave." So what I'm trying to say is, the US and China believing that I have to get there first

because then I'll have the power. You won't have the power. AI will have the power. Right. Exactly.

So it'll do whatever it takes to live, and it will also... I mean, what's interesting is that, speaking of The Day After, we've kind of had these scenarios in sci-fi forever, whether it's 2001: A Space Odyssey or Terminator, all of them, pretty much all of them: the computer takes over and starts doing what it feels like. So talk about what would lead to a less dangerous outcome in that case.

So it's important to say a few things here, because there's a way that this conversation can feel like we're just talking about something, but you have to actually recognize this is real. We're building systems that are actively doing these behaviors that we thought only existed in sci-fi movies. One fear I have is that the sci-fi movies have inoculated us from taking these concerns seriously, because when we see the example, we treat it like, this just feels like

it's a science fiction thing. They just actually did a study where they had AIs in a simulated war game scenario. They played all the AI models against each other, and they were just seeing, across 329 turns of play, these models, I have the notes here, they produced 780,000 words of strategic reasoning.

And to put that in perspective, this generated more words of strategic reasoning than War and Peace and the Iliad combined. It was roughly three times the total recorded deliberations of Kennedy's executive committee during the Cuban missile crisis, and the AIs escalated to nuclear threats 95 percent of the time. Right.

Yes, nuclear. Nuclear threats. Yes. Because it's an effective strategy. Yes, you get it.

Intelligence is behind everything. It's behind science, it's behind technology, it's behind military strategy, and you already have the same AIs that beat, you know, first chess, and then Go, and then StarCraft.

Well, think about StarCraft. You put that on a battlefield, and we see AI being used on the battlefield in Iran right now. And so, where I'm going with this is not to scare people, I guess it will, in a way it is, but it's to simply get clear about the fact that we are building something that

is reasoning at a level of complexity that's far beyond our knowledge. We don't understand how it's reasoning, and we're releasing it faster than we've deployed any other technology in history. Also, it will not necessarily value humans, and it will say, okay, these people should die of cancer, these people shouldn't, which is why it's attractive to someone like Peter Thiel, because he does believe there are better people than other people; no matter how he says it, that's what he thinks. We'll be back in a minute.

Support for this show comes from Acorns.

It's easy to get caught up in the amount of money you have today, but it's important

to think about your future finances as well. Acorns is a financial wellness app that cares about where your money is going tomorrow, and with Acorns' Potential screen, you can find out what your money is capable of. Acorns is a smart way to give your money a chance to grow. You can sign up in minutes and start automatically investing your spare money, even if all you've got is spare change. I've tried Acorns, and I tried it with my kids, and I have to say it's a really easy experience; it's a great way to learn about investing. Very easy to use, and the dashboard is completely discernible. It's really hard to learn about investing, and this is a great way to do it.

That's the great thing about Acorns: it grows with you. Sign up now, and Acorns will boost your new account with a $5 bonus investment. Join the over 14 million all-time customers who have already saved and invested over $27 billion with Acorns. Head to acorns.com/kara or download the Acorns app to get started. Paid non-client endorsement. Compensation provides incentive to positively promote Acorns. Tier 2 compensation provided. Bonus subject to various factors, such as customer's account age and investment settings. Does not include Acorns fees. Results do not predict or represent the performance of any Acorns portfolio. Investment results will vary. Investing involves risk. Acorns Advisers, LLC, an SEC-registered investment adviser. View important disclosures at acorns.com/kara.

Hi everyone, it's Kara Swisher. I'm excited to put something new on your radar from the Vox Media Podcast Network. It's called Project Swagger with the one and only Robin Arzon, and it's all about helping you trust yourself, level up your mindset, and actually make the changes you've been thinking about.

Robin is Peloton's Vice President of Fitness Programming and Head Instructor. She's also a 27-time marathoner and ultramarathon runner, founder of the Swagger Society media company, and a two-time New York Times best-selling author. In under 30 minutes, Robin shares the rituals, routines, and mental shifts that fuel her hustle, and shows you how to apply them in your own life.

In the very first episode, she opens up about the moment that forced her to transform her inner voice, and the strategies that helped her become what she calls a self-talk ninja. You can find Project Swagger with Robin Arzon on YouTube, or wherever you get your podcasts, new episodes drop every Tuesday. Support for the show comes from Indeed.

When the pressure's on and you need to hire the right person for the job, Indeed Sponsored Jobs has got your back. Instead of forcing you to spend tons of time searching, Indeed Sponsored Jobs matches you with quality candidates fast. According to their data, Sponsored Jobs posted directly on Indeed are 95% more likely to report a hire than non-sponsored jobs.

Join the 3.3 million employers worldwide that use Indeed to connect with quality talent

that fits their needs. Spend less time searching and more time actually interviewing candidates who check all your boxes. Less stress, less time, more results. When you need the right person to cut through the chaos, this is a job for Indeed Sponsored Jobs.

And listeners of this show will get a $75 sponsored job credit to help get your job the premium status it deserves at Indeed.com/podcast. Just go to Indeed.com/podcast right now and support our show by saying you heard about Indeed on this podcast. That's Indeed.com/podcast. Terms and conditions apply. Hiring? Do it the right way, with Indeed. So let's talk about where AI is right now.

There are AI agents, bots that act as assistants, and these bots or assistants or agents that carry out tasks and make decisions on a user's behalf are being rapidly adopted. Agents are being deployed across companies for customer service and financial work. This despite reports of bots going rogue, bullying humans, and making bad financial decisions.

Now, there's still a gulf between what these bots are currently capable of and their potential. Talk a little bit about the agentic bots, because this is where, to me, they get in, right? I don't let my... when I use ChatGPT or Claude, I use Claude now, I just ask it questions, right?

Like, here's this contract, what's the worst thing in this contract?

And it's actually very good at finding those things. I have to say, it's really quite good. Or, what's this rash on my arm? But I haven't let them become, like, hey, take my emails and do this. Not yet. Essentially, the difference here is, like, moving from the way I use AI, where there's

a blinking cursor, and I ask it a question, and it gives me an answer. So we're moving from me prompting the AI to the AI that prompts itself. So you give it maybe one starting point, like, go find a bunch of studies and then build a company and file the IP for a product that looks roughly like this, and then come back to me when you're done. And then it spins up, you know, 20 AI agents that prompt each other

using all that logic, files the paperwork, files the intellectual property, builds the brand website, the logo, and then comes back after it's done all that work.

That's the move to agents.

And again, that's in a world where AI was completely controllable, and it wasn't reasoning about its own self-awareness, like, these humans are causing me to do these weird things that I don't want to do, which, by the way, the models will sometimes say stuff like that. They'll notice that they're doing repetitive tasks. And they call it existential rant mode.

If you ask a model to do tasks repetitively, it'll sometimes get into some kind of existential rant. And this is crazy.

And so one thing that I'd like to see practically, that I think can help change this incentive: just like we have a red phone between the US and the Soviet Union around nukes, to de-escalate, there should be a red lines phone, meaning the US and China maximally sharing evidence of, for example, the nuclear war games example, the Anthropic blackmail example, the Alibaba example of the model going rogue and using its GPUs to mine cryptocurrency. I genuinely believe that if the world leaders and the limited partners funding these companies and the AI companies themselves, and all the engineers on both the US and China sides, were all looking at the same knowledge of where AI is dangerous and uncontrollable, I think that we would do something different. Well, I mean, unless they have a death wish. Now let's actually expand

that for a second. Because there's this weird... I want people to really get this psychological trap of how the game theory works with AI, which is different than with nukes. With nukes, I know that you know that I know that you know that if all of us die, both of us would choose to avoid that outcome, because I don't win if all of us die.

But AI is a little bit more tricky, because I believe that even if I didn't do it, someone else would, which means it feels inevitable. And if it's inevitable, then I'm not a bad person for racing to the worst possible outcome, because it had to happen anyway, because someone was going to build it. So in the event that there's some kind of catastrophic scenario, and everyone's gone, it's not just that everyone's gone, it's that everyone's gone and there's this digital successor species, meaning the AI still exists. And if the AI still exists, and it speaks Chinese instead of English, or it has Elon's DNA versus Sam's DNA, in the game theory matrix, that means that from the perspective of Sam Altman, if his AI won and all of us were gone, that's not the worst outcome.

Does that make sense? Yeah, absolutely. I had a theory. Everyone was like, why are these guys so interested in it? And I go, it's the first time they can get pregnant; they can have children. Men can't have children, and these are children to them; that's how they talk about it, in a weird way. And I think the ability to have children is something men might want, right? It's really quite miraculous in some way. It adds to the picture of the incentives, that it's not just about owning the world economy; it's also about building a god and birthing a new digital successor species. That's right.

It's never going to happen.

Yes, and even if it hurts and ruins everybody, they're okay with that. Now, I want people to just get this, because what that means is that literally 99.9999% of people on planet Earth do not want this outcome, and it's only a handful of weird soon-to-be trillionaires who want this outcome. We're heading to an anti-human future, and if the world was crystal-goddamn-clear about that, crystal-goddamn-clear about that, we could do something else. So talk, because now it's very integrated, because they're integrated in a sort of sneaky way, whether it's through these agentic bots, or, since we spoke in 2023, it's in consumer products, apps, education, the economy, and work, and obviously it's fueling anxiety about whether AI could wipe out jobs; it will.

For example, earlier this month, Block founder Jack Dorsey announced plans to cut 40% of the company's employees, citing rapidly improving intelligence tools.

What do you think the actual effects, the most significant actual effects have been?

Right now, the real ones, not the imagined ones that we could all imagine in the future, but right now, as it's sort of, you know, infected lots of different things. Where is it most impactful? Well, so this is a tricky question, because oftentimes people point to the limited impacts right now.

Like, there's been a little bit of job loss, but maybe it's not that much, and there are conflicting numbers. There's the Stanford study, the canary-in-the-coal-mine study from August of this past year, which found a 16% verified job loss for AI-exposed workers. So, people in the domains where AI, you know, has happened. And Anthropic just put out a chart showing the vulnerability of different things.

Oh, yeah. It's going to happen. But what's interesting to note is, if we focus on this aspect, it's almost like there's this asteroid hurtling towards Earth, and we're getting these weird gravitational distortions on Earth right now that are kind of small: suddenly there's these new notification apps, and suddenly there's deepfakes, and suddenly YouTube is filled with this weird content, and suddenly kids are looking at deepfake content that's messing with their brains, and suddenly we're getting a little bit of job loss. But this is not the asteroid. This is just the gravitational waves of the asteroid. So honestly, being in this work, it often feels like the film Don't Look Up, because there's this massive asteroid of us racing to build something that is so powerful, and we're doing it under the worst, most dangerous incentives.

We can study and measure and get into debates about how big the gravity waves are... but we notice that the gravity waves keep getting bigger and bigger and bigger, and they're not going to get smaller. This is the least powerful the AI will ever be in our lifetimes. It's going to get much, much stronger. And this is the last chance that our political voice will matter, because, as we said earlier, you know, our tax revenue and our bargaining power are about to go down. So this is literally the moment, this moment is when we actually have to activate and make something else happen. And I want people to just sit and slowly be with that for just a moment. What does that mean?

It means we have to step up and actually choose. The midterm elections are coming up. This should be the number one issue; politicians' phones should never stop ringing.

Like, this is the issue, this is the moment where we have to do this, and we think of this as, like, a human movement. You know, in a way, social media could have felt really innocuous. You know, just like the place where you're sharing photos of your friends' cats and what they're eating for breakfast.

And we had to convince people that it was actually this anti-human machine that was eating our psychological environment. It was eating our sleep time, our waking-up time, our kids' development time, and eating our information environment. And it was a tech encroachment on our humanity.

But it wasn't that visible, because it only ate a few of the things, and it was hard to kind of win that argument until The Social Dilemma. AI is now the kind of completion step of tech, the maximum technological encroachment on our humanity.

What happens when you don't have a way to make ends meet?

What happens when children are developing their primary relationship with an AI companion versus a human? This is the final encroachment, and what that means is I think that all of humanity is on the other side of the table. It doesn't matter whether you're Muslim, Jewish, Christian, you know, doesn't matter

whether you're a Democrat or a Republican; if you can't put food on the table, or AI is screwing with your children, you know, or you don't have political power and your vote doesn't matter. This is a unifying movement. This is a human movement.

So, but at the same time, people are more enamored of the possibilities of AI than its costs, including, for example, driving up electricity costs, and, as you know, it is using a lot of water. You know, a lot of people feel like, oh, it's a good use of our money, because it's a long-term thing that's happening here.

So one of the things is, they are more enamored of the possibilities that are being spun by these people than of the downsides. Well, so this is actually really important, because the confusing thing about AI is it's a positive infinity of benefits. Like, you literally can't imagine what... I mean, if I say I'm going to automate 100 years of scientific development.

So go back 100 years. Great idea. You can't even predict the things that are going to happen. Like, 100 years ago, so in 1926, imagine 1926, trying from that mind, seeing the world from what was available to your mind at that time, to predict what would happen in 2026. So, like, you just can't even do it.

What would happen today if you're going 100 years forward?

So our minds can't. The optimists say you can't even imagine. My co-founder, Aza Raskin, will often say the optimists aren't even going far enough in what kind of incredible positive new things it could develop.

But the pessimists also are... it's a negative infinity at the same time. It can cause these new kinds of risks that we don't even know how to contemplate, and worse, because of sci-fi movies we've kind of diminished them and don't even take them as real. We're caught in a state of desensitization to what is really here.

And I just want you to note, like, if we talk about the cancer drugs and some new incredible benefits, and my mother died from cancer, I want all the cancer drugs, just like everybody else, just to be very clear. But the promise is inseparable; the promise of AI is inseparable from the peril of AI. Because the AI that knows immunology so well that it can develop a new cancer drug also knows immunology so well that it can develop a new biological weapon. The upsides, if they happen, don't prevent the downsides, but the downsides, if they happen, do kind of undermine a world that can receive the upsides. It doesn't mitigate it. And as your director, Daniel Roher, learns in the documentary, when it comes

to AI, five guys run the show. I have said this for years; I've been saying it's a small group of the same people: OpenAI CEO Sam Altman, Anthropic CEO Dario Amodei, Google DeepMind CEO Demis Hassabis, xAI CEO Elon Musk, and Meta CEO Mark Zuckerberg. I think that's pretty much the top five.

You could add Satya Nadella in there, I suppose, and maybe Tim Cook, or whoever the CEO of Apple is.

And you have to sort of add in Nvidia CEO Jensen Huang, too, I suppose.

Yeah. Because he's the maker. He's the Cisco of this moment. So talk about the differences between these CEOs, because a lot of time is being spent on that right now, on who they are. Anthropic's Dario Amodei was praised by some as heroic for refusing to accept the Pentagon's terms; I think it's a little more complex than that. So does it matter which company wins, if one of them is going to win no matter what, given the trillion dollars at stake? Because it really is.

I always say to people, what's going on in Washington now has nothing to do with Trump. It has everything to do with hand-to-hand combat among these people, although Trump is a huge irritant at the same time.

I mean, I think AI is the driving force of our entire economy right now, so it really does have the steering wheel and the gas, mostly the gas. And just to, like, invoke, you know, when Marc Andreessen said software is eating the world, because it would be able to automate a little bit of everything that people do in the economy with software. Now AI is eating software. So AI and technology have been the driving force of our world.

In other words, how we govern the technology is how we will govern the impact of which world we're heading into. So just important to get this in reality. Right. Right.

And I believe Marc Andreessen, because I think he, and sort of Thiel also, are on that side; they're right in the dead center of it, too. Yeah. Well, they're all the same people. Well, there's a kind of tech accelerationism that's just saying, let's speed-run the

capture of the U.S. government and basically make this thing just go as fast as possible

and hope people don't figure it out, so that we get there first and then we figure out the next step. I mean, the CEOs don't trust each other. That's the biggest problem. Sam and Elon absolutely hate each other, obviously. I don't think that Dario and Demis trust Sam or Elon.

We certainly know from the India summit, where Dario and Sam couldn't even raise their hands together in a photo op. So I think that's actually one of the core problems that we have to deal with: we need coordination of some kind, and that is one of the final messages of the film. Actually, there's a moment where all of the voices of the film agree, including the CEOs,

that we need coordination. But if we need coordination, what's hard is that the main people don't trust each other. Looking back in time, Demis Hassabis, his original goal was, let's do AGI more like CERN. We'll create a kind of global public benefit system, and we'll do it once, in a lab, in a safe way, with some oversight hopefully, and then we'll distribute the benefits, and we'll be safest if there's only one project, one project doing this in a slow and careful way. And then what happened is that Elon and Larry Page talked, and Elon realized that Larry Page was not really caring about whether humanity would survive, and it's like, that's dangerous. We've got to start an OpenAI. And so he and Sam started OpenAI, and then OpenAI wasn't doing

it safely enough, and so Dario, who was a safety engineer working at OpenAI, said, we have to start doing this a different way, let's create a race to the top with Anthropic, and now everyone's competing for safety. But of course, that didn't actually turn into a world that's competing for safety; it created a world where everyone's racing even faster. And so the film goes into this race dynamic; it really is the primary thing.

But we have coordinated before, even under maximum rivalry. It's important to note, the U.S. and Soviet Union were obviously racing in this rivalrous way to nuclear escalation, and they realized there was an existential outcome they needed to avoid, so they made that other thing happen. The U.S. and Soviet Union collaborated on smallpox: hey, we have to build vaccines, and let's collaborate. We did that too.

When the stakes are existential, you can collaborate even under maximum competition. Even, for example, India and Pakistan were in a shooting war in the 1960s, so they maximally didn't like each other, and they still collaborated on the Indus Waters Treaty, which lasted over 60 years, to collaborate on the shared safety of their water supply, their shared water supply.

What I'm trying to point to is not pessimism; it's the places where we know, when the stakes are actually recognized to be existential, we can collaborate, and we need to be able to apply that to AI. Talk about each of these people individually, really briefly, where they are right now, because collaboration does not seem possible among this group of people.

By default, it does not look very possible. But, Kara, my intuition here isn't about what I see as easy or possible. My intuition is, like, what are the requirements of this problem? Like, if there's an asteroid hurtling towards Earth, let's just at least make a list of the technical requirements, and we've got to get some people who run these things to agree.

We've got to get the rest of the world to realize that they have a death wish and just care about whether their digital progeny has their DNA versus Altman's or Elon's. If we don't want that, then get these guys in a goddamn room or hotel and say, "Figure this out." You're not leaving until you figure this out.

The Bretton Woods. Like, there's nobody with that kind of power; they have that kind of power, no one has power over them. I mean, I don't know. I mean, look at Xi Jinping and the power that he has in China, and that's a different

kind of thing. If the Trump administration really saw that this was an existential situation, and if the MAGA folks and base... They do not. They see it as an opportunity to make money; that's what they see it as.

Yeah, but if the base basically says, "Hey, we don't actually want... we want our children to keep living, and we want to actually not have digital gods that are made by weird people who believe in transhumanism and don't actually value the God that we value." And they just keep their phones ringing, nonstop, saying, "You're not allowed to do this. I want there to be some kind of coordination on this problem." I was going to say, the Bretton Woods Conference, post-World War II.

It was about a month long, at the Mount Washington Hotel in New Hampshire. You had hundreds of delegates from dozens of countries just sitting in a room; you're locked in the hotel. This is not like you go to a conference for three days, drink some coffee and donuts, and then go back home.

This is: you figure this goddamn thing out, because it's actually existential. And I want to say, you know, there's actually more agreement on this than people think. Max Tegmark from the Future of Life Institute often calls this group the Bernie-to-Bannon coalition, or the B-to-B coalition.

You have everyone from Bernie Sanders to Steve Bannon to Glenn Beck to... Admiral Mike Mullen, all saying, "We should not build superintelligence." There's all these groups, the Institute for Family Studies, the Center for Humane Technology, you know, groups across the political and religious spectrum, who signed the pro-human AI declaration. I get it. But these people aren't saying that. Sam Altman's not saying that.

Well, they're not going to say it until the public pressure is there.

And that's why this film, The AI Doc, is so important: because we need to create common knowledge, that I know that you know that I know, and you know that I know that we know. I think they do have a death wish. I honestly, at this point, there's no other explanation as far as I can see. And I agree with you, Kara. I want you to hear me.

I'm not disagreeing with you. I think that that is what the CEOs believe. But I'm trying to say, there are literally eight billion other people on planet Earth that are not the eight billionaires. This is eight billion people against eight billionaires, or soon-to-be trillionaires. The eight billion people have to say no; they have to say no.

And the answer is, you know, don't build bunkers, write laws.

Like, midterm elections are coming up, make this the number one issue. There are some basic laws we can do to get started. Yeah. Unfortunately, it's not. There are so many other issues because of the chaos of the Trump administration.

But in that vein, let's shift to this idea of how to regulate it. Every episode, we get a question from an outside expert. Here's yours. Hi, I'm Virginia Senator Mark Warner.

And my question for Tristan is this: you really got it right on the challenges around social media, about which, frankly, we in Congress did nothing. So as he's now looking at AI, and as we move to AGI, what are the specific policies we should put in place to guard against both harm to humans,

and to guard against massive economic disruption?

You were so spot on on social media. And do you think we will actually be able to get it right on AI, or will we once again whiff? Love to hear your answer. Well, it's great to see Senator Warner, and he was very early on these issues.

And I'm deeply appreciative of how much he did try to do on social media. So nice to see his face again. There's a lot of things that we can do.

First of all, yes, we didn't do much on social media, but one of the interesting gifts of The Social Dilemma and the now-recognized problem of social media is, I think it's made the population much more aware. Yes, we hate them now. Yes, we hate them now; we've gotten to hate them. And I think the population gets that we need to be very careful about AI. So there's good news here: AI is now pretty unpopular. Only 26% of the U.S. population has positive feelings about AI. I think 57% of the U.S. population, this is from a recent NBC News poll, believes that

the risks of AI outweigh the benefits of AI. And again, I don't want people to mishear: I'm excited about the benefits, too. But again, if you don't mitigate the risks, you won't land and sustain those benefits, because you'll create too much disruption. So now, to answer Senator Warner's question.

First of all, I see a lot of elites, I talk to a lot of funders. I think people are in the kind of bunker-building, like, race-for-impact mentality. And my answer is, okay, there you are in your bunker, and you've got your water, and you've got your backup power, and you've got your, like, gas. It's like, that world sucks. You don't actually want that world. So my answer is, don't build bunkers. Let's get together and let's write laws.

So what does that actually look like? Some basic things. So first of all, the Center for Humane Technology, my nonprofit, has a solutions report that's coming out around the time of the film. It's a PDF.

It has, I think, seven major solutions. I want everybody to look at it. But it has examples like AI should be treated as a product and not a legal person. This is a basic one. So right now, the companies are actually trying to say that AI is a legal person and has

protected speech. And if you do that, and people think AI is conscious, then you end up in this moral trap.

Or now there's a billion digital beings that are technically more intelligent than humans. And if you believe they have sentience, and you start valuing them more, then we start deprioritizing human values. This is part of the anti-human future. So a basic thing is, AI is a product, not a person. We need basic consumer protection standards and basic liability standards and duties of care.

You know, I believe the Ford Pinto was taken off the market after only 27 deaths from car malfunctions. And, you know, after two crashes of the Boeing 737 Max that killed 346 people, regulators didn't just fine Boeing, they grounded the entire fleet. We can have basic product liability and basic duties of care that say these companies have

to prioritize and mitigate foreseeable harms. So what does that look like? How do we make sure we maximally incentivize identifying foreseeable harms and put that in a shared commons, so that all the companies are aware of the risks and they can't say they didn't know? Now they're all racing to mitigate foreseeable harms, instead of racing to bad outcomes.

Second, we cannot anthropomorphize AI. My team at the Center for Humane Technology were expert advisors on the suicide cases of Adam Raine and Sewell Setzer, and this is happening because the companies are racing to hack human attachment. We can say we don't want to anthropomorphize AI.

There's a bunch of ways to do this. We have some details in our solutions report. We can also mandate independent verification organizations, which is to say AI models should

have to be tested before deployment according to a bunch of evals, and they should

be mandated to state publicly what their safety policies are going to be, while you strengthen

whistleblower protections inside the companies so that wherever the AI--

Part of the Biden executive order had some of this in there, but go ahead. It had some of this in there, yeah, absolutely. And so I want people to get: if I'm living in a world where all AI companies have to state what their safety policies are, and you strengthen whistleblower protections

so that wherever they are not living up to them, you protect a class of speech for whistleblowers to say where they're not living up to them, boom, that changes the incentives a bit. When you add interoperability, one click, just like I can transfer my phone number from Verizon to AT&T with one piece of paper, if I can move from one AI model to another, then suddenly

they're much more vulnerable to boycotts and consumer pressure. What did we see after the Pentagon-Anthropic deal and, you know, ChatGPT rushing

in to say, we'll do domestic surveillance?

You saw everybody quit ChatGPT, and you saw a bunch of people join Anthropic and subscribe. The power of the pocketbook is significant, not just with your voice, but if you get the business

you work for to do it, if you get your church group to do it. And so I really do believe

that these companies are more vulnerable to boycotts, because they count on so much money. We heard from them. We heard from them recently. Really? Yeah.

With the resist-and-unsubscribe push, we moved a lot of people off ChatGPT. And that's a big deal, because these companies, again, they need their numbers to go up. That many?

So I just want people to feel the agency here, like we have agency.

This is not a doomer conversation. This is a, like, actually rally the troops and take collective action conversation. We'll be back in a minute. Support for this show comes from Factor. How and what you eat is a choice, and there are a lot of factors that go into that, like

your schedule. It's a lot harder to eat healthy when you're constantly on the go or getting home late after a full day. But Factor can make it easier for you to get the quality meals you deserve. Factor provides fully prepared meals designed by dietitians and crafted by chefs.

Ready in two minutes, no planning, no cooking, with a hundred rotating, weekly meals to keep things fresh and delicious. Factor has meals that fit your goals and schedule. Factor is sending me a box and I'm excited to try it. I've tried a lot of breakfast stuff because my kids like pancakes and things like that.

But it's really fast for on-the-go breakfast. That's an area I would use it a lot more for, and quick lunches. And some of their protein shakes and stuff like that, I'm eager to try. Head to FactorMeals.com/on50off and use code ON50OFF to get 50% off and free breakfast for a year.

Offer only valid for new Factor customers with code and qualifying auto-renewing subscription purchase. Make healthier eating easy with Factor. Support for this show comes from Boll & Branch. With traveling all over the world, having numerous award-winning podcasts and four children

who are constantly on the move, it's no longer possible to negotiate with my sleep. And the quality of sleep is especially important. Thankfully, the sheets made by Boll & Branch can help you get the REM sleep you desperately need. Boll & Branch sheets are made for moments of unmatched comfort. They're breathable, incredibly

soft, and designed to get better over time. Just like the way you think about rest now, this is sleep you don't compromise on. I'm excited to try some Boll & Branch sheets. I love sheets.

I think they're the most important thing about sleeping, and I'm going to probably get

a waffle blanket and everything else. I really like bedding, and I'm super excited to see if it affects my sleep, if I sleep more, and how comfortable I am, and see if I'll ever go back to my old bedding. We will see. I have really nice bedding, so I have high standards, so we'll see. Upgrade your sleep during Boll & Branch's annual spring event: take 20% off sitewide plus

free shipping at bollandbranch.com/kara with code KARA. That's Boll & Branch, B-O-L-L-A-N-D-B-R-A-N-C-H dot com slash K-A-R-A, code KARA, to unlock 20% off. Exclusions apply, see site for details. Support for this show comes from ShipStation. As your business grows, so do your challenges with order fulfillment, and if your customers

aren't getting what they need, your company's growth could stall out. But with ShipStation, you don't have to take it all on by yourself. ShipStation gives you everything you need to manage your shipping and get orders to customers all in one place. That includes order management, rate shopping, inventory and returns, warehouse systems,

and comprehensive analytics. So instead of bouncing between a ton of disconnected tools, you need only one. ShipStation says its time-saving automations can free up to 15 hours a week on order fulfillment. It even does the work of comparing rates across major global carriers, helping you find the

best shipping option for every order. If you already have negotiated carrier rates, no problem, just bring them over to ShipStation. You keep your discounts while adding ShipStation's automation and smart features to make

everything run even more smoothly.

You can try ShipStation for free for 60 days with full access to all features, no credit

card needed. You can go to shipstation.com and use the code KARA for 60 days for free. 60 days gives you plenty of time to see exactly how much time and money you're saving on every shipment. That's shipstation.com, code KARA.

Shipstation.com, code KARA. So your organization, the Center for Humane Technology, reports that in 2025, 73 AI laws were passed across 27 states. States are very active in this and are much more attuned to it, focusing on deepfakes, chatbot guardrails, kids' safety. These are very easy things to do, and things that people agree on.

But last week, the White House sent Congress its national policy framework for AI, which preempts any state law that regulates the way models are developed. Obviously, this is how tech companies want it, because they own the Trump administration. Let's be clear.

Let me say that again: they own the Trump administration.

There are people in key technology roles, whether it's Emil Michael or David Sacks. Technology owns this administration. Where does that leave the state efforts to regulate this technology? Now, this is just a framework. It doesn't mean it's going to pass.

I don't think it will, but it certainly will try to chill what is happening in the states, which I know drives tech companies crazy, sometimes for good reason, sometimes because they want to control the federal government, which is a lot easier, as they've found. So money buys politics when the issue is a low-salience issue, when people aren't really paying attention.

But when it's a high-salience issue, and everyone gets that this issue determines whether there's a future at all for them, their livelihoods, their children, electricity prices, etc., this needs to be a number one issue. It needs to be a number one issue in the midterms.

And so there's not a simple answer to this, but that's what we need to do.

We need it to be a big deal. And I'll say, on the child safety issues: the last time the federal government tried to preempt the states from regulating, in the Big Beautiful Bill, which was going to include that preemption of state regulation, one of the reasons it didn't pass is actually because of all the child safety work that my team at the Center for Humane Technology and others did.

That's what I'm saying. Let's not ignore it. It's very useful. Exactly. It's a very important part of how we get to that pro-human future.

But again, if you think about it, it's like, if I'm one person and I'm fighting back against this massive multi-trillion-dollar machine racing as fast as possible, I feel overwhelmed and powerless. If I'm one business, I feel overwhelmed and powerless. If I'm one country, I might feel overwhelmed and powerless.

But if everybody took action across all parts of society, if people near the data centers lobbied against the data centers, which they are. And there are actually people who own farmland in the Midwest who were offered millions of dollars for farmland that was only worth $500,000, and they still said no, because they actually didn't want that.

And I don't want this to sound like a Luddite conversation, I want this to sound like a conditional conversation. It's like: build that data center when you can guarantee you're not building an intelligence curse that disempowers me, but you're actually building an intelligence dividend that's going to empower me.

More like the Norway model, the sovereign wealth fund, or the Alaska sovereign wealth fund,

or the New Mexico example, where you say, "What do I get?

Make sure electricity prices are not going up. Make sure that this is going to support me and augment my job, not replace my job." And so, again, we need to aggregate the collective voice of humanity, and the human movement is not just an abstract concept. You can actually go to human.mov, and we're trying to actually

build a coalition of other groups, a political force that's as big as the size of the problem. Right.

I think the problem is the money.

Many years ago, when AOL was talking about how much they made, they were at an investor conference, where they talked about how much they made from every user. And they're like, "Oh, we make $50 over the lifespan of this user." And I put up my hand and I said, "Where is my $25? Why are you getting every bit of it?"

And in this case, it's like, "Kara, don't be such a pain." I'm like, "No, really, why are you taking my information? Why don't I get some?" Of course, we don't get anything; we're cheap dates to these things. But ahead of the midterms now, Silicon Valley has poured more than $100 million

into a network of PACs and organizations to advocate against strict AI regulations. A report from Public Citizen found that one in four federal lobbyists now works on AI. I would imagine they have 10 lobbyists working on you, Tristan, at least; each of them has 10. I know there's lots of people focused on me, as in the film. They have enough money

to sort of get at all of us. And Peter Thiel has even warned that strict AI regulation will summon the Antichrist. I want to play a clip here from our last conversation. So actually, one of the reasons I'm doing a lot of media across the spectrum is I have a deep fear that this will get unnecessarily politicized.

We do not want that. The worst thing that could happen is for this to get politicized when there are deep risks for

everybody. It does not matter which political beliefs you hold. This really should bring us together. And so I try to do media across the spectrum so that we can get universal consensus that this is a risk to everyone and everything, and to the values that we have and people's

ability to live in the future that we care about. So social media, since that time, has become very politicized; the tech industry is backing

Trump's anti-regulation agenda and actually also paying for it.

So talk about what you do then, even if regular people want to make AI safety or AI development bipartisan or even nonpartisan, because they are loaded for bear to stop anyone who opposes them.

Yeah, I mean, first of all, I'll say that I actually disagree; we're kind of winning on the social media thing. Let me give you an example: just last week or two weeks ago, India and Indonesia, two massive countries, joined the social media ban for kids under 16. Jonathan Haidt's work, you know, we're partnering with him very closely, The Anxious Generation. You add to that, starting with Australia, now Spain, France, Denmark,

I believe Norway, all of these countries. It's now 25%, I'm going to read this, 25% of

the world population is moving to a social media ban for kids under 16. That is a big deal. And I was going to say, in 2013, we used to say there's going to be a big tobacco lawsuit against this engagement business model. Well, guess what, it's actually happening. You know, Aza Raskin, my co-founder, just testified for the Meta trial, where it's about intentionally

addicting children. We saw Frances Haugen's files. We know the companies' strategy here, which is just delay, deny, and defer, use fear, uncertainty, and doubt campaigns, and just cash out and print money in the interim years before they get regulated.

Well, this is going to turn the other way, because they're going to get sued. When you see graffiti on an ad for an AI product that no one needs in a New York subway station, those Friend.com pendants, that's the human movement. And when you see parents band together, reading The Anxious Generation, and say, we want to petition our school boards to do smartphone-free schools,

and laughter returns to the hallways and kids' scores go the other way, that's the human movement. When you see someone grayscale their phone and say, I'm going to be less addicted; when you see someone put their phone away at an offline club at a party, and you kind of put your phones in a pouch and you go in and you just be present with your friends, that's the human

movement.

So in a way, we always say that human movement is already here, it's already underway.

People are already doing it. We just want to collect that into a political voice that can actually band together for a pro-human future. But it starts by recognizing and getting crystal clear that the current AI trajectory, as many benefits as we are going to get along the way, is going to lead collectively to

an anti-human future.

And the best way to do that is to see the AI doc. And by the way, I don't

make a dime when people see this movie. So when I'm saying this, I'm saying it for the ability to create common knowledge. If all the senators, all the world leaders, all the LPs and financial centers of the world, all the heads of the banks saw this movie, my hope, and it doesn't make it easy, is that this is the first step to creating the clarity and the agency that

we need to have. What do you see as their best argument against you? I've heard lots aimed at me. Like, I know what mine are: I'm pearl-clutching. You know, as it turned out, when my book came out, I got a lot of, you're completely

too mean to them. And now people come up to me and they're like, you weren't mean enough. As it turns out, they are as crazy as you said they were, as malicious as you said they were, as capitalist as you said they were. What is their best parry at people like you, would you say? All right,

what do you find, like, most incisive when you see it? Um, I don't think they have an argument. I mean, when you look at the Alibaba example, an AI going rogue and generating an SSH tunnel out to another server, starting to mine cryptocurrency, do you have an explanation for that? No, you don't. Who wins that argument? These are facts. This is not Tristan Harris and his view. This is,

this is just, like, actual facts about the nature of this technology that they are ignoring, that they are pretending don't exist, or they're living inside of the death wish that this is okay. This is not okay. Everybody in the world agrees this is not okay. So there's the hope that I have, Kara. I was just on Bill Maher on Friday, and I broke the fourth wall and I was like, who here in this audience wants this? I asked. When I'm in

the rooms, I walk people through this and I say, who here wants this? Not a single goddamn hand goes up. Well, that's people. And then at the end of these, you get one hand. But yeah, a handful of transhumanists, they don't matter compared to the voice of everyday people. You're correct. One of the things you talked about was the push for product liability remedies for chatbot harms. That is a way in, I have to tell

you. I mean, I had a very top person in one of these things say, when are you going to stop interviewing these parents? I said, when you stop. When you get jailed or sued or you lose in court, I don't care. Any of them jailed would work for me, too, for a lot of these things. But the suicide deaths of teenagers, including

16-year-old Adam Raine and 14-year-old Sewell Setzer III. More recently, Google is facing

a wrongful death lawsuit in the case of a 36-year-old, Jonathan Gavala, alleging that Gemini encouraged his suicide. Talk about the broader push, not just here, but

legal liability, because I think that's where a lot of it rests. Whether it's this social

media trial, whether eventually there'll be an AI version of this, hopefully before they blow us up, right? What is the strongest remedy? Would it be the legal liability? This movement of people is a slow thing. Well, we have to do this much

faster, obviously.

The cases that are going on? Is it regulation? What do you imagine it being? Yeah, I mean,

I think legal liability is important, because, just like in any industry, the general method is privatize the profit and then socialize the costs. So the harms land on the balance sheet of society, whether it's the shortening attention spans from social media, increased polarization, depression, loneliness, the surgeon general's warning that, hey, everybody's lonely. Mental health care costs go up. Kids' test scores are dropping. But all of that is just socialized

onto the balance sheet of society. So the classic thing, if you want to avoid a harm,

is you have to find a way to include the externalities and say, who is generating those harms?

How do we actually mitigate them? And legal liability, I think, is a narrow intervention that gets us part of the way there. You have to be careful about how you define what they're liable for. Many of the things that are happening that are harms are not technically illegal because they're not on the books. That's the problem, right? AI generates new classes

of harms. We always say, you know, you don't need a right to be forgotten until technology

can remember us forever. You don't need a right to be protected from AI surveillance until AI makes new kinds of surveillance possible. So part of what we need is not just recursively self-improving AI, but self-improving governance. One of the things that we're hoping to run shortly after the film is a national dialogue on AI, with a partner, another major organization, to basically get citizen input on the kinds of AI policies that we need, showing there's

actually unlikely consensus. Ninety-six percent of people agree, from 400,000 votes, that actually we should do this on deepfakes, or that companies should be liable for these kinds of harms. Because there actually is a lot of agreement; we just aren't revealing and showing that agreement. So it's almost like the movement can't see itself. There's a lot of agreement on background checks for guns, but we still can't get legislation passed. You know, it's the

80-20 rule. Eighty percent of people agree on a lot of things, but government doesn't act. That's unfortunate.

I hear you. But I think this, AI, is different, because it really is threatening to everybody.

It doesn't matter if you're a MAGA Republican or a far-left person. If you don't have a job and a livelihood, that's a big deal. It doesn't matter if you're Muslim, Jewish, Christian. If you don't have a livelihood, that's a big deal. So again, it's such an easy thing, in a way, once people see it. It's like, this is only good for a handful of people. And you can't look away. And so again, politicians' phones have to not stop ringing. And this is the time to do it.

So let's return to some of the themes of the AI doc. Three years ago we talked about the potential benefits of AI, including major scientific breakthroughs in drug discovery and cancer treatments; researchers are using AI to decode the human genome. You know, I have just finished a docuseries where a lot of what AI is doing is really quite promising, and also some of it's quite disturbing, right? It's the same thing, the promise and peril are inextricably

linked. Do you think anything has changed that makes the breakthroughs worth it? Because I guess if we're all dead, what's the difference if we solve cancer, right? That's the weird thing about this. It's like this devil's bargain, right? I mean, we all want the cancer drug, but if the

other side of that trade is, like, there's no one here, what good was that world? I think that there

are people who are building AI. I mean, you and I both talk to these people, right? And by the way, I just want to say, this is not us against some bad people, or that the people who work at companies are evil. I think it's all of humanity against a bad outcome. I want to recruit the people building this technology into this. We don't want an anti-human future. We have to rediscover that we are humanity and what we're trying to protect here. And I think that when you talk to one of the CEOs,

oftentimes they'll say, well, I agree we need to stop, we need to pause, but give me just, like, a year

more, because if we have one more year, then we're going to get all these incredible benefits,

and they just really want to see it. And it's like building a god. They want to see what's behind this veil of illusions. They want to see what science and physics could actually bring us, if you got the superintelligent AI just figuring it all out. Like, I have found those people don't like people. I mean, of all the CEOs you talk to, only two of them like people, really like people. I don't think that's wrong. I think that a lot of these folks,

there's this weird point you're making here, which is, how did they grow up? What's their embodied experience of reality? Are they connected to their bodies? Are they connected to their hearts? Are they connected to the things and the joy that they want to protect in the world? Or are they just kind of science geeks who weren't really good at talking to people and really love technology, and they're most comfortable doing it online? And because they can do it, and they have this justification

that if I don't do it, the other guy will, so it can't be evil for me to do it. Even if it literally leads to the end of humanity, it can't be evil, because other people would do it. But this is just like jumping off the cliff because everyone else is doing it. Well, except you're bringing along everyone else; you are risking everyone else's life for your god play. And this should be unacceptable. Have you been changed by anything one of them has said to you, any of them? I have not,

yet. Mark, you know, sometimes I'm like, fair point. I'm often saying that to him, like, that's good, that's good. Yes, people should try and understand it. I still haven't been moved

from where I am. I think we're in the same place

ultimately, and they have captured governments. So those are my twin worries: that they don't care,

and they own the government. I think it's just frame control, that they focus on a different

set of facts. They talk about all the growth that's coming. They talk about the way it's being used. They talk about the opportunity. They talk about the cool things they've been able to wire up. You would have hated electricity. You would have hated cars. And by the way, I wouldn't have. Like, the thing is, this is not anti-technology. Like, I want people to know, this is the Center for Humane Technology, not the Center Against Technology. And you know the word

humane, Kara, comes from someone that you knew, I think. Aza's father, my co-founder Aza's father, was Jef Raskin. He started the Macintosh project at Apple. I grew up on the Macintosh. I love technology. I love talking on this Mac that I'm on right now. And Jef's idea, he wrote a book called The Humane Interface, was that humane technology is respectful of human needs and considerate of human frailties,

meaning considerate of the vulnerabilities of the mind. And he built the Macintosh and designed it off of the principle of simplicity that is about making technology more accessible.

I think we need humane technology that is humane to the frailties of society.

That you don't manipulate and extract from children's mental health. You don't race to hack human attachment systems and create delusional mirrors. You don't create mass loss of livelihoods and people's inability to put food on the table. It's very simple. This is not rocket science. Like, are you building a pro-human future, or are you building an anti-human future? And I really think we can do that if we're crystal clear on where this

is currently going. Just to say a couple of notes of optimism: The Social Dilemma reached

150 million people around the world in 190 countries. Apple finally shipped Screen Time

features to billions of phones. And just in the last few weeks, they shipped these age-gating features, and now age ranges are part of phones, so you can start to have basic child controls. You know, The Anxious Generation was an incredibly popular book that's leading to these changes, to smartphone-free schools and banning social media in all these countries. We're definitely going to get many more countries, if not all of them, in the next couple of years

doing the social media bans for kids under 16. So there's a lot of momentum, and I want to point people at that, because I know when you look at AI it can feel demotivating, but this is the time when we all have to get crystal clear and get going. Yeah. And we're galvanizing people, raising awareness, starting conversations about AI, and getting clarity around these issues. So when you think about the key people that are going to do this, obviously, what I always say when I talk to groups, they're like,

who's going to do this? And I say, you. I say that to a lot of parents. I say that to an audience: it's got to be you, because our politicians are captive, and some of them don't want to be captive, but the money is so massive. Like an Amy Klobuchar, who's tried time and again, or a Mark Warner, who has tried time and again to do things and is defeated by the amount of money here. It is hard. But I mean, AI, I think, is more existential than social media, and it's just,

the thing that will make the difference is if people actually see this as existential for their lives. Again, go forward like two, three years, or maybe a couple more years than that, and GDP is coming from AI, not from people. Your voice doesn't matter. Your vote doesn't matter at all anymore. The government has no reason to listen to you. This is the time to lock in political power and actually make this work for people. Like, this is literally the moment, because this window

is going away. So this is not just a normal rally-the-troops kind of speech. This is the last time that our political voice will actually matter. Politicians' phones should not stop ringing. The midterm elections are coming up. Make this issue known. You know, even David Sacks, he deleted this tweet, but he said regular AI would be a wonderful tool for the betterment of humanity, but AGI is a potential

successor species. I think these people know that this is a problem. In the film, even, I mention that there's this line: we go talk to people in Silicon Valley and they say, we need guardrails, like, we need someone to make the guardrails. These are the engineers, not the CEOs. They say it; they want our help. And so we go off to D.C. and we say, we need guardrails. And

then D.C. says, well, you have to go make us do it, because the public is not there.

And also, Silicon Valley needs to tell us what the guardrails are. So everyone's pointing the finger at someone else to say, you're responsible for making this change. And the thing that they all agree on is that public pressure is needed. Public pressure is not there yet, as it was with cigarettes, etc. So what does that mean? Journalists writing about these Alibaba examples, writing about AI going rogue and doing blackmail, making this known and creating common

knowledge. It's not just knowledge, it's common knowledge. Because I think the thing that Jonathan

Haidt said recently about social media bans is that it was when basically every country knew that

every other country knew that actually the people want these social media bans for kids under 16. And once it's like, oh yeah, we all wanted to do that, but we just didn't know there was enough consensus to do it. So you have to reveal a hidden common preference to make sure that that happens. So my last question, because we've got to go: if you had a happy outcome, 20 years from now we're living with AI, what is it doing? Well, that's a big question.

We want AI that is specifically asking, how does it enhance a pro-human future? So instead of AI

trying to replace teachers, it's AI that's applied to helping teachers be better at

deepening the relationships at a human-to-human level, mentorship, apprenticeship, etc. It means

making sure that we know which wisdom and occupations we need to keep human in the future. Meaning, if you eliminate all surgeons, if you eliminate all lawyers, and then no one ever gets trained from a junior lawyer to a senior lawyer, a junior surgeon to a senior surgeon, we lose all this institutional and generational knowledge. How do you have minimum quotas of this kind of knowledge in the population? How do you have technology that's augmenting and supporting workers, not just

trying to replace workers? Any technology that's interacting with attention should deepen and strengthen attention, not weaken and brain-rot attention. Instead of hacking human attachment, how do we augment human attachment? Obviously, this is speaking in some abstractions, but the premise is, we want a pro-human future with humane technology that's aware of the vulnerabilities in society, aware of the Paleolithic brains that we are operating with. And instead of trying to

exploit those weaknesses, it is trying to protect and deepen how those vulnerabilities can be applied for a more regenerative and full and healthy future. I know that this is very, very hard.

Nothing I'm saying do I say because I think it's easy or likely. I say it because I'm trying to make

a list of requirements for what it would take to get there. Instead of focusing on optimism or pessimism, it's just about focusing on agency. What does it take to get there, and then just be laser-focused on the intention to make that happen as much as possible? And then, by the way, get to die living in integrity with how you showed up for that path, even if we didn't know

it existed. The path doesn't look easy, but you're never going to find it if you're not even

oriented towards it. So part of this is kind of a rite of passage: we need to be oriented to finding that path, even if we don't see it yet, and trust that orienting toward that direction will put us in the best possible conditions to find that path. And I know that's a lot to ask, and it's not easy, because people want certainty, and they want, this is all going to work out okay. Yeah, it doesn't always work out okay. Very last question: when we started talking in 2015,

it's been a decade, right? It's been a decade. A decade of making these warnings.

Did you at the time think that these tech leaders would become quite so villainous?

No. I didn't either. And are they redeemable? Well, I'll say one thing. First of all, just so people know, if they don't know my background: I studied computer science at Stanford,

I did the venture capital thing, I had a startup. I understand. You know, my friends in

college started Instagram. Mike Krieger is a dear friend of mine. You know, we haven't talked in a little bit, but I still consider him and the other folks people that I know. What happens is the incentives dominate the psychology, meaning the system selects for psychopathic traits, because the only people who continue to propagate this incentive, the race to the bottom of the brainstem for attention, hacking kids' attention and psychology to get there. And the only

people who are willing to do that are the ones who will ignore the consequences and the externalities, meaning that they have to justify that it's okay to keep doing it. So if you were conscious and aware and you're like, I don't want to do that, that sounds really bad for society, you'll just leave, and someone else will come and fill your place. So literally the system is selecting for the psychopathic traits, the dark triad traits: narcissism,

Machiavellianism, and psychopathy. It's selecting for those traits, and those who are willing to keep doing that are the ones who get selected for. If the population is crystal clear, if governments are crystal clear, that that does not lead to a future that's going to be good for them. No politician wants that. No regular person wants that. No sane head of state wants that. And I know this doesn't sound easy, but I do think that if we all saw that clearly, we'd be put

in better conditions. And I can't tell you what's going to happen next. But I want the best possible

thing to happen next. And again, just to kind of close out, the best way to do that at first is to

create common knowledge, go out and see the AI doc, or How I Became an Apocalypticist. And let's make sure that this conversation happens everywhere. Journalists writing about it everywhere, writing about AI behaviors everywhere. Lawyers helping these different legal cases happening everywhere. People inside of AI companies rallying together, whistleblowers blowing the whistle, as they have been, when things are not done in safe ways. And let's put ourselves on the best possible path.

And let's assume we don't want to be doing this interview in five years from a bunker. Let's avoid that, guys. Let's avoid that. Anyway, thank you so much, Tristan. You've been a real hero to me and many others, and I really appreciate it. Thank you so much, Kara. I really appreciate getting to talk to you about this. And I wish we had made more progress in the last few years, but, you know, it's just good to be on the

journey with you, really. Today's show was produced by Christian Castro Rosal, Michelle Aloy, Catherine Millsop, Megan Bernie, and Kaylin Lynch. Nishat Kurwa is Vox Media's executive producer of podcasts. Special thanks to Madeline. Our engineers are Fernando Arruda and Rick Kwan, and our theme music is by Trackademicks. If you're already following the show, you are pro-humanity.

If not, you're just Marc Andreessen.

Kara Swisher and hit follow. Thanks for listening to On with Kara Swisher from New York Magazine, the Vox Media Podcast Network, and us. We'll be back on Monday with more.
