The Rest Is Politics

What If the AI Revolution Isn’t Real?

1/25/2026 · 19:27 · 3,179 words

Is AI a revolution, or just another slow-moving general technology? Are we mistaking hype for inevitability? And can a technology this unreliable really reshape civilisation? Rory Stewart and Matt...

Transcript


Thanks for listening to The Rest Is Politics. To support the podcast, listen ...


Now, let's get started.

So, on The Rest Is AI this week, we're back to a simple but pretty uncomfortable question, which is: might AI just be a normal technology?

Something that maybe really doesn't deserve the great fear and frenzy. And our guest today is Arvind Narayanan. He's the director of Princeton's Center for Information Technology Policy.

And he's a really interesting voice. He's a challenge to Yoshua Bengio, who you heard last week, because he thinks that we're over-focusing on the existential, end-of-humanity threat. But at the same time, he's a challenge to a lot of the people hyping the technology, because he's saying that in many ways this stuff is much less reliable than we think, and that even once it gets more reliable, it's going to take decades, not a few months, for some of these things to be adopted.

The change that AI is going to bring, therefore, is going to be much more gradual. So it's a very, very sane, thoughtful, challenging voice.

One that maybe isn't heard as much, because it doesn't necessarily suit either the end-of-the-world-is-nigh people or the AI-is-going-to-change-the-universe-tomorrow people, because he's suggesting that a lot of the issues are just around how humans do or do not adopt technology. Here's a taster.

And to listen to the full episode, sign up at therestispolitics.com.

There's a really interesting thing that one observes as an outsider, which is: you'll get Elon Musk saying, "There's a 20% chance it's going to destroy humanity." Sam Altman says it's going to end the world, but in the meantime it's going to lead to some great companies with great machine learning. And then suddenly Gary Marcus will pop up and say, "Yeah, this is all completely overblown, there's only a 1% chance it's going to eliminate everybody," right? But of course, if one thinks about this, I think Yoshua's point was that it doesn't matter whether it's a 1% chance or a 0.1% chance.

What the hell are you guys thinking? I mean, you're gambling, right? At some level, you're taking a risk that you wouldn't take with nuclear waste in your back garden. I mean, you wouldn't be reassured by me saying, "Rory, there's only a 0.1% chance that this nuclear waste is going to wipe out your family." You'd be like, "What are you doing? Stop."

One thing I strongly believe is that we should not be thinking about this in terms of probabilities.

The moment you're arguing about what the probability is, you're already down a very confusing path that can only lead, I think, to misleading guidance.

So I've looked at the most sophisticated effort that we have for estimating these probabilities. It was led by the Forecasting Research Institute. I actually work with these folks; I know them well. They're incredibly smart, and they did a really well-thought-out effort to get dozens of expert forecasters to discuss, to try to change each other's minds, and to provide these probabilities. They put out a 753-page report. I've read that 753-page report. You could have a room with these so-called superforecasters debating what the future of AI is going to be, and you could have a room with a bunch of people who are high debating the same thing, and you couldn't tell the difference.

And this is no slight to the superforecasters, who, mind you, are incredibly smart people. But the thing is, we have no empirical basis for estimating what these probabilities might be. The arguments people are giving are things like, "Oh, AI might decide to colonize space instead of Earth, so even if we had superintelligent AI, maybe the probability is not as high as we think." Or someone else thinks, "Oh, AI might decide that killing all humans will make the planet cooler, and that helps computer chips work better, so maybe it will decide to kill all humans."

So, you know, they're listing a bunch of reasons like this, assigning some nu...

Right? So this is the best method that we have. So these probabilities are all bogus; that's my strongly held view. We should not think in terms of probabilities. I do think the risks are potentially real. I'm not advocating ignoring the risks, but I think the right response cannot be "let's try to stop all this." There are two big problems with that.

One, it's just not going to work. The only way it could work is if you have an authoritarian world government that can control every AI developer everywhere.

Can I pause on that point for a second?

Yes, but that's an empirical claim. I mean, at the moment, these large language models can basically only be run by some of the largest, wealthiest companies on Earth, in enormous data centers with enormous compute power. So it seems plausible, at the moment, that if President Trump, driven by Christian nationalists, and Xi Jinping wanted to simply shut down the large language models of two Chinese companies and a small handful of American companies, we would cease to have these LLMs operating pretty quickly.

This is not true at all. It might be true that the absolutely most powerful frontier models can only run on powerful GPUs, but you have slightly smaller models, maybe one step below that, which can run on consumer-grade hardware, and the cost of running these models is dropping by somewhere between a factor of 10 and a factor of 100 a year. The cost is dropping very rapidly, both because the hardware is getting faster per dollar and because algorithmic improvements are allowing us to squeeze more juice out of smaller models.

Again, to push this argument to the next stage: the claim only needs to be that the existential risk is posed by the frontier models. And if one could shut down the full infrastructure that powers the frontier models, then one has less to worry about from the models that you're talking about.

Again, I completely disagree with this. I think historically, this is very easily falsified.

When OpenAI built GPT-2, which was, you know, two generations before ChatGPT, which was when the world started noticing, they thought that model was so dangerous that they were not going to release it for people to download and use. And that's something that my grad students can build today, just for fun and learning, in a day or two, right? So historically, when we look back, our ideas about what constitutes the threshold level of danger have been comically off. And I don't think there is really any clear relationship between the power of a model, in terms of how computationally heavy it is, and what dangerous things it might potentially be able to do, in terms of enabling cyberattacks or the various things people worry about in terms of biorisk. Actually, some of those models can be much, much smaller and faster than these large language models, because those biological capabilities are not about language at all.

To understand this, the first argument is the cat-is-out-of-the-bag argument.

It's too late. The stuff is built. If these models are going to become deceptive killer robots, they're already out there and there's nothing we can do about it. And it's not just that: you can't even prevent these models from getting even better in the future unless, again, you have this authoritarian world government that stops even small teams of developers from acquiring consumer-grade hardware and doing research to improve the efficiency and capability of those models.

I want to say, I want to go on the record as being against authoritarian world government. I'm very interested to use this as a way to think about what we should do. And, you know, I'm a little bit biased, because I'm one of the many people in the UK who helped set up the AI Security Institute, which does these pre-deployment evaluations in UK government on security-adjacent risks, on a voluntary basis. There's no regulation that requires the companies to do that, although they have agreed to do it.

And, you know, my view has been, to your point, I guess in agreement with what you're saying: it's very, very hard to know. When people say we should pause now, we should stop now, I get the impulse, but how do we know? And I think governments having better knowledge of what the capabilities actually are, what it can and can't do, feels like a very valuable thing.

There are a lot of people who look at what we've done with that institute and say it's amazing that the government has built that capability, but it has no teeth.

But what do you think we should do? I mean, you referred earlier to the idea of having transparency requirements. Do you have a sense of what the right regulatory framework is, given both your skepticism about some of these scenarios and the fact that you're clearly concerned about some of them as well?

For sure.

And yes, I think transparency and knowledge are the most important things for now, but I do think there is a lot more that can and should be done.

One big area is developing AI for defense against these very risks that people are worried about.

And historically, this is always how it has worked. We're acting as if AI is a new thing, but AI has been gradually improving, out of the public eye admittedly, for decades now. And the reason things have worked out is because of the attacker-defender balance, and the efforts that we've taken to shift that balance in a healthy direction.

So when it comes to cybersecurity, for instance, automated tools have been in many ways superhuman at hacking for decades now.

And yet the world hasn't ended. In fact, things have gotten a lot better, and that's because companies developing software use these very same automated tools to find and fix bugs before they put software out there, before attackers have a chance to take a crack at it.

And that's the critical thing now that these AI models are becoming even better at automatically finding vulnerabilities.

Do we have the right incentives in place to ensure that software developers, as well as mom-and-pop businesses that might be deploying software, have the know-how and the funding to use this to protect their cybersecurity? Those kinds of things can benefit from government incentives and can be enormously beneficial.

And so the first disagreement you've clearly had with Yoshua and Geoffrey Hinton and others who were calling for a stop is your cat's-out-of-the-bag argument: it's too late anyway, the earlier models can do a lot of damage, and stopping the next frontier models is not going to make much difference.

The only point I would make, for people who are anxious about this, is: for goodness' sake, what is it about the Trump administration, or the financial incentives, or the psychological personalities of these people running these models, that remotely convinces you that they're incentivized in any way to really put proper safety measures in place, to use the AI to counter themselves?

Every time I meet them, they are in an arms race. They're saying: government, back off; government doesn't understand what we're doing; they're never going to understand what we're doing.

They don't know what they're regulating anyway. Get out of our way. Here we go. I've got to get there before, and then insert the name of somebody running one of the other models who they think is unbelievably dangerous and going to destroy the world. I've got to build my model before insert Elon, Larry, Demis, Mark, right? And if not me, then the Chinese, and who knows what the Chinese will do with it. None of this gives me any sense that we are in an environment in which these people, either they or the US government, are remotely getting behind safety.

I am not saying that we're in a good place right now with regard to regulation, both with regard to safety regulation in the sense of these catastrophic risks, and with regard to the kinds of safety issues we've already seen on a massive scale: some of the psychological issues that we've talked about, the use of AI models for making non-consensual nudes. These are problems that we knew were happening, the nudes one for something like six years at this point, and it took forever for policymakers to start acting on them.

The pace at which policymakers are responding is definitely inadequate. But I think at the same time the arms race narrative is a little bit oversold. I think what we're seeing in a lot of cases is that when regulators and policymakers in one country are falling behind, the harms are for the most part felt locally. For instance, in the US, people are realizing that the safety harms around the psychological impacts of large language models are affecting Americans, and now policymakers at both the federal and the state level are actually taking this very, very seriously.

And I think we're going to start to see a mindset shift: policymakers are going to stop buying this arms race narrative and recognize that a lot of these harms are actually going to be felt within their own countries. And the way to regulate is not just at the level of the model developers, and not just the companies such as OpenAI and Google, but also everybody else who is using these models within their own companies, trying to get ahead of risks like people handing off the running of their companies to large language models, for instance.

One observation I have on that, from my own time in government in the UK...

And typically, legislators tend to see it through what we might call an online-safety-type lens. When I was in Number 10, we would have these high-side conversations about AI and security, and then we'd go and see MPs who would worry about their kids, or the kids of their constituents, talking to chatbots. And I think I may have mentioned this on a previous episode, but in 2025 I went to a very interesting conference of faith leaders talking about AI.

They totally rejected the security framing and really wanted to talk about what this is doing to kids and their minds. And I suspect that over the course of '26 and '27, we might see this quite executive, security-dominated framing actually diminish, as some of these cases in particular that you've referred to come to the fore, and see legislatures get a bit noisier about these more online-safety-type harms.

That's exactly my prediction, yes. As these models get more diffused into society, we are going to see an increase in the salience of that set of concerns. And I think the good news is that a lot of the interventions, not all of them, but a lot of the interventions, for both sets of concerns are similar, and so I think the momentum around one kind of safety issue can also be used to address the other kind of safety issue.

If I were speaking on behalf of the Commission or European leaders, they would feel they have been making a good-faith attempt to work out how the hell you talk about these different types of risk and how you present regulation.

And the response that they have got from Silicon Valley, they feel, is one of unmitigated contempt: Europe is stuck in the Middle Ages, this is grotesque regulation, this is just another sign of a sclerotic, aging, useless economy, and we will use every tool and the power of the American executive and corporations to punish Europe for trying to regulate us, and we will use every lever we can to break those regulations open to let us in. And again, that doesn't make me feel that this is a culture that is very sympathetic towards people trying to think about safety or security.

I agree. I think Europe's approach is one that is democratic. It's not necessarily the only right approach, and I understand why the US has chosen a different approach. But I think the hostility of American companies and the government to Europe's approach has been really problematic, and it's not one that I support.

There's plenty more of that agreeable disagreement to hear; sign up at therestispolitics.com.

I'm Gordon Corera, a national security journalist, and I'm David McCloskey, author and former CIA analyst, and we are the hosts of The Rest Is Classified. In our latest series, we're going deep inside the 2016 election to reveal the true story of whether the Russians helped Donald Trump take the White House. This is the unbelievable story of how Russian spies first hacked and then leaked emails belonging to Hillary Clinton's campaign, how Julian Assange got involved with Putin's spies, and how 2016 marked the point at which the world changed forever.

Get the full inside scoop by listening to The Rest Is Classified, wherever you get your podcasts.
