(upbeat music)
- Hey, I'm Flora Lichtman, and you're listening to Science Friday.
The military use of AI is capturing headlines this month: the dust-up at the Pentagon, with AI company Anthropic out and OpenAI in, and meanwhile a new war where AI is in use. What do we make of all that? We knew just who to call. Karen Hao is a journalist covering AI.
She's written for The Atlantic and The Wall Street Journal, and she's the author of the book Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI, which takes you behind the scenes on the rise
of one of the most powerful startups ever.
Karen, thank you so much for being here. - Thank you so much for having me. - You are deep in this world. I have this feeling in this moment of AI's power and reach sort of snowballing,
but what is your impression of this time? - Oh my gosh, yeah, I mean, when I was working on my book and using the metaphor of Empire to try and contextualize
the sheer power consolidation that's happened
within these companies, I was not envisioning the fusion of this technology with the military and the alliance between Silicon Valley and Washington, and it just feels startling that that metaphor has come to be the only metaphor that we can really use
to understand this moment. - Well, it's not a metaphor anymore, it's literal. - It's literal, yeah, yeah, yeah, that's right. And I did not anticipate that happening. - Let's talk about the war in Iran.
Do we know how AI is being used? - There has been reporting from The Wall Street Journal and The Washington Post that says that Anthropic's model Claude was essentially used to analyze a bunch of intelligence data
and then identify targets to bomb. And The Washington Post specifically said that there were around 1,000 targets that it identified. And one of the things that's deeply disturbing about this is that large language models,
which are the technology behind Claude, are a very faulty technology.
They are not accurate. That's why sometimes,
when you're chatting with ChatGPT or chatting with Claude, and you try to get it to talk in more detail about something that you have expertise in, it even starts to make up that it knows those things.
And in the military context, like that doesn't go away. And so we had news reports about this horrific bombing that then happened to a school in Iran. And not just one bombing, but two.
So when the first responders and parents rushed
to the site to try and save anyone that was still alive, they got bombed too. And there is speculation that it's because Claude misidentified a civilian target as a military target. - Right, though just to clarify,
it's unclear if AI was to blame in the strike. And on Wednesday, US officials said that it was unlikely, according to New York Times reporting. So we don't know. - We do not know, right.
But it is such a legitimate possibility that it, like, perfectly encapsulates what is going on right now: there's just so much uncertainty, and there's so little transparency,
so little accountability, around extraordinarily grotesque actions that are happening, and mass life-and-death decisions that are being made under a veil of secrecy. It's just, yeah, it's just really awful. - Isn't this one of the sticking points
between Anthropic and the Pentagon, that they said, "Claude isn't ready for..." And this is different, right? This is autonomous weapon use. So this is not just identifying targets.
But also, the Pentagon seems to be using Claude while also blacklisting Claude? Like, I'm confused. - Yeah, there's so much going on.
So Anthropic was the first company
that got permission from the Pentagon to be used on classified intelligence systems. And so for the last, maybe around nine months, that has been true. And the Pentagon then became,
it seems, quite reliant on using Claude. Because then, when Anthropic and the Pentagon started fighting over the fine-grained details of how exactly Claude should be used, the Pentagon then did, like, the nuclear option
and was like, "We are going to either force you
to bend to our wishes, or we are going to declare you
a supply chain risk."
But after they declared Anthropic a supply chain risk,
they're still reliant on the technology. So there's a six-month phase-out period. And like, hours after they declared that, supposedly Anthropic was dropped because of a threat to national security, they used the very tool that is bad
for our national security in the bombing of Tehran. And so, yeah, that's like one layer of what's happening. But the other thing is, people have been lauding Anthropic a lot for standing their ground. And there's, like, a deeply complicated aspect
to Anthropic's role in this whole thing. So Dario Amodei, the CEO of Anthropic, said that he did not want this current iteration of Claude to be used for autonomous weapons. But in a CBS interview, he said
he was perfectly fine in principle with autonomous weapons.
It was just not, like, this version of the technology. And in fact, he had offered to co-create, co-develop autonomous weapons with future iterations of the technology. That's, like, one thing that complicates this whole thing.
The second thing is, like, I was speaking with Dr. Heidy Khlaaf,
who is chief AI scientist at AI Now, this policy research institute in New York, and she's been writing extensively about the military and AI. And she mentioned, like, what Amodei was saying is that he's not OK at the moment with the current iteration of Claude being used with, like, no one popping in and checking, OK, like, what targets
have been identified. But he was OK with Claude actually being a decision-support system. And so he was OK with it actually analyzing the data to identify bomb targets. And so the Pentagon is actually using Claude exactly
in the way that Amodei said he was fine with in the current iteration. And Dr. Heidy Khlaaf was like, if you think that your technology is not good for autonomous weapons, it should also not be used for decision-support systems. Because we have extensive research that has shown time and time
again that there's a huge automation bias with humans. Like, when we see a chatbot or a robot do something or say something, we just believe it. And so even if you have a human that's popping their head in and being like, OK, have you identified the right bomb targets, they're like, well,
the bot, I mean, the bot is a computer and has analyzed all this data, so it must be right. Like, it's not a legitimate check. And so what is happening, where people are speculating that the school was indeed bombed because of an error from Claude, is exactly
the kind of scenario that Dr. Heidy Khlaaf was talking about, and exactly the kind of scenario that Amodei was actually OK with. - Right.
So this moral high ground for Anthropic feels also a little suspect, right?
Like, is that what we're all beginning with? - Yeah. And the thing is, like, I think the way to think about Anthropic in the AI world is that it's just, like, the clean coal of AI. Like, they fashion themselves as this ethical company that really
cares about safety and the well-being of people, and so on and so forth. But, like, the entire way that they develop and deploy their technologies is deeply problematic and very imperial. And so, like, the clean coal, it doesn't exist, right? Like, you cannot have clean coal.
- OK, this is such a basic question. But the news that I've been reading has been describing these as LLM-powered weapons, large-language-model-powered weapons. And it makes me think, like, do I not understand what a large language model is? - Yeah, I mean, I personally would not use that phrase, because it makes
it sound like there's, like, a chatbot strapped onto a missile, and that's not quite what's happening here. Like, I'm sort of piecing it together from what's been reported by other publications. But what we understand at a high level is that the chatbots, or the large language models, are being used to analyze information to identify the bomb targets.
And then there is a missile that is launched to target those places. And it's not, like, one continuous sequence. Like, it is people that then receive this list of identified targets and then do
what they have always done, which is then, like, launch the weapons.
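[Editor's note: below is a minimal Python sketch of the hand-off pipeline Hao describes here. Every function name and data value is hypothetical, and the flow is an illustration of the reported workflow, not any real military system. The point it makes concrete: the model only produces a target list, and separate humans then review it and launch.]

```python
# Illustrative only: all names are hypothetical, and the flow is a
# simplification of the pipeline described in the conversation above.

def analyze_intelligence(documents: list[str]) -> list[str]:
    """Stand-in for the LLM step: ingest intelligence data and return
    candidate targets. Like any LLM output, the list can contain
    confidently stated errors."""
    return [doc for doc in documents if "suspected" in doc]

def human_review(candidates: list[str]) -> list[str]:
    """A separate human step: operators receive the list and decide what
    to act on. Automation bias can turn this into a rubber stamp."""
    approved = []
    for candidate in candidates:
        if input(f"Approve {candidate!r}? [y/N] ").strip().lower() == "y":
            approved.append(candidate)
    return approved

def launch(targets: list[str]) -> None:
    """The existing, human-operated launch process."""
    for target in targets:
        print(f"(launch ordered against {target})")

# Three distinct hand-offs, not one continuous automated chain:
reports = ["suspected depot at grid A", "warehouse at grid B"]
launch(human_review(analyze_intelligence(reports)))
```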
- But yeah, like, it almost feels continuous because of what we were talking about. Because, like, is that person really even actually adding their own judgment in?
- Right.
- And when we talk about autonomous weapons, like, that's more like a brain, right?
Like, because at that point, are we talking about a continuous sequence? Or, like, even if
it's a string of tools together, that amounts to all the judgment being outsourced? - Yeah. So, fully autonomous weapons: it would be if, you know, Claude identifies the targets, and then, without anyone there, it's automatically fed to, like, the missile-launching system, and then the missiles are launched.
Or it could refer to, you know, drones with AI capabilities attached to them that, like, go identify the target themselves through a computer-vision system and then drop bombs in that area.
So basically, like, autonomous is defined as, specifically, like: there's a kill chain sequence.
And it's the last two stages, the deciding and the launching. Like, if there is no human involved and it's just the machine that's doing these, that is what's considered a fully autonomous weapon.
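[Editor's note: below is a minimal Python sketch of the kill-chain distinction drawn here, assuming simplified, hypothetical stage names; the rule encodes the definition given in the conversation, not formal doctrine.]

```python
# Illustrative only: the stage names are a simplification of the kill
# chain described above, where "fully autonomous" means no human is
# involved in the final deciding and launching stages.

from enum import Enum, auto

class Stage(Enum):
    IDENTIFY = auto()  # analyzing intelligence, surfacing candidate targets
    DECIDE = auto()    # choosing whether a candidate is actually struck
    LAUNCH = auto()    # firing the weapon

def is_fully_autonomous(stages_with_a_human: set[Stage]) -> bool:
    """Fully autonomous, per the definition above: the machine alone
    performs both of the last two stages."""
    return not ({Stage.DECIDE, Stage.LAUNCH} & stages_with_a_human)

# Decision support keeps a human at the final stages:
print(is_fully_autonomous({Stage.DECIDE, Stage.LAUNCH}))  # False
# Remove the human from both, and it meets the definition:
print(is_fully_autonomous(set()))                         # True
```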
- Right. So Anthropic, Amodei, was like, we're not quite ready for that. - We're not, yeah, exactly. Like, we're not quite ready for both steps, but we would like to get there, and we are OK as long as there's a person that's looking while both steps are happening. - Right. Let's look ahead.
What will you be watching for?
- I'm going to talk about what I'm watching for not with the companies, because the thing that makes me optimistic in a deeply dire time is the amount of resistance that has started bubbling up among the public. So the thing I'm most excited about watching for is that, you know, in recent polls, 80% of Americans now believe that there needs to be some form of regulation on the
AI industry.
I don't remember the last time that 80% of Americans were on the same side of one
issue. Like, I'm very optimistic about the fact that there is now a broad coalition building to hold this industry accountable, because we need that more than ever. And we are already seeing this happening with some aspects of the AI industry, like the reckless data center expansion that the industry has been engaged in, where so many communities
across the US are discovering that there is a data center popping up in their community, as a deal that was struck under NDA with their city council. And they are literally, physically going into the streets to protest these facilities. They're going to town halls to pressure their elected leaders. They're actually voting out their officials that are not adequately reflecting the will
of the people in this situation.
And this has become a very effective grassroots movement to check what I call a key pillar
of the Empire's expansion. Like, if these companies do not get their data centers at the clip that they need to, they have to slow down their technology development, because it is already a key bottleneck in their advancement, and it would become an even greater throttle on their advancement. And I would love to see more people around the US, and also around the world, thinking
about how to take the lessons from this grassroots movement pushing back on data centers to then push back on other aspects of the AI supply chain, whether it's the reckless deployment in the military, or the psychological harm to kids, or the mass copyright infringement that is happening. And we are beginning to see more and more of that across the board.
- Karen Hao is a journalist covering AI and also the co-host of the new BBC tech podcast, The Interface. Karen, thank you so much for taking the time. - Thank you so much for having me. - This episode was produced by Dee Peterschmidt. Thank you for listening.
I'm Flora Lichtman.


