The Interface

Is AI running modern warfare?

As Washington moved toward a joint US and Israeli response to Iran, a parallel fight over military access to frontier AI broke into the open. Anthropic, maker of Claude, refused a Pentagon demand for...

Transcript

If you're betting on when somebody is going to die,

that might make you want to kill them, and that's horrifying. By analyzing the data of our behavior, it's almost like you can peer into people's minds. He actually said, "I have no problem with fully autonomous weapons." Hello, and welcome to the Interface, the show that decodes how tech is rewiring your week and your work.

I'm Nicky Wolf. I'm Thomas Germain. And I'm Karen Hao. Today on the Interface, we will be looking at: who sets the rules for AI and warfare? How exactly is AI being used in combat zones?

And is online betting predicting the Pentagon's moves before anyone else?

So guys, slow news week this week. Yes, it's been quite a weekend, hasn't it? I have been battling a sickness, but I pulled myself out of bed to talk to you guys because, honestly, this story. This is the biggest week of news, I think. Yeah, like, I could not miss it.

We're doing just two stories this time to try and give them equal weight because, you know, things are a little more momentous and grave than they usually are in terms of the stuff that we're talking about. We should note to our listeners and our viewers that we're recording on a Tuesday. So based on how fast the news is moving, when this releases on Thursday, we might not have the latest up-to-date information, but we are going to walk you through

what we think are two of the biggest stories that have happened in tech in a while.

The first major story that we want to talk about is the entire,

I don't even know what you want to call it, situation, between Anthropic, the Department of War, and OpenAI. So the headline here: Anthropic basically had a massive spat with the Department of War, and was unable to resolve that spat, over how the Pentagon should be using Anthropic's technologies. And in the middle of this spat, OpenAI swoops in and gets the contract instead. And we should say, Anthropic is an AI company. It's a competitor to OpenAI. It's a little bit smaller,

but it's a big player on the tech scene. And we should also say that Department of War is what the Trump administration changed the name of the Department of Defense to. Right. And it's, like, not completely official. It's still also called the Department of Defense, so the Department of War thing is a little hard to keep track of.

So basically, if I were to recount the timeline. It's a tough one. It is a tough one.

Back last year, in July, the Pentagon awarded four different contracts, to Anthropic, OpenAI, Google, and xAI, to start using their technologies as part of its operations.

Anthropic ended up being the first company to also be used on classified systems.

The Department of War essentially said the reason why Anthropic won it first was because it was the best technology when they were testing it. Then in February of this year, we learn, because news breaks, that Claude was used as part of the Pentagon's operation to capture Maduro in its January raid in Venezuela. And Claude is Anthropic's AI model. And is it worth saying that we don't know exactly what is meant by

used? Like, it wasn't flying helicopters in; we just know it was involved in that operation. Exactly. There was not a lot of reporting on the details of how it was used. But this basically sets off a series of escalations, first behind the scenes between Anthropic and the Pentagon, and then in full public view as news starts breaking, where it appears that Anthropic is not happy with the fact that their technology was used.

And the Pentagon is not happy with the fact that Anthropic is not happy. And so when the Pentagon and Anthropic started having these escalating conflicts, they were actually already in the middle of trying to renegotiate terms around how the Pentagon is allowed to use Anthropic's technologies. At issue was that Anthropic had this clause in their contract, which the Pentagon originally agreed to, saying that they had two red lines for the use of their technology.

One is their technology cannot be used for mass surveillance of Americans. And the second one is

it cannot be used for fully autonomous weapons without any human involvement. The Pentagon refused to agree to these terms. And their justification, publicly, was that they

were disagreeing on principle: they specifically felt that the military's

command needs to remain in the military. And it's not the business of a private company that

contracts with the Department of War to dictate those terms. And all they would agree to is abiding by the law, like they would just use Anthropic for lawful use cases. And we know this, right, because Pete Hegseth made this big announcement about how the contract negotiations were breaking down and started threatening that they were going to either declare that Anthropic was a supply chain risk, which would kind of, like, ban it from the

United States in particular ways, or they were going to use this law that would, like, basically conscript Anthropic and force them to do whatever the government wants as part of the war effort, which is not normally how these contract deals go down when we're not in, like, World War II,

right? This is pretty unusual. Yeah, there's never been an American company that has been given

this designation of a national security risk. This is meant to be for companies from foreign adversaries. So this is completely, like, a nuclear option that Hegseth tried to pull in order to force Anthropic's hand. But this came up because Hegseth actually called Dario Amodei to the Pentagon

for a meeting. And that meeting did not go well. And in that meeting, that's what he then

lays out: like, you have a deadline by Friday. So this was Friday last week. To tell us by 5:01 PM, not really sure why the :01, whether you're going to cooperate. And if not, like, we will do one of these two things: we will either, like, completely eradicate you from our systems, or we will strong-arm you into being part of our systems. Suddenly, like, the day before the deadline, Anthropic issues this statement saying, we cannot in good conscience accede to the terms

that the Department of War is giving us. The next day, the deadline passes. There's clearly no deal. Trump is tweeting, like, really angry things about Anthropic. Hegseth then declares that Anthropic will officially be designated a national security risk. And then OpenAI announces, via a tweet from Sam Altman, that they have struck a deal that same night with the Department of War. That's on Friday. And then on Saturday, the US government attacks Iran, right? So, like, crazy

timeline of events here. And we find out, I think on Sunday, the Wall Street Journal reported that

Anthropic's AI was in fact used in this attack on Iran, hours after it was declared a national security risk. Yes. But also, Karen, like, you literally wrote the book on this. This goes back to the history of these companies, right? Because Anthropic spins off of OpenAI early on, and their whole thing is they're worried that OpenAI isn't worried about safety enough. And this is kind of, at least on paper, this is like

the company's thing. Anthropic is like, we're the safety company. We're not doing evil stuff. They brand themselves as the good guys, essentially, in the AI industry. Like, yes,

that is their brand. They position themselves as always holier-than-thou when it comes to OpenAI.

And definitely, like, when Dario Amodei, who was a former executive at OpenAI,

like splits off to found Anthropic, one of the key reasons was that he fundamentally did not trust

Sam Altman's character. Altman, essentially, you know, he's now been trying to defend himself and say, you know, we're trying to do the right thing here. But a lot of people have been very critical of him, saying, because they have this history of a beef, and they are competitors, that Altman somehow leveraged this to then benefit the company in a situation where, like, that is the last thing that we should be thinking about, right? So OpenAI also had a bunch

of defenses for itself, one of which was quite interesting. They said that their deal with the Department of War was in fact stronger, they believe, than any other deal that any company has ever struck before, including Anthropic's. And the reason is because the way that the Department of War used Claude was via another company, Palantir, which we've also mentioned before, a defense contractor founded by Peter Thiel. And OpenAI's defense is, like, Anthropic kind of

never fully did a direct negotiation with the Department of War on contract terms. It all went

through this other platform, whereas OpenAI negotiated a legally binding contract with the

Department of War directly. And Altman said that they, in fact, agreed to let OpenAI maintain the

same two red lines as Anthropic's. And not only do they have it in the contract, they are also trying to implement it into their safety stack within these systems. So OpenAI's models will refuse any types of requests from the Pentagon that go beyond these red lines. Which the public, by the way, is not buying, right? Right. The other thing that happened straight after, and we should talk just briefly about the public response, is that Claude, Anthropic's AI model,

has rocketed to the top, overtaking ChatGPT in the app stores. People did not love what OpenAI did. Yeah, I mean, and in fairness, OK, OpenAI has said that their contract with the DOD also specifies no murder bots, right? But it's just not as binding as the contract that

Anthropic wanted. And so, wow, OK, here's the thing, right? Here's where it gets interesting.

So Anthropic has been publicly stating that they were specifically taking a hard stance on both these things, the mass surveillance and the fully autonomous weapons, and that both things were what they were unable to agree on. The New York Times reported something different. The Times reported that at issue was actually just the mass surveillance, and apparently they didn't actually have a disagreement over autonomous weapons, which is interesting. Amodei also, like, had an interview with CBS

after Anthropic was designated a national security risk, and he actually said, I have no problem, on principle, with fully autonomous weapons. The problem, he said, was that he didn't feel like this technology, as it currently stands, is ready for that. So I made, like, a bunch of calls this weekend to various different people within OpenAI, and also people that are just, like, really smart on military and AI stuff. And one of the people that I called

was Dr. Heidi Khlaaf, who's the chief scientist at the AI Now Institute, which, um, disclosure, I'm also a board member of. She has been researching military AI for years. And what she said is, in this interview, Amodei is essentially saying: the technology's not ready yet for fully autonomous weapons, so we are going to allow just decision-support systems. Like, all the way up until the final "let's release the bomb," Claude can be used,

but then, like, a human has to make that final, final decision. And studies have shown that, because there's a lot of automation bias, that's not real human involvement. Like, people just see what the bot says; they're under time pressure, and they just accept whatever it says. So if Amodei said that he doesn't feel that Claude is safe for fully autonomous weapons, he also should not be saying that it's safe for decision-support systems.
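To make that automation-bias point concrete, here is a minimal sketch in Python of the "human in the loop" pattern being described. Everything in it is hypothetical, the shape of the workflow rather than anyone's actual integration, and the two numbers it records, approval rate and review time, are exactly what automation-bias studies look at.

```python
# A minimal, hypothetical sketch of "human in the loop" decision support.
# None of this is Anthropic's or the Pentagon's real integration; it just
# shows where the human gate sits and what automation-bias studies measure.
import time

def model_recommendation(candidates: list[str]) -> str:
    # Stand-in for a decision-support model ranking candidate targets.
    return candidates[0]

def human_review(recommendation: str) -> tuple[bool, float]:
    """Ask the operator to confirm; return (approved, seconds_taken)."""
    start = time.monotonic()
    answer = input(f"Model recommends {recommendation!r}. Approve? [y/N] ")
    return answer.strip().lower() == "y", time.monotonic() - start

approved, seconds = human_review(model_recommendation(["site A", "site B"]))
# If operators approve nearly everything, and in seconds, the "final human
# decision" is a rubber stamp rather than real oversight.
print(f"approved={approved} review_time={seconds:.1f}s")
```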

And so, to your point, Nicky, like, the public has automatically latched on to the narrative that Anthropic has sold for a long time, that it is safer than OpenAI and it is, like, the better lab. But it's actually more complicated than that.

Like, I think in both cases, OpenAI and Anthropic did not act in accordance with, you know,

what we would want of companies that are this powerful. I saw someone on the internet talking

about how this is, like, the perfect PR win for Anthropic, for Anthropic to step in and say, no, like, we're on the side of good and we won't let you make killer robots. And it seems to have been a total, you know, PR coup for them. Outside of Anthropic's offices in San Francisco, people were, like, drawing in chalk on the sidewalk saying, God bless Anthropic, you give us courage, like, stay strong, like, keep us safe. When, like you're saying, Karen, the actual thing that

they're arguing about is a lot more slippery. Yeah, yeah, he's saying it's not ready. Like, I think there's something, yeah, there's something so fantastic about the phrase,

it is not safe to be a killing machine yet. That's incredible to me.

Part of this whole breakdown here is about where the power lies and who has control. How is this power struggle shaking out? Like, has anything really changed? So this is super

interesting question.

Another person that I called this weekend was Alondra Nelson, who was the former acting director of the White House Office of Science and Technology Policy under Biden. And what she told me is, like, what they were fighting over was who ultimately gets to make these high-stakes decisions in the military, irrespective of what tool is being used. And the government actually revealed how disempowered it is in this instance when it comes to private companies that it has allowed

to become essential to the national security stack. And these AI models kind of have an unprecedented

characteristic about them, which is that they can be continuously updated by the company that provides them. And so traditionally, like, when the Pentagon buys something like Microsoft Office, this is a static piece of software. But when they're purchasing Claude or when they're purchasing ChatGPT, that is not the case. And we don't know the fine-grained details, exactly, of whether or not OpenAI and Anthropic would update their models continuously for the Pentagon

the same way that they would for consumers. But that is part of the crux of why I think the

Pentagon is freaking out because the military has realized that the control over their technologies

is slipping. And ultimately, like, from the military's perspective, they need that control in order to protect Americans.
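For the software-minded, here is a minimal sketch of why that matters, using a generic, entirely hypothetical model API; the endpoint and model names are placeholders, not OpenAI's or Anthropic's real identifiers. Traditional procurement assumed the pinned case: the thing you tested is the thing you deploy.

```python
# Hypothetical sketch: a pinned model snapshot versus a floating alias.
import requests

PINNED = "frontier-model-2026-01-15"  # fixed snapshot: same behavior tomorrow
FLOATING = "frontier-model-latest"    # alias the vendor can repoint any time

def ask(model: str, prompt: str) -> str:
    resp = requests.post(
        "https://api.example-vendor.com/v1/generate",  # placeholder endpoint
        json={"model": model, "prompt": prompt},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["text"]

# ask(PINNED, ...) behaves like static software, the Microsoft Office case.
# ask(FLOATING, ...) can change behavior after accreditation, without the
# buyer shipping anything new. That is the control the Pentagon fears losing.
```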

Okay, so I reached out to a few national security sources over the weekend to ask for a little more detail on what exactly we know about how AI gets used in war. Mostly, we are talking about data analysis. This is target triage. Target triage is, like, okay, we've got a whole bunch of potential things we could hit; tell us which one we should point the missile at. Right. There is one historical example of autonomous weapon systems.

It's unclear if this uses a kind of modern-style AI model. It was in 2020 in the Libyan Civil War.

There was a system called the STM Kargu-2, which is a Turkish-made explosive drone, like a suicide

drone system. They don't need to be linked to a human command-and-control system. They described them as having a "fire, forget, and find" capability. That's based on two paragraphs from a UN report on the Libyan Civil War. Almost nothing else is known about how it was deployed and what kind of targets it found. Most of the uses, my sources said, of LLMs are data analysis. It is part of what is called the kill chain, which is the chain of command from the Pentagon to the trigger being fired.

Parts of that are about picking targets. Parts of that are about identifying where targets are. It does not seem like, in this war in Iran so far, a weapon system that is fully automated has been deployed. Nicky, did your sources say how this is connected to LLMs? Like, is the idea that the LLM is the one that identifies the target, and then, in a fully autonomous way, it could be

integrated with a killer drone to then hit the target that the LLM identified?

Because these, the killer drone and the LLM, are actually different types of AI, right? Certainly, yeah. And we should say, for anyone listening who maybe isn't as obsessed with AI as we are: LLM, large language model. It's the technology behind tools like ChatGPT. Yes. You would feed the LLM, say, for example, a lot of high-definition satellite data, and rather than having a team of analysts poring over it saying, that has

the markings of a disguised fuel dump, you would give it a set of parameters and it would go, bleep, bleep, bleep, these points on the map are the targets. And the tech is largely not there yet for it to be replacing soldiers in the field. That's not what we're talking about at the moment.
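For anyone who wants the shape of that "parameters in, flagged targets out" workflow in code, here is a minimal, hypothetical sketch. The model call is a toy stand-in so the example runs as-is; a real system would be calling a large model over imagery-derived data.

```python
# Hypothetical sketch of LLM target triage: flag observations that match
# analyst-stated parameters, then hand the list to a human, not a weapon.
def triage(observations: list[str], parameters: str, llm) -> list[str]:
    """Return the subset of observations the model flags as matching."""
    flagged = []
    for obs in observations:
        verdict = llm(f"Parameters: {parameters}\n"
                      f"Observation: {obs}\n"
                      "Answer MATCH or NO_MATCH.")
        if verdict.strip().startswith("MATCH"):
            flagged.append(obs)
    return flagged  # goes to a human analyst for review

# Toy stand-in for a real model call, so this sketch runs on its own:
fake_llm = lambda prompt: "MATCH" if "drums" in prompt else "NO_MATCH"
hits = triage(
    ["camouflage netting over fuel drums", "empty field", "parked tractors"],
    "signs of a disguised fuel dump",
    fake_llm,
)
print(hits)  # ['camouflage netting over fuel drums']
```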

And Karen, this is your particular area of expertise, right? Part of what people are so concerned about here is that a tool like Claude, these AI chatbot models, they make mistakes all the time, right? Yes. This is such an important point. There's this guy

named Paul Scharre. He's the executive vice president of the Center for a New American Security, CNAS. He's written extensively about military AI, and he also served in the military. He had this really good piece a few years back in Foreign Affairs, where he said that there is

this narrative that there's an AI arms race, and that whoever acquires

AI and advances it fastest is going to win. But what that doesn't account for is that

they're also actually inheriting an extraordinary amount of risk when they acquire these technologies, because of the fact that large language models hallucinate and they're inaccurate and they do all these things. And so he had this quote that really stuck out to me: AI technology poses risks not just to those who lose the race, but also to those who win it. And Karen, on top of, you know, the concerns you're raising here about the accuracy of these tools and whether they make

mistakes. I think part of what makes people uncomfortable isn't just the question of whether

AI is up to the task. Even if it is, there's this question of accountability, right? If a human

being makes a big mistake, we can hold them accountable. They can lose their job. We could put them in front of a military tribunal. We could ask them what went wrong. If an AI makes a mistake, whether it's targeting a missile or, like, driving an autonomous car, no one really is responsible. It's like, oh, well, the tool messed up, blame the tool. The tool can't get in trouble. And we haven't, as a society, figured out what we are and aren't comfortable with these, like, thinking

machines doing in our world. 100%. And we have to remember, this was actually not the only issue that came up with the contracts, right? Like, autonomous weapons was only half of the issue. There was also the problem of mass surveillance. Yeah. The thing, much more than murder bots, that my sources are worried about is how this advances dragnet surveillance capabilities. What this would allow them to do is sift through what is fundamentally a near-infinite amount of data, right? We know that

if the US wants to get into your phone messages, your social media messages, they fundamentally can. And they're not the only country who can. We know that they are using this capability. If you set an LLM on a bulk collection of people's messages, it will be able to find anything at once, much, much faster and much more effectively than a set of human analysts can. And it is able to spot patterns in a way that a human analyst would maybe not know to look for. One of my

sources went as far as to say that this kind of ends personal privacy. Yeah, Nicky, this question

of patterns you're talking about here, I think this really is the crux: by analyzing the data

of our behavior, you can figure things out that we're not saying, that we're not speaking

out loud. Stuff that in some cases we've never told another human being. Like, you feel,

everybody feels special and different and we are. But human beings are also very predictable. And when you've got information on hundreds of millions or billions of people, you can identify trends in behavior and patterns and figure things out about people that they've never revealed in any open way. It's almost like you can peer into people's minds using the power of mathematics. And like we're just at the beginning of what is possible here with this new set of tools.

Right. And the problem with that is humans are not statistics. Right. If the statistical model decides that you are something, we're looking towards a future where that defines you and that's horrifying. We have no idea what the US military or the government

writ large, or governments all over the world, are doing with these tools. And I think that's

in fact part of this fight that we saw playing out between the Department of Defense and Anthropic: like, even they aren't sure what they're going to be able to do with this stuff, but they want to have free rein to take it in whatever direction they want. And for the public, probably what's going to happen is, once this has already been rolled out, once it's already being used, then, post hoc, we're going to be talking about whether we're okay with these things

that are already happening. That's the point I want to end on, because one of the things that Alondra Nelson told me, when it comes to what the average person in the public can do right now, in this moment: it's to demand that these decisions be made democratically again. It used to be that when the Pentagon acquired a technology, there would be a public comment period.

Now we're in a state where this has been completely shut off from the public. And not just

the American public, but also the global public at large, should be demanding better.

So switching gears slightly, but I think this is all kind of tied together in a bunch of

weird, interesting ways. As this was happening last week, during Trump's State of the Union address, he raised the possibility that the U.S. government was about to attack Iran. And on Saturday, that's exactly what happened. And in this weird part of the internet, this became a huge issue in the world of online gambling. So people have probably heard by now about these companies, Polymarket and Kalshi, right? They call them online prediction markets. And essentially,

these are platforms where you can bet on anything you can imagine. And when the U.S. attacked

Iran, $529 million was wrapped up in bets on the timing of this strike on a foreign

country, like, this huge military action, hundreds of millions of dollars changing hands. And

I know, Nicky, you've been following this, and I'm dying to ask you about it. Yeah, the thing that Polymarket

and Kalshi say differentiates them from quote-unquote gambling is that there's no house. You pay a fee to use the platform, but you can lay a bet on anything, and if somebody takes the other side of that bet, it is considered a private wager between two people. And then the way that becomes a prediction market is, they're suggesting that the wisdom of crowds means that it will predict, say, the winners of elections.
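A minimal sketch of those peer-to-peer mechanics, with illustrative numbers rather than either platform's actual fee schedule: in a binary market, a YES buyer at price p is matched against a NO buyer staking 1 - p, the platform only skims a fee, and the winning side of each $1 contract takes the pot. The traded price is the crowd's implied probability, which is the number news outlets quote.

```python
# Hypothetical sketch of a binary prediction-market contract. No house:
# the two bettors fund the pot between them; the platform just takes a fee.
def settle(yes_price: float, outcome_yes: bool, fee: float = 0.02):
    yes_stake = yes_price          # YES buyer pays the traded price
    no_stake = 1.0 - yes_price     # NO buyer stakes the remainder
    pot = yes_stake + no_stake     # always $1 per contract
    winner_gets = pot * (1 - fee)  # fee is the platform's only cut
    return ("YES" if outcome_yes else "NO", round(winner_gets, 2))

# A trade at $0.70 implies the crowd puts roughly 70% probability on the
# event happening.
print(settle(0.70, outcome_yes=True))  # ('YES', 0.98)
```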

Now, when you're betting, as people were, on

things like the removal of world leaders, you wind up in some pretty uncomfortable territory pretty quickly. If you're betting on when somebody is going to die, that might make you want to kill them. Right, it doesn't take a huge leap of the imagination. And there's also betting on almost anything, right? I mean, Thomas, you were the subject of some of these bets recently. This is the weirdest thing that happened. So I tweeted about one of our stories a couple of weeks ago

and the tweet picked up a lot of steam. It went super viral. I was getting all these notifications, you know, trying to ignore them. But I opened my phone and I saw in the comments of my post, there was a picture of a chart. And I looked at it and what had happened was there's this system.

I think it's called "tweem" where you can bet on how many views a tweet was going to get. So

people were gambling on my tweet. This isn't Polymarket or Kalshi, it's like a different thing. But at this point, the entire world has become a casino. I was reading the other day about this platform where people are betting on traffic, where they find, like, closed-circuit cameras that are pointed at an intersection, and they place bets on how many cars are going to pass through before the light turns red. Like, every moment of our lives, every issue, is now

open, in this weird, you know, regulatory gray area, for a whole new era, a whole new world of gambling and betting. That becomes a huge problem because, legally speaking, there are a lot of laws covering casinos and what they can open a book on. You know, sports betting has some huge problems, but when you're betting with a gambling company, there are laws that govern how they can operate that market. When you're betting with other people, even though it's using one of these

massive platforms, the law is in a much grayer area. Say you laid a bet on your tweet hitting a certain number of impressions, right? And then you locked your tweet when you hit that number, ensuring that you won that bet. It is unclear to me that you would have broken any laws, right? People are betting on actions, but it's also then influencing people's actions in the real world. Like, do we know to what extent this is happening? Karen, you raise a really

interesting question here, because one of the biggest scandals surrounding this issue is the question of insider trading. So one of the things that's happening here that doesn't happen with other forms of gambling is these companies are actively encouraging people who have insider information to make bets. In sports gambling, it's not allowed for you to, for example, like, take a fall and throw the fight because you've got money on the game. But this is happening on these platforms

on a massive scale. In fact, Polymarket and Kalshi have been striking all these deals with

news organizations. The Wall Street Journal struck a deal with Polymarket to

include, like, the wagers and odds on these bets in its reporting about whether or not something is likely to happen. CNN now has a deal with Kalshi. And, like, the line from these companies

is, I think the CEO of Polymarket said, we're going to help you figure out the news before

it's news. And someone made a great point, which was, like: if it hasn't happened yet,

it isn't news. That isn't how this works. And effectively, some big outlets are becoming marketing channels for these gambling companies. And because the companies say, like, this is good, we're a new kind of news, they actually say insider trading is a good thing, because it helps the public learn information faster. And if people get ripped off along the way, apparently, from Polymarket's perspective, that's fine, because we get to learn something quicker and someone gets

to make a ton of money. And Nicky, I know that, like, this doesn't seem to be a hypothetical issue, right? Here's the totally wild thing. Friday night into Saturday morning, when the offensive in Iran was launched, there were six accounts on Polymarket that, between them, laid bets in the two

or three hours before this offensive was launched. They made $1.2 million, those six accounts alone.

And there are pretty clear signs that these were people making bets with insider information

that the war in Iran was about to be launched. Are we talking about people inside the Pentagon?

This would be people either inside the Pentagon or inside the White House. And that's because of the timing. I mean, either that, or six separate accounts, through sheer luck, within a couple of hours, bet that there was going to be a war. And it's not just Iran, right? We saw something similar with Maduro and Venezuela. Yeah, with Venezuela there were bets, like, just before that happened. But there's also this whole, there's a real darkness here if you think about it.

Because people are betting on human life, right? We're making bets on whether this attack will happen. It's a bet on whether people are going to be killed. It's worth saying, on that: Kalshi, one of these two big ones, unlike Polymarket, does say that it voids bets that are based on or tied directly to deaths. But they also said that war is okay, right? So you can bet on war. Wait, you can bet on war, that's fine, but you can't,

if it's, like, too close to, like, a particular person dying, like, are we going to strike this building or this school, then that's, oh, that's too far. That's not okay. So Kalshi voided the bets after Khamenei was killed. Polymarket did not; Polymarket paid out on those bets, or allowed the facilitation of the payouts on those bets. And it's also worth saying that there have been some analyses showing that, you know, 18-to-20-year-olds on a large scale are betting on these platforms.

People who would normally be too young to be legally allowed to gamble. Like, it's another example of an internet company stepping into a place where, like, you know, no one has ever really thought about this before and the law isn't clear. And then, for a while, they just get to do whatever they want. And will this go on long enough that it just becomes the norm, and regulators are like, oh, well, you know, you can't stop progress, this is just the way

of the land now, we're not going to shut down these billion-dollar businesses. Now, here's where

I think this goes, though: there are always people on the other side of these bets, right? Each of

these bets where somebody has inside information on it, someone is being had with that. People are going to start to not love being had by these sorts of things, right? And so even if this doesn't end up in a state regulatory thing, I think we're going to start seeing legal problems for these platforms privately, if not publicly. And it's worth saying that there are at least 20 federal lawsuits in the US already where it is being suggested that these companies are

violating commodity futures trading rules. Right. So this is not coming under gambling; it's coming under insider trading in the Wall Street sense. That's how the regulatory structures seem to be looking at it. And any one of those lawsuits could demolish any of these companies, right? I think what's really going on here, across all of these conversations we're having, is a clear sign that we've entered a new era, right? There's a whole new set of questions, ethical,

moral, legal questions about what's going on with the world of technology, and

how things are going to be used, like, in the immediate future, let alone how all this stuff is

going to be regulated. So we're marching forward and I guess that's what we're here to do

is to try and keep up with it and make sense of it all. But it's looking like a pretty tall order

from where we're sitting today. Right. It's very clear that both the technology and the

cultures surrounding the technology are moving way faster than the law, and even our ability to

understand it, can keep up with. Buckle in, I guess. Buckle in. Yeah.

Join us next week. If you're in the UK, you can listen on BBC Sounds. If you're outside the UK, you can listen wherever good podcasts are distributed, or search for the Interface podcast on YouTube.

And if you want to get in touch with us, you can email us at [email protected]. We do read all of your

messages, or you can WhatsApp us on +44 333 207 2472, or find us on social media. Links are in the show notes.
