The Daily

Anthropic vs. the Pentagon: Inside the Battle Over A.I. Warfare

4d ago · 28:24 · 5,075 words

In recent weeks, the Defense Department has tussled with Anthropic over how its artificial intelligence could be used on classified systems. That fight became bitter and negotiations fell apart. And w...

Transcript


I'm opening up Crossplay, I've been playing against Dan, my colleague at the ...

Let's play another move. Oh, she played Stu for 36 points. I've got a "Z," which is 10 points.

I'm guessing Tengah is not a word, let's see.

Tengah is a word. Oh, Dan played his last turn, let's see who won. It's so close, but I did win.

Crossplay, the first two-player word game from New York Times Games.

Download it for free today. It's devastating when you see a game that you could have won. From The New York Times, I'm Natalie Kitroeff. This is The Daily. As the U.S. bombardment of Iran has escalated,

it's become increasingly clear just how much the U.S. military has been relying on sophisticated artificial intelligence. And that's made the Defense Department's bitter fight with the AI giant Anthropic over who controls that technology one of the most high-stakes strategic battles of our time. Today, my colleague Sheera Frenkel on the standoff between the Trump administration and Anthropic, and what it really reveals about the future of warfare.

It's Monday, March 9th. Sheera, it's wonderful to have you back on The Daily. Thank you for having me. So, as this war in the Middle East has progressed, we've been hearing more and more about the U.S. using AI in its attacks on Iran.

It's one of the first times really where this technology is very clearly having a practical application for the U.S. military.

We are seeing it in action. And at the same time, in the background, there has been this ongoing bubbling battle over the use of that technology. So, we're going to get into the specifics of all of that.

But first, can you just lay out what this fight is fundamentally about?

Well, this fight is so much bigger than one company and this particular moment with the Pentagon. It's really about the future of warfare and the role that AI is going to play in war. Right now, in the Middle East, as the U.S. looks for targets to strike, it is using Anthropic's technology to analyze intelligence, analyze satellite imagery, and figure out where it wants to hit.

AI can analyze data for the military faster than a human being possibly could. It's proving its worth every single day. And so, in a sense, these private technology companies based in Silicon Valley and the Pentagon need each other more than ever. But there's a question about how they're going to work together going forward, as we all hurtle toward this vision of robot wars, of AI-backed weapons fighting AI-backed weapons.

They're trying to figure out who gets to say what's safe and what's not. So, on one side, you have these private Silicon Valley companies. You have Anthropic, which is the first AI company that was authorized to work on classified U.S. military systems. You have OpenAI, which is this behemoth of AI companies. You have long-standing companies like Google and Microsoft, which have AI divisions. So, you really have a number of very powerful companies in the Valley that want to do business with the Pentagon and are,

in some cases, doing some business with the Pentagon, figuring out how to navigate that relationship. And on the other side, you have the Pentagon, which is thinking about this global AI arms race against China, Iran, and Russia, and how America is going to fare in that. And just to get a lay of the land here, can you just explain how the Pentagon is broadly making use of this technology? What function it plays?

So, right now AI plays a huge role in what's called SIGINT, signals intelligence. What I mean by that is that the military at any given time is ingesting an incredible amount of data. Text messages, postings on social media pages, phone calls. All of this is intelligence that's gathered by the military,

and then used to make critical decisions.

Now, in the past, there was a room full of human beings that would have to sit there and analyze all this intelligence. But now we have AI, and this is exactly what AI is really good at.

It ingests data, and then it tells you, here's an important note you should take out of this.

Here's my summary. Here's one phone call that's better than all the other phone calls that you should actually be listening to. And so, this is critically important right now in the Middle East, where we're seeing this AI technology being used. But spinning forward, it's only going to become more important as AI gets better and better, and the military wants to integrate it into more parts of its weapons arsenal. Okay, so, a hugely important debate happening at a very important time.

Just orient us here. How did this whole fight start? It actually starts in this very positive, optimistic way in that the Pentagon issues a call out last year, saying it wants to introduce AI. It invites all these AI companies to basically come into the military and show them how they can be helpful.

How can the Pentagon, the Department of Defense, start integrating AI into it...

And they immediately get a lot of takers. You've got Silicon Valley's biggest AI companies, Google, xAI, Anthropic, and OpenAI, all raise their hands and say, "We want to participate. We want to work with the Pentagon." And of all the AI companies that begin working with the Pentagon, Anthropic emerges as kind of the best and the most seamlessly integrated into the Pentagon's systems.

It's working with Palantir, the data analytics company. It's one of the only ones that is approved to work on classified systems. And so, people across the DOD tell us that it really quickly became absolutely fundamental to their work and made their lives easier. Okay, so I just want to pause here, because from what I know of Anthropic, this is a company that brands itself as the socially responsible AI company,

the company that emphasizes AI safety a lot.

And so, it's just kind of interesting to me to hear that they were the first ones to be so embedded within the US military.

That's true. This is a company that was founded by people who left OpenAI because they wanted a safer AI company. They said they wanted more safeguards. I mean, this is their entire premise and how they draw employees to work there. What they also are, however, is a company that really believes in working with the government. We've seen their top executives say that they think AI can make our country safer.

It can help the US military defend against adversaries. They are by all accounts deeply patriotic as well. And so, while the two things don't seem to naturally go hand in hand,

I think in the minds of their chief executives, at least from people that are sitting in the room with them,

they say, yes, they wanted to work with the government and they thought they could be the ones to do it safely. Okay, so that explains why, at this point in the story, all sides are working well together. When do things start to change? Things start to change on January 9th.

When the Secretary of Defense, Pete Hegseth, comes out with this pretty big memo. And he tells the military, he tells everyone across Silicon Valley, that things are about to change.

AI is critical for the future of warfare.

China's developing AI weapons, Russia's developing AI weapons. If the US wants to be competitive, AI has to be at the center of everything. From autonomous weapons like drones or fighter jets that have no pilots to data systems. And this kicks off a need for new contracts with all the AI companies. And they do what companies do.

Their lawyers start sending contracts back and forth with the Pentagon's lawyers, trying to figure out how they can come to some sort of new agreement about this. And how does that go? They have differences, they have things that they're trying to figure out, but it's all sort of happening quietly behind the scenes. When all of a sudden, something happens that ends up escalating tensions between the company

and the Pentagon. News reports emerge that Anthropic's Claude technology was used as part of the capture of Nicolás Maduro, the Venezuelan leader. Right, I remember when that came out. It was this surprising moment to find out that an AI model was used to do something like that.

Like this very on the ground operation that involved boots on the ground and lots of planning. AI was in the middle of it.

Yeah, I mean, I think it was even surprising, confusing for people who work at Anthropic,

who did not know if their technology was used in the Maduro raid. It even came up in a meeting that happened between one employee at Anthropic and another employee at Palantir. The Anthropic guy asked, "Do you know anything about this?" You know, about their technology being used. It was not something that they appeared aware of.

But whether or not Anthropic's technology was used, at the Pentagon the fact that a private Silicon Valley company would even be raising questions about this was seen as inappropriate. You had the Secretary of Defense himself telling people around him that he didn't like Anthropic even asking questions about how their technology was being used.

And in the midst of all these sensitive negotiations happening about the future of Anthropic and the Pentagon, this was exactly the kindling that they didn't need.

So basically the Defense Department sees this inquiry by the employee at Anthropic as a sign that

the company is challenging the military's use of the technology. Yeah, exactly. They see it as a sign that this private company that's talked a lot about safety is going to try and impose its own rules, its own guardrails, its own ideas of safety onto the Pentagon. And in the midst of all these sensitive negotiations, it suddenly becomes a crisis. It suddenly spills over from emails back and forth between lawyers to big public statements

by senior figures at the Pentagon. And what is the crux of the crisis itself?

The crux of the crisis is over Anthropic wanting to define safety, and wanting to limit two specific ways in which the Pentagon can use their technology. They want it codified into their contract with the Pentagon that their technology will not be used for the mass surveillance of

Americans, and it will not be used for autonomous weapons.

And why is Anthropic drawing these red lines on these uses of AI? Like, what's the rationale here? Well, they're worried about a few different

things here. First and foremost, they're not sure that AI is ready. AI might have a 1% or 2%

error rate, but when it comes to something like picking a target with a missile, that kind of error rate can mean life or death. Huge consequences. Huge. Now, imagine, secondly, the PR disaster if a news story comes out that Anthropic's AI was used to hit a target that ended up being wrong. Suddenly, this company has an absolute PR nightmare on its hands, where Americans are contending with this very real-life use case where, as they would say in science fiction books,

the robot chose the wrong target and humans were killed. And, you know, thirdly, they've got to worry about their own employees. People who work there are not comfortable with working

with the military. People who work there are worried about the use of AI and war. They really

risk alienating a lot of the people that they paid a lot of money to come work at that company. Right. It's worth saying that these employees are very valuable, right? There's a total talent

war on to attract these people, and you don't want to risk losing them. Yeah, that's right. These are

some of the most highly sought-after engineers across Silicon Valley, and that's saying a lot. We're talking about contracts potentially worth tens of millions of dollars to acquire some of these people. Got it. So, it sounds like there is a broad set of reasons why Anthropic is not wanting to do this. What about the Pentagon? What do they make of this? The Pentagon is mad. They're sitting there and saying, hey, you are a private company, you do not get to make these calls.

Whoever decides that AI is ready to control a weapon should be sitting here. In the Pentagon,

in the military, we are the ones that make these calls. And really, how dare you, a private company, try to tell us how to build our weapons systems. That's their view. They're saying it's not your role. It's our role. That's our job. Exactly. And the Pentagon is saying we are going to implement all lawful uses of this technology. So, they're making the argument that Anthropic is really asking for something that isn't necessary. So, things escalate and escalate, and they result in this

meeting between the Secretary of Defense, Pete Hegseth, and the chief executive of Anthropic, Dario Amodei. The CEO of one of the biggest AI companies in the world is meeting with Defense Secretary Pete Hegseth today, as the Pentagon threatens to essentially blacklist that company, Anthropic, from lucrative government contracts. It's civil for the most part until the very end. Defense Secretary Pete Hegseth gave CEO Dario Amodei until the end of the week to sign a document

ensuring the military would have full access to the company's AI model. The Secretary tells Dario Amodei, "Hey, you have until Friday 5 p.m. Eastern Time to compromise, work it out, figure it out, but we are giving you a hard deadline or we're going to take some type of action against you." And what is the action? What's the threat? So, there's actually two threats made against Anthropic, and they're pretty opposed to one another. One is that Anthropic will be labeled a

supply chain risk and this is a designation that America has used in the past mostly for foreign companies who produce something abroad and which America feels is not safe for national security reasons for the government to be buying. So they would be essentially saying, "Hey, Anthropic, we think you're dangerous as a company for national security and nobody in the government can use you." The other threat would see them invoke this defense production act which labels a company so

necessary to national security that they have to work with the federal government. These seem like pretty extreme threats. I mean, the government is saying, "We're either going to force Anthropic to comply or inflict a ton of pain on this company by punishing anybody else that does business with them, essentially." Yeah, I mean, they are extreme, and it leads to this rare moment of solidarity across Silicon Valley. These companies who usually, I mean,

quite honestly hate each other, suddenly come together and they say, "We stand behind Anthropic,

the AI community stands behind Anthropic and their red lines." And I think of all the voices that emerged, the most interesting is Sam Altman, who's the chief executive of OpenAI. He historically has not gone along with Anthropic. These are a bunch of guys that left his company and said his company wasn't safe and started their own company. There is no love lost between the leadership at OpenAI and the leadership at Anthropic. And he even stands up and he says, "No,

no, I back them, I back Anthropic." And here we should just disclose for transparency that The New York Times is currently suing OpenAI over the use of its models. That's right. So all of Friday, tension is building. People are tweeting in support of Anthropic. They're telling the company to hold the red lines. And Anthropic's executives, their lawyers, are on the phone. I mean, minutes, minutes before the deadline hits, they're still on the phone with the Pentagon trying to figure

this all out.

Now to a major development in the clash between the U.S. Department of Defense and Anthropic.

President Trump has ordered the federal government to stop using its technology after the AI company refused to let go of its red lines. On Friday, the DOD announces there is no deal. Defense Secretary Pete Hegseth says he will designate Anthropic a supply chain risk to national security. Anthropic is a supply chain risk. It's going to be booted, banned from the entire federal government. Saying any contractor that does business with the U.S. military will not be allowed

to conduct commercial activity with Anthropic. President Trump called Anthropic a radical left woke company, which will not dictate how the United States fights and wins wars. Thank you. And then they issue another surprise. They actually have an ace in their back pocket.

Anthropic's relationship with the Pentagon appears to have ended, but OpenAI is ready to make a deal.

This whole time in the background, they've been quietly negotiating directly with Sam Altman, the chief executive of OpenAI. Wow. And this whole time, he's been negotiating himself directly with the Pentagon. And Sam Altman says that he got exactly the deal that Anthropic wanted, but he had actually decided to take a very different approach to the entire negotiation. We'll be right back.

I'm Kevin Roose. I'm Casey Newton. And we're the hosts of Hard Fork, a show from The New York Times about technology and the future. That's right, Kevin. Each week, we come to you from the front lines of tech, giving you interviews with big newsmakers, doing hands-on

experiments, and talking about the week that was. We're out here in San Francisco, where the

cars drive themselves and the code writes itself. And we are here to tell you about the future that is coming to wherever you are very soon. That's right. At least until the podcast starts recording itself, at which point you and I are out of a job, Roose. We think that every Friday

for about an hour, you should have a good time. Come hang out with your parasocial friends,

Casey and Kevin, and you might learn something. You'll hear a great conversation. And you'll be able to sound smart when you head into your workplace meeting on Monday morning. You can listen to Hard Fork wherever you get your podcasts or watch us on YouTube at youtube.com/hardfork. OK, Sheera, you said that Sam Altman took a much different tack with the Pentagon in these negotiations. What do you mean by that? So, Anthropic had been asking this entire time for

certain things to be codified into their contract. They wanted to establish that their technology could not be used in these very specific ways that were important to the company. What Sam Altman did was say, hey, we don't need that type of language in the contract. What we're going to do is write our own guardrails, our own safety measures, into the code itself. Engineers call this writing into the stacks, and it's something that AI companies do all the time.

They update their safety measures. They quote, write into the stacks,

guardrails that they think are important. And so he's saying, it's not on you. It's on us.

Whatever's important to us, whatever safety measures we have as OpenAI, we are going to make sure are there. And just explain why that version of things, where the company is in control of writing these safeguards into the models, wasn't good enough for Anthropic. People who work at Anthropic make the argument that when you write something into the stacks, it can be unwritten. You can write something else

the next day. It is not permanent. These stacks get changed daily. They could even be changed hourly. And in their view, there was not enough to stop the Pentagon from saying, okay, well, you wrote that into the stacks today, but tomorrow we're telling you to do something else. Essentially, you're saying their fear is that this kind of guardrail is much more movable. It's not permanent enough. It doesn't guarantee that the limits will be respected long-term.

Exactly. So the Pentagon came out of this winning, it sounds like. I mean, I think that from

their point of view, from the DOD folks we've talked to, they are happy they got OpenAI on board. I think that where the Pentagon may run into problems long-term is the broader AI community in Silicon Valley, and how this has really brought to the forefront this bigger question of AI and weapons, AI in the government. Is AI going to be dangerous, and is the government thinking about it in a responsible way? I think that whole debate is now in the public consciousness. Right, and I have

to imagine that the extent to which this administration was willing to really throw the book at this American AI company has to have had something of a chilling effect in the industry, right?

Oh, definitely.

If they can threaten to label Anthropic a supply chain risk or to use this Defense

Production Act against them, what's to stop them from doing it to any tech company in Silicon

Valley if they don't get their way? And so there's been this moment of trust building between Silicon Valley and the Pentagon that's happened slowly over the Trump administration, and we've really seen a lot of that shattered in the last week or so. And what about the companies at the center of this, Sheera? Like, how do they net out? Because obviously, OpenAI has this victory in terms of getting the contract. But at the same time, it's hard to ignore the PR benefits that

have come out of this for Anthropic. This company was very popular among software engineer types, but before all this, it was by no means well-known among the general public. And now, all of a sudden, Anthropic is this topic of national conversation. Right. I mean, we saw that in the immediate

aftermath of all this, Anthropic's Claude technology shoots to the top of the App Store for the first time

in the company's history. They have not just become a household name, but they've become a household name that's synonymous with security. Right. Safe AI. And that's a huge PR win in a moment where so many people are still afraid of AI. Right. You're saying it's not just that people are talking about the company. It's that they're talking about it as a company that values safety and responsibility. And you can see why that might be appealing. That's right. Out here in Silicon Valley,

I think anthropic is really emerging as a winner in terms of the PR battle for the hearts and

minds of engineers. And right now, Anthropic is really being seen as an ethical company that stuck to its guns and did what it said it was going to do in terms of safety measures. And here in Silicon Valley, engineers are talking about how they want to go work for them. And so that could net out really as a big win for Anthropic. After Altman signed the deal, there was a lot of blowback across Silicon Valley for the terms that he had reached with the Pentagon. I actually

saw people in the streets of San Francisco holding up a sign saying Anthropic stands strong. Wow. And you see online people who work at these companies voicing both support for Anthropic and dismay with OpenAI. And that pushback from engineers has complicated things for Sam Altman. He's had to meet with his own employees more than once to assure them that he's going to seek a safe contract with the Pentagon. And he's had to do a lot of kind of internal PR work among

people at his company to try to do damage control. It sounds like, with his own employees. Exactly. And we've seen him announce subsequently that he may have made a mistake rushing too quickly into a deal with the Pentagon, and that he's actually sought new language now around the mass surveillance of Americans and other assurances, so that his employees will not be as upset as they have been in the last few days about this contract with the Pentagon. So where this

stands now is that you have two of Silicon Valley's largest companies basically battling it out

over what safe AI looks like. On one hand, you have Sam Altman, OpenAI, and his version of working with the Pentagon. And on the other, you have Dario Amodei and Anthropic, sort of saying this is how we think safe AI should play out. And Sheera, through all this, it's clear that both companies are trying to win the optics battle in all of this. Both are claiming the mantle of

safety, asserting, or reassuring people, their own employees, that that's what they care about.

But I just want to push on what they actually mean by that, by safety. Because when we were talking earlier about the red lines, Anthropic insisting that its models shouldn't be used for mass surveillance or autonomous weapons, they were saying their models just aren't ready yet. They're still error-prone. And so it sounds like they're arguing it's not safe to use their models in those ways now. But do you think these companies are opposed to those models being used for mass surveillance

or autonomous weapons, ever? No, I think ultimately these companies are well aware that the way the

world is headed is that AI is going to be at the center of pretty much everything the government does. From surveillance to weapons systems, AI is going to play a role. Right. You also have to remember these companies are really competitive. They're technologists who love what they do. They love the future of AI. And so there's also sort of a personal vested interest in making the AI good enough to play this really central role across the government. Right. I mean, and there are billions

at stake, we should say, being invested in this industry. These companies are locked into competition with each other, and there's no going back, is what you're saying. There is no going back. When you speak to some of these technologists, they describe what the world looks like in the future. And

honestly, depending how much sci-fi you've read in your life, that is a

really scary vision of the future. So they look forward and they imagine a war in which there's no

human soldier on the battlefield. Where, back in Washington or wherever, on some military base, there's a guy with a headset who's controlling a fleet of drones or submarines or pilotless jets, and they're fighting against another nation-state which has very much the same. The surveillance of all these targets is happening through AI systems that can comb through imagery faster than the human brain can process a single photograph, and all these decisions are happening at lightning

speeds. That's what they see all of us kind of hurtling toward. What you're saying is this fight

that we've been describing, between Anthropic and the Pentagon and OpenAI, didn't actually forestall that future. In some ways, it just made clear to everyone that it's coming. That's right. They are all clear that it's inevitable, and what all these companies agree on, what the Pentagon agrees on, is that they're all active partners in making this a reality. Sheera, thank you so much. Thank you for having me.

We'll be right back. It's your headline to unpack. It's your one story to follow week by week. It's your crossword to work through. It's your team to track. It's your 36 hours to explore. It's your marinade to master. It's your opinion to figure out. It's your mattress to upgrade.

Here's what else you need to know today.

The government's desire for continuity. Khamenei has been coordinating military and intelligence

operations at his father's office, and he has very close ties to the powerful Islamic Revolutionary

Guard Corps. President Trump has called the younger Khamenei an unacceptable choice. Before the announcement, Trump told ABC that whoever is selected as Iran's next leader is, quote, not going to last long without the approval of the United States. And over the weekend, the U.S. and Israel intensified their attacks on Iranian military targets and vital energy

infrastructure. Israeli warplanes bombed several fuel depots in and around Tehran,

saying they were being used by Iran's military. The air strikes created an apocalyptic scene in the capital, setting off oil fires that turned the horizon orange and blanketed the city with dark, oily smoke. Water desalination plants were also struck in Iran and on the Persian Gulf island of Fawrain, threatening to further disrupt the lives of millions in the region,

who depend on desalination for drinking water. Finally, on Sunday evening, oil prices surged to

over $100 a barrel for the first time in four years, a worrying sign about the war's potential effect on gas prices. Trump said in a Truth Social post on Sunday that higher oil prices would be short-lived, and called them a quote, "very small price to pay for peace." Today's episode was produced by Rikki Novetsky, Rachelle Bonja, Diana Nguyen, Eric Krupke, and Michael Simon Johnson, with help from Mary Wilson. It was edited by Marc Georges and Lisa

Chow. It contains music by Marion Lozano, Rowan Niemisto, and Dan Powell. Our theme music is by Wonderly. This episode was engineered by Alyssa Moxley. That's it for The Daily. I'm Natalie Kitroeff. See you tomorrow.

(gentle music)
