I'm opening up Crossplay. I've been playing against Dan, my colleague at The New York Times.
Kat's played another move. She played Stu for 36 points. I've got a Z, which is 10 points. I'm guessing Tenga is not a word, let's see. Tenga is a word.
Oh, Dan played his last turn, let's see who won. It's so close, but I did win. New York Times Games subscribers get full access to Crossplay, our first two-player word game.
Subscribe now for a special offer on all of our games.

Casey, where are you? That beautiful background does not look like your house. We have a lot to talk about today, because it has been a very crazy 48-hour period in the AI industry, with this dispute between the Pentagon and Anthropic, and now OpenAI coming out of nowhere at the 11th hour. It has been a truly insane day and a half in my life. How has it been for you? Well, let me put it this way, listeners: imagine you get engaged, and then one week later your fiancé is declared a supply chain risk.
So, yeah, it's been a really, really crazy few hours over here as well. And because we are going to talk about Anthropic and OpenAI and all of this today, we should make our AI disclosures. Mine is that I work for The New York Times, which is suing OpenAI, Microsoft, and Perplexity over alleged copyright violations. Yes, and if you missed the other big breaking Anthropic story from over the past week, the man that I am now engaged to works there. Well, where should we start, Casey? Well, look, I think if you're tuning in, maybe you've heard the biggest headlines, but I think it's worth hitting you with just a few key bullet points.
One is that in the story we've been covering over the past couple of episodes, things have come to a point of crisis. Anthropic said it had two red lines that it would not cross. The Pentagon said that it was going to move to declare the company a supply chain risk. And then, somehow, within 24 hours of that happening, Sam Altman and OpenAI swooped in and signed a deal that they say will observe those safeguards.
And so it was just a truly chaotic 24 hours and we should dig into it. Yes, and none of this has been happening through like normal diplomatic channels.
Basically, as far as I can tell, the entirety of this conflict has been contained in like a handful of posts on X
and a handful of blog posts and some stuff that has been leaking out from either side. So I have been making calls for the last two days to the people who are involved in this situation, trying to get some information. And I've gotten a little bit and I'll happily share that with you, but I would say confusion reigns.
Like, even the people who are directly involved in this situation are confused about the details here.
And so I think we should also just say upfront that there is still a lot that is unknown about what's going on right now.
Absolutely. Maybe to start Kevin, we could go back to a part of the story that I think is pretty well known, which is just sort of what happened between Anthropic and the Pentagon,
particularly in those final hours where the Pentagon finally said,
hey, this isn't going to work. We're not going to give you what you want. And time ran out and they did not come to an agreement. Yeah, this escalation started on Thursday, February 26th, when basically there was a day left until this deadline
that the Pentagon had given Anthropic. And Dario Amodei, the CEO of Anthropic, put out a statement on Anthropic's website, basically saying: we are not going to compromise, no matter what, on these two exceptions that we want, mass domestic surveillance and fully autonomous weapons. He explained why they were not going to compromise on those.
And then he said, in the line that a lot of people have been quoting, that "These threats do not change our position. We cannot in good conscience accede to their request." Basically: we have been trying to work out a deal while preserving these exceptions that are very important to us, but we have not been able to do so.
And probably worth saying, Kevin, that I think a reason that quote stood out so much was that I cannot remember any tech leader invoking conscience as a reason for anything since Trump was reelected.
So it felt like a shift in tone for the whole discussion around tech and power,
and just something we have not seen from Silicon Valley in a while. Yes. And what I understand from talking with folks close to the situation is that even after this post from Dario Amodei, there were discussions happening between the Pentagon and people from Anthropic. They were trying to work out the contours of a deal.
There was some willingness to at least change some of the language around these exceptions. But while these discussions were happening in the back channels between officials at the Pentagon and people at Anthropic, President Trump posted a statement on Truth Social late Friday afternoon, just before this deadline that the Pentagon had given Anthropic.
He said, quote, "The United States of America will never allow a radical left woke company to dictate how our great military fights and wins wars." He also said that he was directing every federal agency in the United States government to immediately cease all use of Anthropic's technology, with a six-month phase-out period, basically for federal agencies to switch from using Claude to other models. One thing the president did not mention is this idea of declaring Anthropic a supply chain risk.
This is something that we talked about on the last show. Basically, this is a much stricter designation, something that we don't think has ever been applied to a major American company before. It's usually used for Chinese chip suppliers or firms like Kaspersky Lab. But Trump did not say that he was going to designate the company a risk to the supply chain,
and so I think some folks at Anthropic and elsewhere thought, okay, this is a deal that we can live with.
We are going to lose our government contracts, but we're not going to be declared essentially an enemy of the state. And more than that, Kevin, he also did not invoke the Defense Production Act, right? Which, to me, was the true worst-case scenario here, where the United States government would effectively have nationalized or partly nationalized Anthropic and forced it to make a version of Claude that did its bidding. So when I saw the Truth Social post, my initial thought was, okay, maybe they're just going to walk away from this whole debacle and try to save some face.
Yes, it did look like that. And then, a little over an hour after Trump's Truth Social post, Pete Hegseth, the defense secretary, posted his own take on the matter on X, in which he said that he was directing the department to designate Anthropic a supply chain risk. He said, "Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic." So this was a pretty severe escalation.
And the people who thought, okay, maybe Anthropic is going to get away here without being declared a supply chain risk, realized maybe they're not after all.
Yeah. Now, at the moment of this recording, the only evidence that we have that the Pentagon plans to declare Anthropic a supply chain risk is this social media post, right?
Like, my understanding is that Anthropic has not been informed of any new proceeding against the company, and Anthropic says they would fight it in court. So while this may happen, and we should talk about what it would mean if it does, for the moment it also appears like it could just be a threat. Meanwhile, while all of this is going on between Anthropic and the Pentagon, OpenAI has been working on its own deal with the Pentagon to use its models inside the government's classified networks. There has been some reporting on a leaked message that Sam Altman sent to OpenAI employees on Thursday,
basically indicating that they were standing in solidarity with Anthropic, which is very unusual, because these companies do not like each other and their leaders have a long, contentious history with each other.
But basically, he was saying to OpenAI's employees: we are not going to cave on these exceptions either. We are committed to not having our models used for mass domestic surveillance or fully autonomous weapons. And he was actually saying some supportive things about Anthropic. But a day later, on Friday night, after this whole deal between Anthropic and the Pentagon had blown up in spectacular fashion, Sam Altman went on X and posted that OpenAI had reached an agreement with the Pentagon to deploy "our models in their classified network," basically saying: we have confidence that our models will not be used for mass domestic surveillance and autonomous weapons systems, and that the Pentagon had agreed with those principles and put them into the deal.
Those are the events of the past couple of days, and when you summarize them, here is where you land: two AI companies that claim to have identical red lines when it comes to the use of their products by the military, mass domestic surveillance and fully autonomous weapons. One of them, Anthropic, has been declared a supply chain risk, a very punitive, harsh measure that basically requires it to cut off all business with the US military and the federal government. The other, OpenAI, just announced a deal with the Pentagon to use its systems in classified networks with the same two red lines that Anthropic had held out for.
There's some nuance there, there's some details that I'm sure we'll get into, but I think if you just zoom out and look at the facts of the case, it is a truly insane series of events.
It is, and I think we should just talk now, Kevin, about this nuance that you bring up. You know, we said at the top of the show, there is some uncertainty here.
Kevin and I have not been allowed to review the contracts that Anthropic and OpenAI have with the military, although we would love to; we're Hard Fork at nytimes.com. But I think what we can tell you is that it appears this conflict comes down to this "all lawful use" standard, right? Keep in mind, the Pentagon signed a deal with Anthropic that had in place the red lines that it is now freaking out about. Then it went back to its AI labs and said, "Hey, we want to change this. We want you to say we can use this for anything that is legal."
On paper, that sounds great. Here's the problem. We don't meaningfully regulate the use of AI in this country, and as we talked about on the show in the past, we do not have a national privacy law.
These are among the reasons that Anthropic has become very concerned about what powerful AI systems might do if they were given to the military in a country where there are not actually laws around how this powerful new technology can be used. And I think the domestic surveillance one is a really interesting one, Kevin. The Pentagon has said, "Well, we're not going to domestically surveil people. That's illegal." Well, at the same time, there are other federal agencies right now that have mounted what amounts to a social media dragnet, looking through the social media posts of people trying to immigrate to this country, trying to find posts that are critical of the administration, and then using that as a pretext not to allow them to immigrate, right?
Now, maybe the Pentagon will say, "Well, you know, that's not surveillance, you know, that's part of our immigration process."
But I think folks at Anthropic would say, "Well, no, no, no. We know how powerful tools that can go through every social media post in real time might be, and that might be an area that we are uncomfortable getting into."
And so this is where I think we start to understand what is different between Anthropic and OpenAI here, right? Anthropic has said, "We're serious about this stuff." And I'm sure it's possible to write into a contract a little bit of legalese that gives OpenAI enough cover to go back to its employees and say, "Hey, don't worry, we're not going to do anything untoward," while at the same time doing a little wink-wink, nudge-nudge to the Pentagon. And the Pentagon could use these tools to do exactly what they're doing with the social media accounts of would-be immigrants, right?
And so, to me, that is what I see happening here, and it seems like a significant part of the conflict. Kevin, I know you've been on the phone like all weekend; what do you make of that analysis? Yeah, I think that's largely my understanding. When he announced the agreement that they had made with the Pentagon, Sam Altman did put out a statement that left some room for interpretation, I think, on what OpenAI had actually agreed to. So I will be very curious to see the actual language of these contracts, if that ever makes it out into public.
Again, we are Hard Fork at nytimes.com. But what I can tell you from talking with folks on all sides of this over the past couple of days is that OpenAI is framing this as essentially an identical set of constraints, right?
They don't believe that they have agreed to anything that would require them to use their models for mass domestic surveillance or for autonomous weapons. But in his statement, Altman said that the Pentagon, quote, "agrees with these principles, reflects them in law and policy, and we put them into our agreement." So basically, if you parse that very carefully, he is just saying what the Pentagon has been saying, which is that they're not going to do mass domestic surveillance because it is illegal.
And what Anthropic has been insisting on this whole time is that actually there are forms of mass domestic surveillance that are not illegal as the law is currently written, and so we want to prohibit the use of our systems for that stuff too. More than that, Amodei also said that during their negotiations, Anthropic was offered similar concessions, but the Pentagon accompanied those proposed concessions with, quote, "legalese" that would have made them ineffective, which is entirely consistent with what the undersecretaries of this agency are saying on X, which is that they were not going to let any private company dictate how they wage war.
Right, and I just think it's very important to say that Anthropic is telling a very different story about what it was actually offered.
Yeah, I mean, I think when you boil it all down, there are basically two options here.
One is that the administration and the Pentagon just have a political vendetta against Anthropic. There's a bunch of language in the statements coming out of Pentagon officials' X accounts about how these are all, you know, a bunch of woke liberals who are unpatriotic, and I think there is some sense in which this is just about style and tone and personality.
Emil Michael, one of the undersecretaries at the Pentagon who's been negotiating this deal, just clearly does not like Dario Amodei at all.
And I've heard from multiple people that there's particularly bad blood between those two. So option one is: this is purely a political vendetta, OpenAI has been chosen for this contract because the administration likes them more, and there's no substantive difference between what these two companies have agreed to. The other option is that OpenAI has actually agreed to things that Anthropic wouldn't, that there are substantive differences between these agreements, and that OpenAI is using this legalese, as you put it, to frame this as a victory when really they have conceded the thing that Anthropic objected to.
I'm not sure yet which of those two is more true, but I don't think anyone in this situation, except maybe the secretary of defense, knows.
Yeah, you know, there are two really important things about what you just said, Kevin. One is the idea that the federal government is attempting what Dean Ball, who was a member of the Trump administration and helped to write its current AI policy, called an attempted corporate murder, just based on ideology. And man, if you lived through the bias and censorship debates on social media of the early 2020s, it's really crazy to hear elected officials saying: because we have a different ideology than you, we are going to take your contract away,
designate you a supply chain risk, and try to prevent other people from working with you, right? So, honestly, Kevin, that is how the Chinese government regulates its tech companies: either you get on board with the party or they crush you, right?
So that, I think, is really chilling, and again, not just to me but to former members of the Trump administration, okay? That feels really important to say. Oh, absolutely. Now, I've been looking back through historical examples of the US government taking punitive actions against American companies. And I think it's safe to say that this fight between Anthropic and the Pentagon is, by a fairly wide margin, the most punitive action that the US government has taken against a major American company, at least this century and possibly ever.
We have seen this administration bully and strongarm and jawbone companies in the tech sector before. We have even seen them try to block certain companies from doing business with the government. But we have not seen them try to kill a company over what, as far as I can tell, are contractual disputes and ideological differences. But of course, this is why almost all of Silicon Valley has lurched to the right over the past two years. It's why Tim Cook is giving golden trophies to President Trump.
It's why Greg Brockman at OpenAI is donating $25 million to Trump's political action committee, right?
There is this sense that you have to be in line with these people or they're going to try and crush you. Until now, though, we haven't actually seen the Trump administration try to crush a company.
But now we have, and I just can't imagine what kind of chilling effect that is going to have across Silicon Valley. Casey, I want to get your take on the employee activism that we've seen over the last couple of days. There was an open letter, a petition, whatever you want to call it, going around that was signed by some employees of OpenAI and Google DeepMind and other leading AI companies, basically saying: we stand with Anthropic. We also do not want to make tools for mass domestic surveillance and autonomous killing. And expressing solidarity with the stance that Dario Amodei has taken.
Do you think that's meaningful? Do you think that's part of what is fueling some of the decisions that these companies are making? Because that has been true in the past: employees at these companies have had a lot of leverage over things like military contracts. I do think it is very meaningful. There are a lot of very well-meaning people at OpenAI, at Google DeepMind, as well as at Anthropic, who truly do not want to see the most dystopian possible AI scenarios come to pass. So it matters that they are going to their leadership and saying: we are not going to participate in this. I hope that those employees get a hold of the contracts that their employers are signing and really scrutinize them. And I hope that, if they find out their technology actually is being used for something that looks a lot like domestic surveillance, they would blow the whistle.
We really are going to need to rely on these employees in the coming years as these systems become more powerful.
Yeah, I think one other important thing to note here is that Sam Altman and OpenAI are trying to very carefully explain this to their employees in a way that does not suggest that they are just capitulating to the demands of the Pentagon.
OpenAI is saying to its own employees that they believe they actually got a stronger deal than the one Anthropic had in terms of protecting against mass domestic surveillance and the use of their systems for autonomous weapons.
Several people pointed me to this line in Sam Altman's post about how they were going to create what he called a "safety stack," basically a set of protections built into the model itself that the Pentagon is going to be using in classified situations, that would essentially prevent the use of ChatGPT for the things that they're worried about. Yeah, by the way, this is the same company that told us it was going to build safeguards to make sure that Sora couldn't be used to make images of Bryan Cranston, Kevin.
So I'm just going to suggest that sometimes, when OpenAI tells you it's going to build guardrails, they don't actually show up on time.
Yeah. I've also talked to people who say that this is basically security theater: if you dump a bunch of data that you've collected on Americans, or purchased from a data broker, into an AI model, it is not going to be able to tell whether that information was legally gathered.
It is not going to be able to tell where that information came from, and so this is not really a meaningful change.
Yeah, let me underscore that point, Kevin, because it is so important. It is legal for data broker companies to buy up data on millions of Americans, and it is also legal for federal agencies to buy that data. Now, that does not constitute domestic surveillance by a legal standard, but it is functionally equivalent, right? So this is the whole ball game here. The Pentagon already has all of the tools it needs to do what is practically domestic surveillance; it's just not called that, because it's legal to buy data about Americans from data brokers. So I understand we are deep in the weeds here, but the reason we wanted to do this episode today is to try to persuade you: this is very high-stakes stuff, it is being done in the shadows, and the nuances really, really matter.
Yeah, I think the details and nuances are where the whole story lies right now, and it's hugely high stakes.
And so, on the surface, this might look like some kind of boring contractual debate between AI companies. But this is really about the fundamental question of who controls technology. Is it the people who build the technology, or is it the militaries and the governments of the countries where that technology is built? And I think that is the high-level question under debate here, and it's one where the Pentagon and Anthropic did not see eye to eye.
I mean, this story, Kevin, is the whole reason that you and I have just never been on the side of "AI is all hype, it's fake, it's a bubble that's about to collapse," right?
We saw these systems improving in real time. We knew that very soon they would be in a position where they could do instant analysis of things like social media data, geolocation data, and other data that could potentially create massive new systems of oppression. And we are now on the precipice of those systems being rolled out under the guise of a policy called "all lawful use," because there is no law to regulate them. So it really could not be more serious, and I'm glad we're getting a chance to talk about it today.
I want to bring up one more thing, though, which is the limb that Sam Altman may have just crawled out on, right? As I'm reading through his statement, I'm trying to square it with what you were talking about earlier on this show.
It's like, okay, so you're telling me that the same day the Pentagon tries to kick one company out over two things that it says it will never do, it signs a deal with another company and gets an agreement that it will never do those two things? It's so hard to square that, right? And yet, you and I have both covered Sam for a long time, and we know that a criticism he has gotten from his former co-workers is that he tells people what they want to hear, right? This was at the root of him being fired in 2023: his co-workers saying, this guy is telling me what I want to hear, he's not being consistently candid, and he's just leaving me in this state of perpetual confusion.
And so now we fast-forward to a moment that is so much higher stakes than that, right? Because we have to take Sam Altman's word that he has signed a deal that will not enable mass domestic surveillance of Americans in the short term, and maybe autonomous murder bots in the medium term, which is, what, I don't know, three years, five years, who knows.
The reason that I note that, though, Kevin, is that in every case it always comes down to taking his word for it.
I hope the truth is that somehow he arm-wrestled Pete Hegseth down, and Pete Hegseth said: okay, you got me, Altman, we're not going to do any domestic surveillance for real, and we're not going to do any autonomous murder bots for real. My fear, though, is that either through naivete or deception, he has misled us, and we're going to find out sooner or later that those two use cases are not only legal but happening.
I think that's still a big TBD. And, Sam, if you're listening, please come on and talk to us about this.
Because I think there are still a lot of unknowns here. But I would also bring up another point, which is, you know, one of the big criticisms of Anthropic over the years has been about this idea of regulatory capture, right?
There are many people, including some very high up in the Trump administration, who believe that all of Anthropic's warnings and statements about the risks of powerful AI systems, the speed with which they're accelerating, the things they could potentially do, have been kind of a pretext, right? That they're not actually sincere about this, that they're just trying to get a bunch of onerous regulation passed so that they can enshrine their status as an incumbent and prevent smaller startups and others from competing with them.
So we've heard that term a lot, regulatory capture. This, to me, is an example of regulatory capture, right? This is a company, OpenAI, coming into a very hot dispute between its biggest rival and the United States government,
and effectively using what seem to be vibes, charm, and possibly some better political instincts to get a deal done through its relationships with the government.
So call it what you want. Call it savvy politicking or negotiating, call it hair-splitting over the details of this contract, but this is effectively a company realizing that if it wants to do business with the US government, it has to essentially abide by the terms that the US government has set. That is as textbook an example of regulatory capture as you are ever going to see.
Yeah. So where do we go from here, Kevin? I think there are a bunch of unresolved questions that I'm going to be looking at over the next few weeks and months.
One of them is: what actually happens with this supply chain risk designation? This is something that the Pentagon has said it's going to do to Anthropic, but we have not actually seen formal language about that other than Pete Hegseth's posts, and we have also not fully understood what it would actually mean for Anthropic, or what kinds of relationships it would be forced to sever with various other government contractors. So one bucket of unknowns is all the legal and contractual details of this supply chain risk designation for Anthropic. We also still have a lot to learn about what the other AI companies are being asked to agree to that Anthropic wouldn't, and what companies like OpenAI may have done to get their deal through while Anthropic's was being rejected.
And then I think there's a third bucket, which is: what does this do to the popularity of these companies with consumers? You know, I think we are starting to see very early signs that some consumers who are very upset about the Pentagon's demands here are switching from ChatGPT to Claude. One of those users appears to have been Katy Perry, the pop star, who posted a screenshot on X of her newly purchased Claude Pro plan, circled with a little red heart. So Katy Perry really said: the Anthropic employees, those are my California girls, and they're on the level. I should also say, I have to underscore, that this is exactly the kind of moral conflict that Dario Amodei has been preparing for his entire life.
One of Dario's favorite books, a book that he used to buy for all Anthropic employees, is called "The Making of the Atomic Bomb." It's a very long history of the Manhattan Project during World War II. And the reason that he wanted Anthropic employees to read this book is that he believed that eventually what they were building, the AI models, the chatbots, would become as important to national security, to the government, to the future of the global order, as nuclear weapons. And he wanted to instill in them the idea that they were doing something with profound moral and ethical consequences.
He understood that it's not just building technology; if you build something powerful enough, you take on responsibility for how it gets used.
And so I think this is exactly the shape of conflict that he was envisioning when he was telling people to read this book about the Manhattan Project.
I think you're exactly right. It has been so amazing, honestly, to watch how many predictions that were made by the rationalists and the LessWrong community in the early 2010s have started to come true.
These sorts of conflicts between the government and the big AI labs were not predicted with any degree of specificity, but there was still a thought that we were going to get here. And now it seems like that moment has arrived. I'm sure it must feel extremely surreal to Dario, as well as to many other people who have been working on this for a long time. I just hope that we can navigate out of it safely.
Well, a truly unprecedented 48 hours or so. I'm sure a lot more is going to unfold in the days ahead, and I'm sure we'll be returning to the subject here on Hard Fork. But perhaps by then I'll be out of this ski chalet.
Yeah, I hope you make it down safely. And I think you should go skiing. I know you're not a fan, but I think you should do it.
If you knew where my center of gravity was, you would know that Kevin Roose just tried to kill me live on air.
I'm Judson Jones. I'm a reporter and meteorologist at The New York Times. For about two decades, I've been covering extreme weather, which is getting worse because of climate change. And it's becoming more important to get timely and accurate weather information. That's why we send these customized newsletters, letting you know up to three days in advance about extreme weather that could impact you or a place you care about.
At The Times, you can be confident that everything we publish is based on the most accurate scientific and vetted information available to us, because we want you to be able to make real-time decisions about how to go about your life.
This is the kind of work that makes subscribing to The New York Times so valuable, and it's how you can support fact-based, independent journalism. So if you'd like to subscribe, go to nytimes.com/subscribe. [Music] Hard Fork is produced by Whitney Jones and Rachel Cohn. Today's show is engineered by Katie McMurran. Our executive producer is Jen Poyant.
Original music by Alyssa Moxley and Dan Powell. Video production by Sawyer Roque, Pat Gunther, Jake Nicol, and Chris Schott. You can watch this whole episode on YouTube at youtube.com/hardfork. Special thanks to Paula Szuchman, Pui-Wing Tam, and Dalia Haddad. You can email us at [email protected] with your AI headlines. [Music]

