The Lawfare Podcast

Lawfare Daily: The Pentagon Designates Anthropic as a Supply Chain Risk


In a live conversation on March 2, Lawfare Editor in Chief Benjamin Wittes spoke to Lawfare Senior Editor and Research Director Alan Rozenshtein about the Pentagon's designation of AI company Anthropi...

Transcript


The Electronic Communications Privacy Act turns 40 this year, and it's showing its age.

On Friday, March 6th, Lawfare and Georgetown Law are bringing together leading scholars,

practitioners, and former government officials for Installing Updates to ECPA, a half-day event on what's broken with the statute and how to fix it. The event is free and open to the public, in person and online. Visit lawfaremedia.org/ecpaevent for details and to register.


To simultaneously say, this product is so important we're going to force you to give

it to us. It's so safe that we're going to use it during an active military engagement. And it's so dangerous that we're going to burn you to the ground. So it's quite possible that the people that are in good faith are assuming that when the government says, "Oh yeah, according to law and practice and regulation in this

DOD guideline, we're not doing autonomous weapons," like, that protects them. And DOD is saying, "Okay, that's great for us, because we can change the guidelines." In a live recording on March 2nd, we talked about the Pentagon's designation of AI company Anthropic as a supply chain risk, the reaction from Anthropic and other AI companies, and the legal challenges the designation

is surely going to face. I am here with Lawfare Senior Editor Alan Rozenshtein, professor at the University of Minnesota Law School and sudden expert on procurement law. Alan, how did you spend your weekend? Reading a lot of procurement law, I have to say. I just have to say, the nature of expertise

is really relative.

I really always think of the, in the land of the blind, the one-eyed man is king.

And so I spent a fun weekend going from zero to 60, or at least to 45, I would say.

So, what caused you to do a crash course in defense department procurement law?

So there's this little company that some of us may have heard of called Anthropic. And they make an artificial intelligence system called Claude. And Anthropic has, actually, for the last few years, been at the forefront among the major AI labs of working with the government, in particular on military and classified systems. And actually, last summer, the company signed a deal with the Pentagon to increase the

Pentagon's usage of those systems, again, on classified networks and for military purposes. As part of that contract, Anthropic had a couple of what are now being called red lines. Primarily that its systems would not be used for mass surveillance of Americans, and also that its systems would not be used for fully autonomous military operations.

At the time, the defense department clearly was okay with that.

But in January, Secretary of Defense Pete Hegseth put out a memorandum on the military's use of AI. And in that, he demanded, or he set the policy, that the military would only allow AI contracts where those contracts permitted, quote, all lawful uses of those systems.

So basically, he got tired of companies imposing sort of additional restrictions on the

use of their systems beyond what was kind of required or permitted under U.S. law. That, plus the use of Claude, or the reported use of Claude, in the operation to capture the Venezuelan president Nicolás Maduro, and some reports that maybe some folks in Anthropic asked some questions about that, clearly caused some alarm bells to go off in the Pentagon.

And over the last two weeks, we've seen an increasingly tense, kind of slow-rolling standoff between the defense department, which has been pushing Anthropic to remove usage restrictions from its contract, and Anthropic, which has been willing to play ball somewhat but has these

couple of red lines.

And then last week, Hegseth threatened to invoke this law called the Defense Production Act,

which would have potentially required Anthropic to provide Claude without the restrictions. But in the end, what ended up happening was that on Friday, President Trump wrote a Truth Social post banning Anthropic's use on any government systems. And then soon after, Hegseth put out a post on X purporting to designate Anthropic as a, quote, supply chain risk. And we'll get into, I'm sure, what that means in detail, but in

particular, banning not only Anthropic from government contracts, from DOD contracts,

but banning, and this is the key, any business that does government contracts from itself

doing any business with Anthropic. So it's kind of like a secondary boycott. It's, I mean, it's a sanction, it's essentially a sanctions regime against Anthropic. And the reason this is so important is not only does Anthropic really rely on its enterprise customers, many of which also do business with the government, but two of Anthropic's

main cloud compute providers, Amazon and Google, are themselves defense contractors. So if Hegseth's designation is read to its utmost, it's effectively a death sentence for Anthropic, because Anthropic loses its capacity for compute. So Anthropic has said that it will, well, there's a little bit of confusion right now whether Hegseth has formally designated Anthropic or is about to designate Anthropic,

unsurprisingly, the process in DOD has not been great on this, but at the very least, in response Anthropic has put out a statement saying that it would sue in court against any supply chain designation.

So where we are right now is, I think we're waiting for the formal designation to come

in and then for Anthropic to run to the nearest courthouse and sue to enjoin that designation.

All right, so first of all, memo to Anthropic: please sue in the District of Columbia.

It would be much more convenient for Lawfare than, you know, covering a case out in California for Anthropic. You know, we already have Anna Bower flying all over the country, to Tennessee, to Florida, to Georgia, don't bring California into this. All right, so before we get into the details of the law,

I want to try to isolate what this is really about, because Pete Hegseth knew we were about to attack Iran when he did this on Friday, right? This happens Friday evening, and by the time we all wake up on Saturday morning, we are at war with Iran. So Hegseth made a conscious decision, and Trump made a conscious decision: let's go to war with an American defense contractor the day before

we go to war with a significant foreign adversary. So they must have cared about this a lot for some reason. And yet Anthropic has made clear that they don't engage in mass surveillance of Americans and they don't aspire to; they even issued a statement over the weekend that they're not building any fully autonomous weapons. So is this just chest thumping, we're going to beat up on you and make you do what we want

to do, for the symbolism of it? Or do you think there is something the defense department actually wants to do that Anthropic does not want Claude to help with?

So that is an excellent question. So let me first say, I think in this administration in particular

you can never rule out personality driven decision making which is obviously kind of unsatisfying

from, like, a legal and policy analysis. You know, like, we all are used to administrations where, you know, good people, bad people, whatever, they're fundamentally rational actors, you can kind of game out what they're doing. This is just not the case for this administration. So it would not surprise me at all if a lot of this is driven, you said ideology, I'd maybe just say pique, like they just got pissed off because, you know, some, you know, nerd with, you know,

curly hair from San Francisco is purporting to tell, you know, Secretary quote-unquote of War Pete Hegseth how to use his war fighters. And, you know, just as Donald Trump once bragged that the way he set Switzerland tariffs is that after he set them, the Swiss Prime Minister called him and complained, and because, quote, I didn't like her tone, I just doubled them instead of lowering them. And to be clear, he was bragging that this is how he sets tariff policy.

I really think that, um, and this is a family show, so I won't use the analogy, I...

I really think there's an element of I'd like to show you that my stick is bigger than your stick

and maybe that is for some rational purpose down the line, but I think a lot of it is because

I just want to show it to you, because domination and this kind of symbolic politics is how these people think, right? And it seems kind of crazy to potentially threaten the very foundation of American AI, and we'll talk about why I think that is part of this, based on chest thumping. But I mean, we just went to war, for reasons that I'm not fully understanding, I mean, is this within the, well,

the realm of rational possibility? So I think there's a lot of that, that might be that. But I do think there also may be substantive issues here. I think there is an ideological component here, that the military does not want its contractors to be telling it how to use its tools and how to set military policy, right? That if you're going to have a military-industrial complex, it's the military side, not the industrial side, that should be running it. And I should be clear, I am actually very

sympathetic to that position, and I think it's important, even as we're analyzing this sort of

symbolic policy that Hegseth is implementing right now, to try to abstract a little bit from the personalities involved and say, okay, but what should the overall relationship between the military and AI companies be? And I think it's actually quite defensible that it's the military that should

ultimately make those decisions. Now, that's separate from, okay, what should the military do if a

company doesn't want to play ball. But I think there's an ideological component here, and I think that's one worth taking very seriously. And then, finally, it may be the case that the military is trying to build autonomous weapons and do some surveillance that Anthropic would be less comfortable with. Now, the military is saying it's not doing that, but a lot of this depends on how you define fully autonomous and how you define unlawful surveillance, and, you know,

depending on how you squint, you can call things mass surveillance or not. And so, obviously, the military does a huge amount of surveillance, the NSA is part of the military,

and a lot of it's lawful. And so maybe they just never want to put themselves

in the position of having to call up Dario Amodei and ask for permission. Right, although, I mean, I do think that software vendors and software-as-a-service vendors are different from other vendors. It's not like, you know, you buy an F-16 from whoever makes the F-16, Lockheed, and then you decide how to use it as a military. Whenever you're buying software, you're actually buying a license to use software, not the software itself, and that always comes with a long click-through agreement

that is on the company's terms, right? And so why is this any different from Microsoft saying, you know, you can't use Microsoft Word to do X, Y, or Z? Yeah, so, yeah, it's a fair point. And the idea that is sort of circulating in at least, you know, some parts of the internet, that it's totally unprecedented for a company like Anthropic to try to impose conditions, that's just not true. Conditions are imposed all the time, that's totally standard. Now, the question is what

is the nature of those conditions. So I have not read, like, the master services agreement between Microsoft and the military. I'd be very surprised if it said you can't use Microsoft Word to plan an invasion of Iran or something like that. Maybe it says that, you know, you can only use Google Docs

for that, exactly, exactly, good luck with that. So I think the real question,

but look, I think the real question is not, you know, is this legitimate or not. It's, in this particular case, does the government want to abide by these restrictions? The company has every right to insist on them, and the government has every right to say no thank you, right? And, like, if we were civilized people, we would just shake hands and this would not be a story. And so the way this would resolve, presumably, if we were civilized people, is the government has some

kind of exit term from the contract, or simply doesn't renew the contract when it comes back up. Exactly, and it chooses not to do business with Anthropic because the terms are not adequate. Yeah, and then it's, you know, a 24-hour story, kind of interesting, you know, and then we move on with life. All right, so the government doesn't do that. Instead, it designates Anthropic as a supply chain risk. Let's pause here and say, everybody was expecting them to do something under the Defense Production

Act. What could they have done under the Defense Production Act? Yeah, yeah, and I was quite

surprised by this. I did not have on my bingo card that I'd be spending the weekend

studying supply chain risk, because it seems so outlandish. But here we are. So, under the Defense

Production Act, which is a Korean War-era statute, passed largely to kind of regularize

what had been done in World War II, where the entire economy became a military economy

under some combination of cooperation and cajoling from FDR, right, our first

"you can just do things" president, the government can require companies to fast-track government contracts, that's the most straightforward thing. But in addition to that, it can require companies to enter into contracts with the government, to sell the government standard commercial goods and services, and, on a particularly extreme reading of the Defense Production Act that has not been tested in court, so there's some question about that, to even produce new products for the government.

And although the Defense Production Act is quite old, it's actually been renewed, like, 51 times, it has a very short sunset clause. And at some point, I think 10 or 20 years ago, Congress explicitly included, you know, software and high technology as part of it. So the Defense Production Act really is just an if-we-need-a-command-economy statute: (a) the government can compel you to produce stuff, and (b) it's got to pay you for it. Yes. And I should say, the extent to which it can, you know,

force a company to produce wholly new things, that's somewhat unclear. But the text is certainly very, very broad. And my thought was that if the government wanted to require Anthropic, at the very least, to provide Claude, right, like this current system, but under different contractual terms, it could do that pretty easily. And while I wasn't a fan of that as a policy

matter, I thought the government had a pretty good legal case. So, naively, I thought that that's what

the government would do. That is not what the government chose to do. All right, so what the government chose to do, I want to just assert, is closely analogous to what it did to Harvard University, what it did to law firms, right? Which is to say: you have asserted your rights, in this

case, rights under a contract; in the case of Harvard and law firms and NPR, First Amendment rights,

that we don't like, so we are going to take retaliatory action against you, using in this case not money, which is what they did with Harvard, but using our ability to prevent other entities from doing business with you, to prevent you from contracting with the government. Which is the same, money, right, but it's a little bit more indirect, except in the government contracting sense. My instinct looking at this is that, first of all, as a normative evaluation matter,

we should be exactly as skeptical of it as we are with Harvard or the law firms or NPR and secondly we need to interrogate which is the point of your article the legal basis for the actions

that they're taking. So let's take the easy one first: is there any reason to think of this in a

different framework from the Harvard action or the law firms action? No, but I might zoom in just a little bit, because I actually think, and again, it's been a while since I really dug into, like, the

exact details of the Harvard and law firms stuff, I think this is actually more like the law firms

than it is like the Harvard action. Because, if I understand, the main Harvard issue was the withdrawal of federal funds. I mean, I guess that's not true as I think about it, because I think Harvard also was banned from getting foreign students, which is actually a little bit more like what's happening here. But the point I'm trying to make is, this isn't just withdrawal of funds, right? This is essentially kind of the persona-non-grata-ing of an entity, right? And for Harvard, that was done by

restricting international students; for the law firms, it was done, you know, actually very similarly to what's happening now. And yes, I think this is the right way of thinking about what is happening, right? It's almost a sanctions-regime attempt against a domestic company, right? All right, so what authority does the government have, let's start with the one where their authority should be stronger, to point at a company and say,

you're a supply chain risk, and nobody in the government is allowed to do business with you?

Let's hold aside for a minute the secondary sanctions issue. The government decides,

like, Alan Rozenshtein Inc., the president issues a tweet or a Truth Social post that says no

government agency can do business with Alan Rozenshtein Inc. because it's a supply chain risk. What

do we, what authority do they have to do that? Well, first, Benjamin, you have to promise that you'll

frame that for me for my office. I will frame it for you, thank you, thank you. So there are two statutes, both from the 2010s. One is the Federal Acquisition Supply Chain Security Act, FASCSA, which is very hard to say. Yeah, it's a bad acronym, it's a very bad acronym. And the other is the statute 10 U.S.C. 3252, which was initially enacted as part of the 2011 National Defense Authorization Act and then was made permanent in the 2018 National Defense Authorization Act. I mention them

both because, although it seems that the government is acting under Section 3252, they're still both useful to think about, because I think they express kind of how Congress was thinking about the issue of supply chain risk at a certain time in the 2010s. They did enact two somewhat

different statutes, but I think you can sort of read them together. And that's important because

the language of both statutes is reasonably broad, but you have to understand the context here. But let me just focus on 3252, which is what we all think the government is acting under. It's certainly what Anthropic thinks the government is acting under, it's what other knowledgeable people think the government is acting under. And the reason is that, while FASCSA requires, like, an interagency process and 30 days notice and, like, a whole thing, it's a more regulatory statute, that's

really not what's happening here. 3252, this other statute, basically allows the Secretary of Defense,

essentially on his own authority, to find that a particular supplier is a supply chain risk and then immediately exclude that supplier from government contracts for, quote, covered systems, basically national security products. Now, I want to pause you right there, because when you designate something as a supply chain risk, it does not sound to me like you can say out of one side of your mouth, give us this thing on the terms that I want,

or out of the other side of your mouth, you are a supply chain risk. That feels a little bit like, I don't know, "the food here is terrible, and such small portions," right? I mean, yes, I mean,

let me try to steelman that argument, because here's what I can imagine a DOJ,

you know, federal programs attorney saying in court, when the judge quotes Annie Hall to this effect: well, look, we think that under the current contract term regime, the use of Anthropic is intolerable, and the very idea that we have to call Dario Amodei for permission, right, even that that's a possibility, is totally intolerable. But if you remove the contract restrictions, suddenly it's not a problem anymore. Look, I'm just saying, if you had to speak out of both sides of your mouth, that is what

you would say. But what's the language of the statute? I mean, it's not, if the Secretary of Defense finds that a product's contractual terms are intolerable. No, it's that this supplier, you know, is an adversary whose products will, quote, sabotage, subvert, or maliciously introduce unwanted function. I'm not saying it's a good argument, I'm just saying, it seems like the statute is not about, we negotiated a contract that,

in retrospect, we regret, and we don't want to wait until the contractual terms are up to renegotiate it, and we don't like Dario Amodei's hair. Look, I'm trying to play along here, but as our esteemed Lawfare colleague Anna Bower likes to put it, we live in the dumbest of all possible timelines.

Yes, of course, it is completely insane to simultaneously say: this product is so important we're

going to force you to give it to us; it's so safe that we're going to use it during an active military engagement; and it's so dangerous that we're going to burn you to the ground. Yeah, you obviously can't have all three of those at the same time, right? It just seems like that dog won't hunt. It's bad, it's bad, man. All right, so on we go. What is Anthropic's argument going to look like? I mean, beyond what I just said,

that this does not cover, I mean, it sounds to me, just listening to the statute, that it's, like, directed at Kaspersky, or that it's directed at, you know, some foreign entity

that you want to keep the U.S. supply chain pure of, not an American existing defense contractor

that you have a contract dispute with. Am I overstating it? I mean, funnily enough, I, along with the fabulous Howard University law student Michael Andreas, published earlier today a three-and-a-half-thousand-word analysis of all the things that we expect Anthropic will say when it sues. This is, how do we say, a target-rich environment. Every layer of this is just a disaster for the government, right? So give us an overview of the arguments, before we get to the secondary sanctions

problem. Yes, what are the major arguments that are available to Anthropic? Well, the first argument

is that it's not actually clear that this law can even in principle apply to a U.S. company like Anthropic. Now, to be fair here, the text of the law does not single out foreign companies, this is not one of those laws. But when you, for example, read the legislative history of this particular law, it's all about the threats from globalization to supply chains. But Anthropic, of course, is headquartered in a lovely office building in San Francisco.

When you look at the other law, the FASCSA law, and again, they are different laws, but I

think they're getting at the same thing, that law's legislative history is all about

Kaspersky, Huawei, and ZTE. And then another thing that is worth

mentioning is, whereas FASCSA actually gives the targeted company some procedural protections, 30 days notice, some D.C. Circuit review, 3252 provides essentially no procedural protections. Now, that's fine, it doesn't have to provide procedural protections. But given that FASCSA basically only applies to foreign companies, it'd be very weird if a law that applies to domestic companies provided less protections than a law that applied pretty clearly to foreign companies.

Right, that's the exact opposite of what you would think, because, of course, domestic companies have due process rights. No one is owed a government contract, but they are definitely owed some notice and an opportunity to be heard, something reasonable, if the government is going to suddenly cancel contracts and especially impose a secondary boycott. So it's just not at all clear, as a threshold matter, that any of this applies. And the reason that's important is because

courts are generally, and I think rightly so, loath to really second-guess the specific

national security determinations of the executive branch. So it's a much stronger argument for Anthropic to go in and say, it's not that we're not a supply chain risk, though I think they can win that argument; this just doesn't apply to us. This is a classic example of what's called ultra vires action, where the government is invoking a law that just does not apply to this situation. So I think that's the primary argument here, right?

But it is also the case that a court, I think, will be able to, you know, under the Administrative Procedure Act, review the actual determination for being arbitrary and capricious. And here, I think, Benjamin, the exact point we were just talking about, that you can't simultaneously maintain all three of those things about the same product, goes to the arbitrariness of the determination. Yeah, yeah, I mean, I may literally use this to teach the concept next year when I teach administrative law. And then add to it that they've

delayed enforcement for six months. So it's, like, it's so dangerous that it poses a supply chain risk, so six months from now we're going to stop using it and ban everybody else from using it.

Exactly. And then, finally, there are concerns about pretext here. And the pretext comes in

sort of two flavors. One is that, when you look at the public statements that Secretary Hegseth and President Trump have made, they are not exactly sober, reasoned, "we have analyzed Anthropic and we have decided that, you know, on points one, two, and three." No, no, it's all about how, you know, Trump says Anthropic is radical left, woke, something something, and Hegseth insults it a bunch of times. It's pretty clear they don't

like Anthropic, they don't like Amodei, they don't like, I don't know, whatever ambient leftism they are imputing to Anthropic, which I actually don't think is accurate, but it's kind of beside the point. So there's a pretext concern there. There's another pretext concern, which is going to get us to a kind of equally interesting sort of side quest, maybe we can talk about this later in the conversation, about OpenAI. Because, just as a preview, very briefly, we're going to get to OpenAI and Grok in a moment.

Yeah, right. On the, I think, very same day, or basically simultaneously as Hegseth is

setting fire to Anthropic, he's also signing an agreement with OpenAI that, and this is where it gets very bizarre, OpenAI claims is actually as, if not more, restrictive than what Anthropic wanted. Now, we're going to get, in a few minutes, to whether that's true

or not, but let's assume it's true. Well, then, now I'm utterly confused, right? Because,

if Anthropic is such a dangerous supply chain risk, and OpenAI, which is bragging about how it's

going to forward-deploy engineers in DOD and impose all the safety stack stuff and it's going to have

all these red lines, that's not a supply chain risk? The math does not math. And again, we haven't gotten to the secondary boycott issue yet.

So let's get to the secondary boycott thing. Let's imagine we were dealing with Kaspersky, yeah, and we were dealing with something that was generally understood to be

a legit supply chain risk. Again, not making any comments about Kaspersky, but that's how it's

understood, rightly or wrongly. So imagine that Secretary Hegseth had said, all right, any company that does business with Kaspersky, even if it's insulated from its business with the defense department, can't do business with the defense department. Does the SecDef have the authority to do that?

He almost certainly does not. So what the SecDef does have the authority to do, and this makes sense,

is he's allowed to say, Kaspersky, you're a supply chain risk, you can't sell your products to us, and also, anyone who is building a national security product for us cannot use Kaspersky as part of that product, right? And maybe you could even make the following argument: that, like, the nature of Kaspersky, or the nature of a model like Anthropic's Claude, is such that you can't kind of isolate those from the business, so if you use it anywhere, you can't sell us a product. Maybe you could make

that argument, that would be more fact-specific. But what you definitely cannot do is say, and also, you can't do any business with Kaspersky at all, like, you can't provide financial processing to Kaspersky, right? There's nothing about 3252 that gives you that authority, unlike IEEPA, which does not allow you to impose tariffs but does allow you to designate an entity and say, you're not allowed to do business with that entity. Well, it gets even better, because this issue has come up.

So in the 2019 National Defense Authorization Act, there's a Section 889, for those of you following along at home.

And there, Congress basically imposed a full-on secondary boycott

of Huawei and ZTE, right, the two Chinese telecommunications firms. So in that situation, Congress said anyone who uses Huawei and ZTE anywhere in their systems, right, in a substantial way, cannot do business with the government. Now, interestingly, even that did not go as far

as what Hegseth is purporting to do. Because remember, what Hegseth is purporting to do, right, at least

based on his X posts, which apparently is how we do national security policy now, is prevent, let's say, cloud compute providers from selling compute to Anthropic, right? That's actually even beyond what Congress did in that section. So again, all of this is very strong for Anthropic. And look, we haven't even gotten to, you mentioned IEEPA, the tariff case, we haven't even gotten to the sort of brooding omnipresence in the sky that is the major questions doctrine, right? The idea

that, you know, especially these days, with a somewhat conservative Supreme Court, we don't read really dramatic grants of policymaking authority into unclear delegations to the executive branch. And again, you know, burning an American frontier AI company to the ground because you don't like how it contracted with you, that's a pretty major question, right? And we know from the tariff case that the major questions doctrine now does apply to presidential

action, and to actions purporting to be in the national security space, which was a bit of a question prior to last week. Yeah. All right, so let's talk about OpenAI and Elon Musk, because we have two different reactions to this demand from Hegseth from these two. Elon Musk says Grok will absolutely do anything the government wants it to do, and OpenAI says it has a contract that's more restrictive than the one that Anthropic

is in trouble for. So what do we actually know about OpenAI's actual contract, and do we know that there are real restrictions in it?

Yeah, well, let me just say one thing about Grok for a second. Not super surprising that this is Elon Musk's position.

I think, you know, putting aside my feelings about Elon Musk, it is a perfectly coherent position.

I will say, again, to everyone involved: please try to think more than six months ahead, because, you know, no party is in power forever, and you want to be careful about the precedents you set, right? So just as every Democrat should always think about what happens when Trump and J.D. Vance are in power, every Republican should think about what happens when President Newsom or President AOC is in power, right, or when Hakeem Jeffries is just Speaker of the House. Or even then. Yes, exactly, right. So I'll just leave that there.

The real question is OpenAI. So this I find to be one of the most bizarre scenarios I have ever witnessed in my time studying AI policy, because you have this contract that OpenAI has signed with the government. Now, OpenAI has released several important

provisions from that contract, okay, they do this in a blog post. Those provisions, I think pretty clearly,

and this is the essentially near-unanimous consensus of all the law types engaged on this issue, at least on X, do not impose meaningful red lines. They just don't, because essentially what it says is: you will not use our systems for autonomous weapons where such use is banned under law, policy, or practice. Okay, but even if today it is banned under law, policy, or practice, which I'm not at all clear it is, what happens when tomorrow it's not banned? That's not a red line; that's just restating that all lawful uses are permitted. Similarly: you will not use our tools for mass surveillance where that is banned by the Fourth Amendment and FISA and Executive Order 12333. Okay, but a ton of mass surveillance as normal people understand it is perfectly legal under the Fourth Amendment, FISA, and 12333, right? Again, let's have a totally separate conversation one day about whether we should have AI mass surveillance and how; that's not the question. The question is

just, what did OpenAI agree to?

Okay. And the terms of the contract are not public, I take it?

Well, not the whole contract, just these paragraphs. So OpenAI releases this, and everyone, including myself but not just me, starts pointing out, like, guys, what is happening here? These are not red lines. So then Sam Altman, and I'm going to go into detail here because I really want to emphasize how important it is for the future of technology and the American democratic experiment that we get AI right, and this situation is not providing a lot of confidence. Sam Altman says, here, we're going to do an ask-me-anything on X: ask me questions, and I'm going to have some of my senior people, my national security person and some engineers, come and join. And it just gets worse and worse from there, because people start asking, politely: these are not red lines. And then what you're getting is responses saying, oh no, you know, there are other parts of the contract that actually fix the nature of the law at the time we signed it, so DOD can't change its mind. But we're not releasing that part of the contract, and I don't know why we're not releasing that part of the contract. So just

from a comms perspective, I think this is honestly kind of a disaster. I mean, no one needs to

take my comms advice, but reputation matters here. And look, I should say, I know a lot of people at OpenAI, I respect a lot of people at OpenAI, but I will say OpenAI, I do not think, is covering itself in glory in terms of pushing back against what has always been its reputation, fair or not: a little bit of a shady, a little bit of a talk-out-of-both-sides-of-your-mouth actor. That's not great. But the comms issues are whatever; the real question is, well, what did OpenAI agree to? And there seem to be three possibilities, and I honestly cannot tell you which one it is. One possibility is that OpenAI has in fact gotten the red lines Anthropic wanted, and there's some other

part of the contract that will clarify that. That's possible. Now, that then raises the question of why we're trying to burn Anthropic to the ground, right? But whatever, that's not OpenAI's problem. That's possibility number one. Possibility number two is OpenAI is just lying, right? These are not real lines.

Or too cute by half.

Yeah, or whatever, but these are not red lines; OpenAI lawyered this very carefully so as to give itself wiggle room, and it's hoping no one notices. Or third, and I don't know, but this might actually be what's going on, and this is maybe even scarier: OpenAI thinks it has red lines,

the DOD does not think it has red lines, and this happens all the time, right? Very frequently in contract drafting you just don't have what's called a meeting of the minds, so people think it means different things, right? And so it's quite possible that, you know, the people there are in good faith assuming that when the government says, oh yeah, according to law and practice and regulation and this DOD guideline, we're not doing autonomous weapons, like, that protects them. And DOD is saying, okay, that's great for us, because we can change the guidelines. But it's completely unclear which of those three it is, which is kind of maddening.

Let's play a little inside baseball here, which is that OpenAI's counsel

are serious national security lawyers, including former colleagues of yours at NSD, people who know their way around terms like mass surveillance. And it seems to me hard to believe that you could have a loophole here big enough to drive a DOD-sized drone through and that not be apparent to the legal team that negotiated this contract for OpenAI.

I mean, yes, that seems to me to be the most likely answer.

You think that's more likely than that they negotiated something that allows them to say they have the red line, and allows everybody to know that they don't really, but it allows them to say it? Whether you call that lying, or spinning, or whatever you call it. But I thought that's the same, isn't it just what you said? You have these brilliant lawyers inside OpenAI; they have access to the best lawyers in the universe.

No, but what they're saying publicly and what they're advising the client could be quite different. What they're advising the client is: look, you go say whatever you want about this contract and its red lines, but these red lines are not enforceable under the terms of the contract, and at the end of the day DOD is going to, you know, type, hey ChatGPT, can you loose the fully autonomous drone, or the robot dog, and have it make decisions about who to kill, and there's nothing in the contract that will stop that, as long as that is legal under U.S. law at the time. Look, based on what I've seen of the contract, that seems to be the most likely outcome. The reason I'm hesitating to say that is because it raises real questions, honestly disturbing questions, about the candor of OpenAI's public statements.

And, you know, given that OpenAI has for years talked a very big game about how dangerous artificial intelligence is and how important it is to be a good steward of it, it's not great. Look, if it's going to go down the route of X or, you know, Palantir or Anduril or whatever, I would much rather it just say so, right? Just say so, and then we can have that debate. But I cannot emphasize enough how big of a reputational disaster I think this is. And look, they don't need to care what

I think, right? But I will just say, in Silicon Valley the only thing that's more valuable than

compute is talent, right? Getting the best engineers is, you know, possibly a billion-dollar or ten-billion-dollar asset. You know, that's why Meta is paying these people a hundred million dollars to come over. A lot of these engineers are motivated by money; they're human. A lot of these engineers are also motivated by wanting to do the right thing. And so, you know, if I were OpenAI, I would really worry that my reputation, on top of all the other stuff that's been happening, is going to be very seriously and durably harmed in this very, very small community of elite AI engineers in San Francisco. And the fact that they have not resolved this, I find puzzling.

All right, so I want to go back to a point that you made in an earlier piece that you wrote, which was: leave aside the merits of this dispute, this is a truly horrible way to make the rules under which U.S. industry is going to interact with the Defense Department over major policy questions. And I want you to flesh that out, because it's kind of lying in the background of

a lot of what we're talking about. What should be the mechanism by which we decide, as a society, whether an Anthropic should or should not have its product used for, you know, autonomous weapons and for mass surveillance? Why is a contract dispute not the right answer to that question?

And I should say, I mean, a contract dispute is the right answer for how you operationalize it; it's just a weird vehicle for setting the substantive principles. Let me separate your question into a substantive component and a procedural component. So the substantive component is that, and this is, you know, as much fun as it is to sit and

mock Pete Hegseth's incompetence, I think it is useful to try to zoom out a little bit and kind of

think about this issue in the broader context. The way I view this is that this is the opening shot in what will be by far the most important AI regulation question of the next several years, which is: to what extent will we nationalize the AI industry? It was never going to be realistic, and people who have thought about AI for much longer than I have have understood this, that the government, which is to say the people, through their elected representatives and bureaucrats, were going to sit in D.C. and twiddle their thumbs and go, oh, how interesting, while a very small group of people in San Francisco built the machine god. That was just never going to be realistic, right? And so we as a society are going to have to figure out, one way or the other, how much control we're going to have over these companies. And all the regulations about, you know, privacy and this and that, in labor and corporate law, they're important,

but I think they pale in comparison to this more foundational question.

Can it kill you?

Well, but

also, just, at the end of the day, how much are we going to say this is a cool product made by the private sector, versus this is an epochal transformation in human civilization and therefore we're going to have to control it a little more? But my point is, it's not an accident that we confront that question, and that we don't confront it ultimately over: will it discriminate against you, can it, you know, judge you for a real estate transaction, all these kinds of consumer protection things. Where the rubber hits the road, where the Defense Department says no, we're in charge of this, and the AI people say no, we are, is when it comes down to: can it kill you?

Yes, killer robots have a fabulous way of focusing the mind.

That's right, focusing the mind. Okay, this is a legitimately difficult question, right? And, you know, we'll

be thinking and writing about this for a long time. But that's one question. Then there's a separate question: okay, whatever the answer to that question is, who's going to determine it? Right now this is being determined in not a great way, through a very unpleasant contract dispute between, you know, not the best secretary of defense that you could imagine and the CEO of a company. And look, I'll be honest, I like Anthropic. You know, I know some people there, I've met some of the co-founders. I've never met Dario, but he seems like a really thoughtful and interesting guy. If someone's going to build the machine god, I can imagine worse people to do it than Anthropic. But, like, I didn't vote for, like, Grok. Like Grok, right. But look, I didn't vote for any of these people, right? I'm not comfortable with them setting broad societal policy. But I'm also not comfortable with the current people in the Pentagon or the White House doing it either, because they're not great. Now, we do have an institution that is supposed to do this. It is called the United States Congress, and, you know, just saying that phrase should fill everyone with a certain degree of existential dread and malaise. But at the end of the day, this is not how this is going to be decided; who knows how this is going to be decided. But under any rational system, Congress would be the one to decide it, right? It would be Congress that would say, hey, here's how,

DOD, you can and cannot use autonomous weapons; here's how you can and cannot do surveillance; here is the ongoing oversight that, you know, our armed services committees and intelligence committees are going to do. And I would imagine that had Congress done that, or had Congress even shown any sign that it will do this in some rational way over the next few years, someone like Dario Amodei would feel a lot better about signing onto an all-lawful-usage policy. Because look, as much as Anthropic is often criticized for, you know, kind of a holier-than-thou, we-know-what's-best attitude, and maybe there's some of that, I actually don't think at the end of the day they want to be the ones doing this. Like, I think they want to go and, like, cure cancer or something, and they would much rather have the democratic process come to some reasonable resolution that, even if they don't agree with it in every particular, they can live with. But in the absence of that, this is how we do it, and it's bad.

All right, let's wrap up with the question of what's going to happen. So we assume at some point there will be a document that translates the Trump Truth Social post and the

Hegseth statement into some kind of policy or some kind of action. I mean, have they done anything yet, or have they merely said they are doing something?

So it's unclear. There's

the Trump Truth Social post that orders U.S. government agencies to not do business with Anthropic; there's no secondary boycott there. And some agencies are doing that, I think Treasury and, like, the housing mortgage agencies are doing that, but they're just doing that kind of as a Truth-Social-as-executive-order thing. Maybe that'll be challenged, but that's kind of separate. Then there's the Hegseth post, which is a little confusing, because it ordered some undersecretary of defense to do the designation, but then it said effective immediately no one can do business with Anthropic, which certainly sounds to me as if Hegseth was purporting to do a

designation. Now, the law requires Hegseth to make some written findings and to transmit those findings to Congress in classified or unclassified form. It does not, oddly enough, seem to require the executive to tell the company. I think that's got to be a drafting oversight, because how's the company supposed to know? My understanding is that Anthropic has not yet received a piece of paper, but they're certainly acting as if this is real, and they have said explicitly that they will challenge it in court. So let's assume the procedural stuff gets resolved one way or the other. What I would expect is going to happen is that Anthropic will go to court, and hopefully

on the East Coast, not the West Coast, because, you know, Lawfare travel aside, you should do it in the place where you're supposed to do it. And they'll say, guys, this is really illegal, here's a Lawfare post, like, here's our brief, it's a nice Lawfare post, right? And I would expect that if they get a remotely within-parameters judge, there will be a, and I should say, I'm not a litigator, so I don't know the exact details of how this works, but they'll get a temporary restraining order, they'll get an injunction, they'll get something or other, and this will all kind of pause, at which point there will begin a lot of litigation. I expect that Anthropic will win that litigation, for all the reasons we talked about, and I think it's so obvious that they will win that I almost wonder if suing the government

would almost be de-escalatory for Anthropic. Because I suspect that what may very well happen, and I would just note that I think an hour ago the Wall Street Journal reported that the White House, or the government, has decided not to appeal the injunctions against the law firm punishments that it tried a long time ago; it's withdrawing the appeals. I suspect some of that could happen here, where the White House is going to, you know, designate them a supply chain risk so that they can beat their chest, this will all be stopped in court, everyone will lose interest in a few months, and maybe people will even kiss and make up, because, as we can see from Anthropic's services being used in the war against Iran, these are useful services. Now, from Anthropic's position, I mean, that's obviously a better outcome than losing on the supply chain risk, but it's still very dangerous, because Anthropic, although

it's absolutely a leader in AI, unlike a company like Google or Meta, let's say, which has kind of an infinite cash machine through advertisements that it can use to shovel money into the money pit that is AI, Anthropic is just an AI company, and it's a very young one, right? And so it's not like its relationships with its clients are super deep, and even if it loses a small number of clients who are just scared away by all this noise coming out of the White House, you know, that could meaningfully set its AI efforts back. And I'll just note something that Dario Amodei said in a podcast, I think this was

the one with Dwarkesh Patel a few weeks ago or last month. You know, he said, I'm very bullish on AI, we're going to get general intelligence, we're going to do all the things, but also, if something gets screwed up for 12 months, we could go bankrupt. Which is to say, the margins here are thin, which is weird to think about for a company that has hundreds of billions of dollars in revenue, but every cent of that gets plowed back into compute and training. So from Anthropic's position, what I think is really scary is not that they lose the lawsuit, but that this does enough damage to their enterprise relationships that it, you know, wounds them permanently. I mean, I'm optimistic, I think they'll get a lot of goodwill as well out of this, and the prediction markets seem not to think that Anthropic is going to be severely hurt by this, and, you know, take that for what it's worth. But I think the risk for them is much more business risk than it is fundamentally legal risk. But I'm just

a lawyer, so I focus on the legal risk.

We are going to leave it there. Alan Rozenshtein, thank you for joining us today.

My pleasure.

The Lawfare Podcast is produced by the Lawfare Institute. You can get ad-free versions of this and other Lawfare podcasts by becoming a material supporter of Lawfare at our website, lawfaremedia.org, where you'll also get access to other content available only to our supporters. The podcast is edited by Jen Patja, and our theme music is from Alibi Music. As always, thanks for listening.
