EconTalk

Claude, War, and the State of the Republic (with Dean Ball)


The Department of War wanted to deploy Anthropic's Claude for "all lawful use." What begins as a policy dispute between a tech company and the Department of War quietly unfolds into something far more...

Transcript


(upbeat music)

- Welcome to EconTalk: Conversations for the Curious,

part of the Library of Economics and Liberty.

I'm your host, Russ Roberts of Shalem College in Jerusalem and Stanford University's Hoover Institution. Go to econtalk.org, where you can subscribe, comment on this episode, and find links and information related to today's conversation.

You'll also find archives with every episode we've done going back to 2006. Our email address is [email protected]. We'd love to hear from you. (upbeat music)

- Today is March 12th, 2026, and my guest is Dean Ball. Dean is a Senior Fellow at the Foundation for American Innovation, a Policy Fellow at Fathom, and author of the AI-focused newsletter,

Hyperdimensional, which you can find on Substack.

He works on technological change, institutional evolution, and the future of governance. Prior to this, he served as Senior Policy Advisor for Artificial Intelligence and Emerging Technology at the White House Office

of Science and Technology Policy, where he was the primary author of America's AI Action Plan. Dean, welcome to EconTalk. - Thank you so much for having me, Russ.

Our topic for today is the relationship between private companies working on AI, like Anthropic, which created the LLM, the large language model, known as Claude, and the Department of War.

In particular, we're gonna talk about the recent clash between the two over what will govern or constrain Claude's use by the military, which created, whatever you wanna call it, a brouhaha, a dust-up, or a very serious constitutional issue

about the interaction between private entities and the federal government.

And that's what we're going to talk about today.

Our conversation is based on a superb article you wrote on your Substack, Hyperdimensional, which we will link to. That article was simply called Clawed, C-L-A-W-E-D, a clever title. So let's start with what happened,

what was the nature of this conflict, and what are some of the issues that are involved? - So I think to understand this conflict in full, you need to go back about 18 months to the tail end of the Biden administration.

In the summer of 2024, the Department of Defense, now the Department of War, approaches Anthropic, and they agree to a contract for the use of the large language model Claude in classified contexts.

That's distinct from the unclassified uses, right? So the Department of Defense and many other government agencies have access to LLMs for all kinds of mundane uses, right? Contract review and procurement, navigating HR rules,

and government has lots and lots of complex internal rules

that just affect the agency. And so you need an LLM to navigate that, right? Things like that. This is different. This is like intelligence analysis, potentially targeting in active combat zones,

selecting, or at least recommending, targets for human reviewers, things of that sort. So that starts in the summer of 2024, and in that contract, the Biden administration agreed to usage restrictions, a wide variety of usage restrictions,

as I understand it. But two in particular were on domestic mass surveillance and the use of AI in autonomous lethal weapons, autonomous lethal weapons being defined as weapons

that can autonomously basically identify a target,

track it and kill it with no human intervention. So this would be machines killing humans on human instructions, but without human oversight. And so those two things were disallowed in this contract. The Department of Defense agreed to that.

In the summer of 2025, during the Trump administration, the Department of Defense, still not yet called the Department of War at that time, the Trump Department of Defense expanded this contract by a significant amount. This was publicly announced.

And when they did that, it was up to a $200 million contract with Anthropic. And when they did that, they renewed with a very similar contract, and it did have the same usage restrictions

on domestic mass surveillance and autonomous lethal weapons. Then we get into the fall of 2025, and as I understand it, a Department of War official named Emil Michael is confirmed by the Senate. He had not been confirmed when this contract was renewed

in the summer of 2025. He's confirmed in the fall. He comes in, he reviews the contract, he sees these usage restrictions, and decides

that the Department of War cannot live with these restrictions

and says we have to have all lawful use only.

So he approaches Anthropic, and it's worth noting, Claude is the only LLM that is available to be used on classified systems. He approaches Anthropic, says we need to renegotiate for all lawful use. Anthropic agrees to drop

many of their usage restrictions, but not those two. That ends up being a red line for Anthropic. The Department of War then says, if you don't... and this goes on for months. And eventually this escalates to the point,

and I think there's probably a lot of personal conflict,

and a lot of back-and-forth drama here that's mostly private. But we eventually get to the point where the Department of War says, if you don't agree to drop these red lines and allow us to use AI

for all lawful uses, then we will designate

your company, Anthropic, a supply chain risk, which will mean that all of your Department of War contracts are canceled, but, more importantly, so are all of your contracts with any Department of War contractors, right?

So, for example, Microsoft is a Department of War contractor, and they wouldn't be able to use Anthropic's AI services in the fulfillment of contracts that they do for the Department of War. And that gets announced... at this point, about two weeks ago

is when that initially gets threatened, and then the actual designation came down something like a week ago. The timeline is now fuzzy for me because it's been a very busy couple of weeks.

And now we're essentially in court. Anthropic has sued the government in the Ninth District, or rather the Northern District of California, my apologies. And that's kind of where we are.

- Just to clarify one important legal-slash-verbal issue here: many Americans would not be comfortable with the Department of War doing mass surveillance. There might be situations where it was acceptable. Under some definitions of mass surveillance, the federal government would have to get a court order to do certain kinds of surveillance. What the Department of War was asking for, if I understand it correctly, is mass surveillance that's legal.

They wanted, quote, "all legal use." And that could include mass surveillance as defined by people in everyday language. It could include autonomous lethal weapons that have been approved in some legal fashion,

but Anthropic wanted to draw, it seems to me, a verbal distinction there. They wanted the freedom in their contract to say, this is a use of our technology that we don't approve of, even if it's legal.

Is that a correct summary of their position? - That is correct, yes. And so I think specifically, when it comes to domestic mass surveillance,

I think that's the complex sticking point here.

So just as an example, there are a very large number of commercially available datasets that would include information on Americans that could be private or sensitive,

but that are commercially available. So things like smartphone location data. For example,

you might download a third-party weather app to your phone.

A lot of times, the weather app needs to know your location all the time to give you the weather wherever you happen to be physically in the world. And so one of the ways these weather apps make money is users turn on location,

and then they have a location tracker and they sell the location data. This is very common, right? And so there's tons of things like that. There is obviously also like commercial satellite data

that you can buy. There's web usage data. And not only can you buy these individual datasets, you can combine them in all sorts of ways to generate quite rich insights on individual people.

And this has been true for a long time, right? This is sort of the era of web-scale data, right? The binding constraint on the use of this data, though, is simply that it's time-intensive

to actually analyze it for any individual person.

So you have to do this only for, like, high-value targets.

It's not illegal. In many domains of national security law, what I've just described is not illegal to do. It's not considered surveillance. If it's purchased, if it's commercially available data,

it's not considered surveillance. So once you have advanced AI systems,

which can scale human-expert-like attention

infinitely, essentially,

it is all of a sudden, as though the intelligence community

has instead of thousands of analysts, millions,

tens of millions of analysts. And so you have a workforce of analysts larger than the government itself, larger than the human workforce of the government itself, I should say.

And Anthropic's position is essentially, and I agree with them here, that the law is not sufficient. The law has not been updated for this reality, because this is the reality only of the last few years.

And the law is not updated for it. And so, yes, domestic mass surveillance as a legal term of art does not correspond with what you and I might think of as the vernacular definition of the term.

- Okay, so let's now turn to what's at stake here.

And again, we're taping this in mid-March of 2026.

It'll come out in about a month or so. By that time, you know, maybe all humans will be eliminated by AI, who knows. So, listeners, be aware that this is a rare EconTalk conversation that's fairly timely, and things could change

by the time this airs, so keep that in mind as to when it was recorded. So, what's at stake here? You had a very strong reaction to this. There's a little footnote, by the way,

we should just mention: after this disagreement between Anthropic and the Department of War, the Department of War, if I understand correctly, made an agreement with OpenAI with very similar terms, without the constraints.

Is that correct? - Yeah, so there's at least an agreement in principle, it seems, for OpenAI's models to be used in classified settings, that I would say doesn't contain the same red-line protections

that Anthropic sought from the government. OpenAI is essentially hanging its hat on the notion of technical safeguards. So, instead of putting these safeguards into the contract, their view is, we can train a model

and build a system, and if we control the deployment of the system to the Department of War, then that system could, for example, reason in real time about whether what it's being asked to do

is domestic mass surveillance, and say no to the government. - Okay. - That would be the idea. - Well, we'll see. So, why did you find this alarming,

basically these actions of the Department of War,

why? - Well, a number of reasons.

I think the first is the nature of the punishment.

One thing that I think is worth being clear about is this whole notion of "all lawful use." I've talked to defense procurement and procurement-law experts, and this is an abnormal notion in contracting, right?

It's sort of question-begging, right? Maybe that's the vernacular as opposed to the literal sense of that term, but it's like, well, what is lawful? What does lawful mean?

Who decides? And in this case it's, well, the Trump administration is saying, we decide what lawful is, and we'll do it until a court stops us or someone stops us.

And so it's a somewhat strange term of art. I get the principle, the principle sounds very intuitive, and I'm actually just willing to concede, for the purposes at least of this debate, that it's perfectly reasonable to say

we want all lawful use. I actually think it's kind of complicated and strange to say that, but there's a reason that, like, a contract for a missile does not say you can use the missile for all lawful use. That's not what it says.

And the Department of War's position here is they're sort of pretending like that is what the contract should look like, but it's really not. So, setting that aside, the bigger issue here for me is the nature of both the threatened

and realized punishments that have been doled out on Anthropic.

So, first of all, Secretary of War Pete Hegseth threatened

to issue regulations that would make it such

that no Department of War contractor

could do any business with Anthropic, which is very different from saying no Department of War contractor

can use Anthropic in the fulfillment of Department of War contracts, right?

Two very different things. One is profoundly broader than the other. So, he threatened any commercial relations, and what they actually followed through with, in terms of the regulation that's been issued so far,

is just barring Department of War contractors from using Claude in their fulfillment of Department of War contracts. They can still use Claude for other things. - That's the supply chain risk.

- Yes, this is the supply chain risk designation. So, to be clear, Microsoft in Washington State,

and its offices can use Claude all they want,

except when they're working on a particular contract with the Department of Defense. - The Department of War. - Yes. It's a little bit complicated because, you know,

one thing that would be subject to a Department of War contract would be, like, Microsoft Windows, right?

They buy lots of computers that run Windows,

they buy lots of computers that run Microsoft Word. - It's gray, yeah, it's kind of gray. - Yeah. And, I mean, one way to think about this too: even if it is the more narrow definition... actually, Microsoft is a good example.

Let's say in the early 1990s, the Department of Defense had issued a supply chain risk designation against Microsoft for Microsoft Windows and said we won't use it, and none of our contractors can use it

in their fulfillment of Department of Defense contracts. One wonders, would Microsoft be the sort of world-bestriding company that it is today? I don't know. So we are talking about something, even in this narrow usage of the regulatory authority,

we're talking about a government intervention in an emerging technology that has the potential to really, like, radically reshape the trajectory of this industry and one company within it. - As background, I don't really wanna go into this

'cause it's not that interesting, but it should be mentioned that people have speculated that Anthropic has an allegedly more safety-oriented culture in its development of AI, and possibly a training process

that people have said is more, I hate to use the word, "woke" than the other AI companies, and that there's something else going on here behind the scenes that has nothing to do with these red lines. And you can comment on that if you want,

but we should just mention that.

- Well, yeah, I think that is worth mentioning,

but I'll just say, stepping back a little, this supply chain risk designation typically is only used against companies from foreign adversaries. This is about, like, adversary manipulation of American military

systems. So it's really treating Anthropic like an enemy of the state, essentially. - And the broader designation, which would have meant

that any company that does anything with the Department of War can't use it at all, anywhere, would be kind of like designating a terrorist organization, or, as you say, a foreign enemy that you were embargoing or putting some kind of sanctions on.

- It's the equivalent of, it would have been the equivalent of, sanctions. And one other thing that I think is worth noting here is that this is clearly Act One, Scene One. If the administration decides

that they wanna bring the entire federal regulatory apparatus to bear against Anthropic, I imagine they will. And I also think, by the way, this doesn't have to be restricted to formal, legible regulatory action, right?

This can be jawboning. In fact, Anthropic has alleged in their complaint against the government, they have alleged already, that government officials are calling Anthropic customers

and encouraging them to cease doing business with Anthropic. So that is jawboning; that is soft pressure that's very hard to sue about. So all this is, I mean, if I were to summarize it

in just a sentence, I would say the government is saying here that if you don't do business on terms we unilaterally set, we'll set out to destroy your company, which is a kind of usurpation of private property.

And even more to your point, Russ, about some of the politics:

basically, every time senior Trump administration officials

have invoked Anthropic and talked about

the supply chain risk designation, they have inevitably mentioned that Anthropic is liberal, that they're supposedly woke.

I think that's not exactly true, actually,

but that they're supposedly woke and they don't share the Trump administration's political values. That part certainly is true. Anthropic is run by people who donate to Democrats. A lot of AI companies are, it's worth noting.

And if that's the case, if that really is the motivation, then this is also a form of political interference, which would be, in addition to the usurpation of private property, a pretty serious

abridgment of First Amendment rights.

- Yes. I think the question is, you know, you framed it in a particular way. It could be framed a different way. It could be framed as, how can we allow a private company to interfere with the security of the citizens

of the United States? I mean, the Department of War is responsible for keeping Americans safe, the argument would go. And if we need to do certain things,

we the government, then a particular private company shouldn't be able to dictate the national-security scope of the actions of the Department of War. That would be the other side.

We'll come to that, but before we do, I want to restate and make clear what you just said.

You're basically saying that the Trump administration

forget this stuff about usurpation of private property and First Amendment rights. That sounds nice. But let's make it starker. Do we really want the federal government

punishing and rewarding particular companies for any reason? In this case, it might be political antagonism, which would be particularly horrific. But in general, in a free-market, so-called capitalist

system, how do you draw the line between private companies and government power?

And that is really what's at stake here, I think.

Yes. And one thing that I think should be really clearly said here, and this is not just true of American companies, it's true internationally:

it's very hard to do business with large Chinese tech companies, particularly in things like information technology. There's a reason that Chinese companies don't make the operating systems

that define computers all over the world. There are a lot of reasons, but one of them is that everyone knows that Chinese technology companies are assets of the military, and are viewed that way by the government.

And that's not the case in the U.S. And that has aided American companies in doing business abroad, because there is trust.

One of the things I actually used to always say

when I was in government, to foreign governments who maybe had some concerns about doing business with America, right? Oh, you're an unreliable business partner. And I'd say, look, yeah, I can't deny to you

that the government changes every four years here in America, and there are these wild swings in different directions. I can't deny that to you. But the thing is, don't think of yourself as doing business with the U.S. government.

Think of yourself as doing business with Microsoft, which is way more stable and has totally legible incentives. The problem is that when you do things like this, you are eroding that distinction between public and private,

which gives people faith in Microsoft.

Microsoft has a higher credit rating than the U.S. government, right?

It gives people faith in the institution of Microsoft that is separate and apart from faith in the institutions of the federal government. You erode that, and all of a sudden everything becomes political, and that's a subsuming mentality

that I think is quite toxic. - That's interesting, and it's not irrelevant, but it seems to me much more important that, as you say, we're in the very earliest days of this extraordinary technology,

and the government's picking winners and losers, not based on who has the best technology, and without any particular constraints, not constitutional constraints. It could be political, I don't know, who knows what's really in the hearts of human beings,

but it could be political. And if it's not political, it's arbitrary; it could be corrupt, it could be personal; there are thousands of motivations. And in general, we would want government to not be beholden

to those kinds of motives and to leave private companies

to do what they do best.

Having said that, and I'll let you respond to that, too, if you want,

but this is a unique technology, on the surface at least. It is probably gonna revolutionize the world. We don't know for sure; it's certainly revolutionized a few industries already in the last year. And we're kind of worried,

many people are, about our ability to keep a lead in this technology relative to our potential enemies abroad. So there's a national security issue here that works in the opposite direction, which is, we, Americans, want

Anthropic, OpenAI, Google, the three big leaders right now, there may be others coming down the road, to be able to be at the forefront of this. And if we're gonna punish them by saying, we don't like you, you didn't play ball with us,

we think this is really important and you didn't cooperate, you're gonna hamper the competition that's producing this extraordinary set of technologies.

- Well, first of all, I think it's worth noting,

yes, there's a picking of winners and losers here, and it is explicitly not merit-based, because Secretary Hegseth has said that the reason we use Claude, I'm paraphrasing him here, is because it's the best,

and the reason that this fight is so important to them, he said, is because it's the best. And yet at the same time, his regulatory actions are trying to drive the company out of business,

or at least hurt them. And yeah, it's also worth observing here that this is an incredibly capital-intensive industry, and all of this regulatory risk is making it much harder for Anthropic,

in particular, and probably the industry in general, to raise the capital that they need. And so, yeah, I mean, you are diminishing America's ability to maintain its lead in this technology,

right at a critical time.

And not to mention the fact that, by all accounts, Claude, even in its still relatively nascent form, is already exceptionally useful for certain kinds of military operations.

And so, I think it's unambiguous to say that if Claude disappeared from military systems tomorrow, American national security would be weaker. - So what's the other side of this argument? Can you steelman the other side,

the people who think that Anthropic was out of line?

So here's the other side, I'm not gonna give the argument, I'll let you give the argument, 'cause you know it better than I do. Anthropic's out of line here.

This is a national security issue. They should have deferred; they should have acceded to this contractual demand. They should have said, of course you can use it for everything that's legal,

and we have our own feelings about surveillance and autonomous weapons, but we have to trust our government to do what's legal. So as long as it's legal, sure, go ahead. And how dare they, how dare they hamstring

the national security interests of the United States because they have a different view of what's legal, perhaps? What's the argument there? - I think the argument is that,

yeah, Anthropic is essentially using its private power to set what amounts to public policy, you know, unilaterally.

And there is some truth to that, I think.

I don't think that's crazy. And my own view is that, look, on one level, we look at this now, and it feels really restrictive. At the same time, the government purchases software,

including software that's used

in really important critical applications,

purchases software on commercial terms all the time. And commercial terms of service are basically the same ones that you purchase under, right? And commercial terms of service often have usage restrictions.

Government software contracts have all kinds of usage restrictions. - If you don't like it, don't buy it. That would be the argument. - That's what they'd say.

When I complain about some usage restriction on some product, you can't take the back off, you void your warranty, whatever it is. You just say, well, if you don't like that, don't buy it. Buy something else.

- Yeah, right. And AI is in fact a competitive market. It's true that Claude is the only model on classified systems right now, but that's not a fact of physics, right?

That can change. But, I mean, I think to make their argument for them, it would be, no, it doesn't matter about competition.

A private party can't do public policy

through contracting. - Yeah.

- And, you know, and it's just that simple.

And also, you know, there are some allegations that the government has made that Anthropic has done things like threatening to remove Claude,

like basically to pull Claude services during active military

operations, if Anthropic doesn't like what the government is doing. I must be honest with you that I have some real questions about the veracity of those claims, but at the end of the day,

I will say, it doesn't sound like a thing that you would say to the government. It doesn't sound true, but it's what the government claims. I'll be interested to see if they claim these things under oath. - Yeah, we'll see.

- That's the ultimate test: do the DOJ lawyers claim it under oath? - So, what's fascinating about this: it could be that in a different world, a more mundane world,

they're using Claude, as you say,

as we were saying in the beginning, maybe to streamline their HR, to make their office work a little more efficiently. And this could have come up. They could be unhappy about the way that works

and they could have complained and they could have tried to redo their contract. They could have threatened them. There's a lot of things the government can do if they want.

And we'll talk in a minute about the other constraints besides what they want. But this is a very complicated piece of technology, because it has important military applications and an immense number of non-military applications.

Some people have likened it to a nuclear weapon. They've said, if a private company developed a nuclear weapon and sold it to the government because it was better than the nuclear weapon

the government had... it's a sort of far-fetched but useful story, I think.

Certainly they would not be free to withhold the weapons they had because the company felt that the casus belli, whatever was the cause of the war that was generating the use of the weapon, was something they didn't agree with.

And that's a dramatic way to make your point about a private company doing public policy. So is that a legitimate analogy in this situation? - Well, I think that the contractual analogy actually is fair. And in fact, you can imagine even a version of this:

you can imagine Anthropic having a contractual term that says, we are only comfortable with our models being used in wars declared by Congress, or something, right? - Exactly. And of course, there's a long history of America engaging

in basically wars that aren't technically wars.

So I think the nuclear weapons to AI analogy is actually quite poor for reasons that I would be happy to explain, but that's not actually your point here. Your point is more about this contractual term. And I think the government has a very fair point here.

My observation is twofold. You can make that point without trying to destroy Anthropic's business, number one. Number two, on the Anthropic side of things, if these protections matter so much

to the leadership of Anthropic, if they matter so much that they're willing to hold these red lines against a government that is threatening to basically destroy

their business, I think if they're that important,

then you should have just said, we're not selling you anything until there's a law. And they should have said that in 2024. In fact, if they were in such cahoots with the Biden administration and the Democrats,

they should have said it in the summer of 2024. They should have said, no, we're not going to do this until Congress passes a law about domestic surveillance and autonomous lethal weapons, and we want those protections written in statute.

- I just want to make an observation here. I don't know how important it is, but the United States is kind of weird about this generally. It's weird in healthcare, right? In healthcare, we have people who sometimes claim

we have a free-market system in healthcare. And what they mean by that is you can be a doctor if you want and have a private practice. We don't have a free-market system in healthcare. We have an incredible government role

in a healthcare market that is not anything like a free market. There's control of the number of doctors

through accreditation of medical schools and licensing of physicians.

There's incredible subsidies through Medicare

and Medicaid that basically determine

what the prices are; they're not free-market prices. So people get confused, because the United States system is very different. Because of our culture and our heritage as a sort of free-market country, we allow certain private activities to take place

that give the illusion of a private market when it's not one at all. As opposed to say, the National Health Service in Great Britain or the Canadian healthcare system where doctors generally are employees of the government.

Now, we do the same thing in defense, right? We have private defense, we have public government defense activity, like the Los Alamos Project. That was not a private company taking venture capital money

to develop a nuclear weapon to fight World War II. That was a government project. But there are many, many, many private companies that develop things for the government. They're nominally private,

but their business is so dominated by federal contracting that they're this weird hybrid, like the healthcare market. So a company like Boeing or McDonnell Douglas: they are private, they have private employees.

They're not federal employees, but they have this weird relationship with the federal government. They are dependent on federal contracting in a way that is different from a foreign supplier

in an effectively nationalized industry. So here we have this technology that is not a military technology on the surface. It's a general technology,

but it has this very strong and powerful military potential.

And so what we're seeing to some extent is the unusual nature of a company that is clearly private, but it has a very important role to play in public sector activity in particular national security. And if it were only good for that,

I think we'd be having a very different conversation.

Part of the complication is this: it's good for seemingly everything. - So your question gets, I think, to one of the most interesting dynamics that we're going to face in the next decade,

two decades, maybe more, which is: what is the relationship between this thing we know of today as the frontier lab, which is the AI companies, and the U.S. government, the federal government?

And it's an incredibly complicated question because, number one, there are national security implications, right? These technologies can be used for object-level dangerous things, right? They can be used to engage in autonomous cyber attacks.

So in other words, I don't need to have a military arsenal, or an intelligence gathering apparatus, to make use of these models. Anyone can launch a cyber attack, right?

So there are these things, right? There are people who talk about things like bioweapons and whatnot. There are all sorts of catastrophic potential dangers, misuses, malicious uses of the technology.

Obviously, there's a government role in the sort of mitigation of those things. Well, maybe not obviously,

but I think that there is some government role

in the mitigation of those things. But it's also an incredibly useful technology for national security, like for governments, you know, for militaries, specifically and uniquely.

And then it's also a technology that I think will be a profound part of how all of us exercise our individual liberty and express ourselves in the future. And even today, it will be hugely important,

sort of foundational tool in the acquisition of knowledge,

which is a First Amendment right in and of itself,

but also in self-expression for many people, I think. And then on top of all that, I think we're dealing with a technology that, like the printing press, may well be so foundational to sort of the capability of organizations

and institutions that it actually changes sort of the institutional complex that defines the technocratic nation state, such that what we currently think of as the government

will actually change in important ways.

And so in that sense, you know, you might think

that the technology, the frontier labs are developing

is in some ways a challenge to the institutional status quo in which sort of technocratic regulators

are in charge of large swaths of the economy, basically,

that that in and of itself might be challenged in various ways. And so it's all of these things all at the same time. And so, you know, I can't say that I know exactly what the answers are going to be here, because indeed, I approach these issues

with a kind of classical liberal frame. But I'm also aware that the very notion of classical liberalism, some people would argue, is already anachronistic. And certainly you could say that, if you think about the future,

maybe all of our political concepts, all of our political-theoretic concepts, are going to be somewhat outdated, because something new, some new type of institutional complex beyond the technocratic nation state,

is going to emerge. And new sorts of political relationships will undergird that. And so I think classical liberalism is a good starting point.

And all I can say is, I changed my career from what I was doing before to be writing about this

because basically this question in particular

is one that I find infinitely fascinating and extremely important. I don't have anything like all the answers. But I do think that, you know, this is,

this is going to keep coming back to us, I think, many times. - Now, I think the point you're making, as I say, highlights that government regulation historically is about either restraining the power of the private sector

or enhancing it artificially through what economists call rent-seeking,

If you want to take a less charitable motive

for government regulation. These two things are not mutually exclusive. They're often a little of both, and that's much of what government does. But that's the way it works.

There's a political process. Government regulates some things, restricts some things; sometimes that benefits the public at large, sometimes it benefits individual players on the corporate side; that's a better way to say it.

And we're in a brand new, brave new world right now, where the idea of what ideal regulation is, and what is the right role for the federal government in this nascent industry, is unclear. Like you, I start with a classical liberal framework,

but it's not exactly clear how to apply it here. And you can hear that in some of our back and forth so far, which is, you know, what does it mean exactly? It's unusual. It's not the printing press.

It's not electricity. It's not the steam engine. It is something that might underlie a total transformation of work and play, in which case, government probably isn't prepared for that.

I know most of us aren't either. And so the question of what should be the appropriate role in this brave new world for the government

is up for a very crucial conversation.

And what I hear from you is that you want to be a part of that conversation. And I applaud you for it. And the other thing I hear from you is that the heavy-handed approach the Department of War has taken in this early development

of what is the appropriate relationship between the federal government and what is right now the private sector does not seem to be ideal or consistent with traditional American values of private property, freedom of expression,

and I would also say responsibility and incentives. And whatever restrains this technology, it probably shouldn't be the whims of a particular person in the Department of War.

That's where I would put it. - Yes, I think that's right. And the thing here that's hard for people,

I think, is this notion of aligned superintelligence, right?

That we're going to make something that is smarter, vastly smarter, than the best human experts at everything, right? At every cognitive task. And I don't know if that's actually what we're going to build, exactly.

I don't know if that's quite the right way of thinking about it. But grant for a moment that it will be

of foundational importance to everything

that an organization like the department of war does,

or a very large number of the activities that they engage in. And also that it may be capable of acting in the world, in fact, definitely: in order to be what the companies are trying to build, it will have to be able to act in the world

kind of on its own. It's not a purely loyal agent that does whatever you say. It will have to be able to make decisions. Again, anthropomorphizing language is complicated here, but we're taking our hands off the wheel

to a certain extent. And so I guess what I would say is, imagine a world in which we build something that is smarter than all the employees of the Department of War. And when we ask, what about domestic mass surveillance?

It's like, well, what will it do and what will it not do?

And the answer is well, the machine will decide, right?

That's obviously a caricatured world. I don't think it will be that simple. But probably that element of the machine deciding, like truly deciding something: that's probably something that a lot of people have not

emotionally and intellectually factored into their models of the future, and you probably ought to, at this point. - Yeah, let me just say one thing about that. And then I want to segue into the deeper questions that you raise at the beginning and end of your piece.

That statement, it'll be smarter than any employee of the Department of War, is a somewhat misleading statement, because many of the things we care deeply about are not a question of cognition. And I know that's not fashionable to say.

So let me try to make clear what I mean. I can imagine the Secretary of the Department of War, late at night, frustrated that this company has failed to do what he wants. He turns to Claude and says, you know, Claude,

this really annoys me, what can I do to get my way?

How can I get Anthropic to bend to my will? And Claude dutifully would say, perhaps: oh, you could threaten them with supply chain risk. You could even do more than that, the designation of supply chain risk.

You could make them essentially corporation non grata with anybody who deals with the Department of Defense. And it could come up with some things that the Secretary can't think of. And that's the sense in which its cognition

is spectacularly great. But what it cannot do,

and I believe will never be able to do,

and I even think it's meaningless to say it this way: it will never be able to give the Secretary of the Department of War advice on whether it's the right thing to do. It's not a meaningful question. There's no answer to that question.

There's no-- it's not a question of coding. It's not a question of how many calculations you make per second. It's not even a question of how many philosophers you've read in the history of your life.

It's not that kind of question. And people, I think, assume that all questions will ultimately be questions you can answer. And I believe that is not true. I believe there are no solutions, only trade-offs.

And that's from the world of trade-offs. That's not something a machine can decide. It can try. It can give us some sort of utilitarian calculation if you're a utilitarian; I'm not.

So this idea that in theory we would-- so I think one of the biggest risks of AI is people thinking it's good at answering the wrong kind of question, and using it-- you'll still use it. It will give you an answer.

If you ask it, should I do this? Well, unless it's been trained to say no,

it will probably give you advice about whether you should do it.

I've already done that with some decisions I'm making here at the college. It states an opinion. I've asked it why it thinks that, why it's justified. But that's an illusion.

And I don't worry about it making the wrong decision. I worry about people assuming that whatever it says is the right decision, and giving it questions it is not capable of answering.

- I agree with you in part and disagree in other areas. So, like the other day, actually, I was using GPT 5.4, the newest model from OpenAI. And I was asking it about a very complex-- a private issue, but related to some of the things

we're talking about. It's a very complex interpersonal and professional thing I'm dealing with.

I was kind of like, OK, here's

what I'm thinking about saying in this situation,

like, what do you think? And it responded to me. And it actually said what I should have said. It was like, no, you shouldn't say that. You should say this.

And I was like, wow, that's really-- because it knows enough about me to know what I want to sound like, right? It knows what I sound like at my best, in some sense.

And so what I do think, though, what I think is--

so I'm not sure that I agree with you that it won't be able to reason about trade-offs and moral and ethical things. In fact, I think Claude is a better-- I'd be willing to bet you that if I had a moral and ethical question

for Secretary Hegseth versus Claude Opus 4.6,

I bet you nine times out of ten I would prefer Claude's answer

to Hegseth's. - No comment. Go ahead, carry on. - But that's--

- Other than to say, that probably tells you more about what you think of Pete than what you think of Claude. But go ahead. - It's like, right, well, that's interesting, because that's not true of you.

Right? You know, I don't think so. I don't think so. I bet you sometimes I'd like Claude more than what you would say, but I bet you not every time.

And so what I do think is that like-- I agree with you that there's a risk to just assuming the AI is right about everything, because it's actually not, especially in things like this. But also where I think the value of--

where I think the human touch is going is really going to be on these things that are definitely based on relationships, based on things like trust and integrity and charisma and persuasion, which--

and politics is so much that. Right, it's like, the notion of automating politics doesn't really make sense to me. It seems like a category error. And the reason for that is not that AI can't do a better speech,

that it can't perform the--

I think AI can probably perform many of the sort of speech

acts of politics better than the best-- and I'm willing to submit that one day it'll be better than the best at those things. And even, like, strategy and stuff: better than Otto von Bismarck; better at rhetoric

than Abraham Lincoln, better at writing rhetoric at least than Abraham Lincoln. But there's this issue that politics is an inherently relational act. And that seems much harder to automate.

And so that's my guess as to where we're going. That's where I think the human touch is going to be. That's a super different world than the one we currently live in. And I don't think our education system is prepared. Maybe yours.

But the U.S. education system is not preparing students to live in that world. That's a very different world than the one we're used to. - Yeah, fair enough. I want to close--

and maybe we should have opened with this. I hope listeners have found this interesting. I have. But to me, what we're going to talk about next is in some ways the most interesting part of your piece. It's also the least specific.

So I'm just-- I've saved it for last. You start the piece with a discussion of your father. Talk about why you did that and why that's relevant for this moment in American history.

So I have come to a quite biological conception of institutions.

I think institutions are made up of human beings

and I think that nature is filled with fractals. And so I think that while institutions aren't exactly like human beings, there are ways of observing and thinking about living things that can also be usefully and productively applied to institutions,

both as an analytic matter and for purposes of the poetry of it all. And I don't think there's that much of a distinction between those two things, actually. But like-- so I opened up the piece

basically describing the experience of sitting

at my father's deathbed about 11 years ago. I was 22 years old. I had just started my career. And it was no secret. We were in hospice.

It was me and my mother and a few other family members. And we knew that we were watching my father die. And I remember reflecting at the time, and I've reflected, of course, on that experience many times since, that death is this process.

And that in some ways, my father had become sick. He had gotten heart surgery that went wrong six months

prior to the date that he died, roughly.

And immediately after that, for those six months,

he was a changed man. Entirely. The life had kind of been sucked out of him. And then it was just this gradual process of him sort of becoming less and less there, in fits and starts.

But he would occasionally come back and have some life in him. And then in the actual process of just watching him die, I realized that, you know, he seemed dead to me well before the machine

declared him dead. And so the machine making this declaration that his heart had stopped, you know, or that the faint signal it was getting from the heart had crossed this point of faintness,

that, you know, the machine made some arbitrary decision,

basically, that he had officially passed over.

That is just one way, I think, of looking at, you know,

where he was in the process of death. And so I was reflecting on that, and reflecting on, you know, why is this experience of writing about Anthropic and the Department of War, why is it so emotional for me?

You know, why was it so frustrating? Why do I feel such a deep melancholy about it? And what I realized is that it is because I just feel as though I've watched, throughout my lifetime, for 20 years,

I've watched a lot of these bedrock principles of our republic get eroded, thing after thing. And it's been the same sort of corrosiveness,

but worse, sequentially, every year, it feels like.

And I suddenly realized, it clicked for me, that that process feels very much like death. I don't know what death feels like, but it felt very much like the experience

of watching my father die. And, you know, and also the fact that like, I think about this a lot privately, but I don't talk about it that much. And the reason I don't talk about it

is that it feels quite painful to talk about. And when my father was going through his six months of dying, we talked about his health a lot, but we didn't talk about sort of this certainty of his death that much and where he was in the process

and all these kinds of things. 'Cause it was too painful, and we all knew the answer. And so that's, yeah, that's kind of why I started that way. I mean, I will say that I wrote that piece in about two hours.

So it just kind of came out of me.

- Well, the reason I think it's so profound:

I'm older than you; I've been watching for longer than you have. And it's been clear to me for a while, and listeners know this, 'cause this show is 20 years old as of next week. And over those 20 years, listeners can hear my optimism

about the American experiment, and sometimes my pessimism; at times, as I've said, we're near a civil war, America is near a civil war. And five years ago, I moved to Israel, and I found myself watching America from afar.

And it changed my perspective. It allowed me to be a little more of an observer and less of a participant, in some dimensions, as an American abroad. And I thought, for a long time now, something's wrong.

In fact, something's wrong in the West. It's not an American problem. It's a Western problem. And what your piece made me realize is that it's possible that this problem is not gonna get better.

That's what's hard to say. That's the melancholy for me.

And I think there's a tremendous blindness

among some Americans that this is a Trump problem. Trump is just the manifestation, the latest manifestation, of a very, very long trend. You could argue it's 80 years old,

that it goes back 90 years, to Roosevelt. You could argue it goes back 60 years, to Lyndon Johnson. But what is that trend? The trend is the end of the Constitution as an effective constraint on government power,

the rise of discretionary action, the destruction of norms that put some things off limits. They're no longer off limits; those norms are gone. And as a result, it's much more: what's expedient?

It's not what's constitutional,

it's not what's principled, it's what can I get away with?

And you could argue that the Department of War threatening a particular company is not that important. It's a petty dispute between egotistical players about their own success and failure. But what I thought you struck at deeply,

and maybe we're overreacting here, but I think not,

is that you don't know what you've got till it's gone. And we thought we had a republic. You know, there's this very famous line from the Constitutional Convention, in, I think, 1787, where someone asks--

I'm going to get this wrong, so forgive me, you all can fix it for me, but I take it somebody asks Benjamin Franklin: Well, what kind of government do we have? And he responds, "A republic, if you can keep it."

And America kept it for a very, very, very long time.

It's had a tremendous run. But the increase in executive power, unconstrained by the Constitution, unconstrained by norms, is a long trend. Trump is just the one most comfortable ignoring the things

that other people used to not ignore. They were all ignoring them to some extent, the last eight presidents or whatever the number is. And I think this whole debate about whether we're heading toward fascism--

I think that's the wrong way to think about it.

I think what we're talking about here is the slow, inevitable erosion of institutions, as we get farther and farther away from our founding and from the principles

that sustained it. And now it's like other places: you know, if you get a good president, it turns out well; if you get a bad one, it doesn't. It used to be it wasn't so important.

All of a sudden, it's really important. And the reason I think your piece is so insightful is that when you're in the middle of it, you don't notice. It's like the frog getting boiled. It's just, it's a little warm in here.

I don't know, now it's a little warmer. But after a few decades, it's like, this water's boiling hot; it used to be cold. And you kind of start to notice. And what you've done, I think, in this piece,

even though it's a small corner, but maybe not, is to point out that the water's been getting hotter for a while. It keeps getting warmer and warmer, and it's an illusion that we can turn it down. We're just going to have to live in a new world.

And I think you're right. And it helps me. I'm sorry about your dad.

It's a very powerful metaphor for thinking about change.

Not so much about death, but about change: it's hard to notice change when you're in the middle of it. It feels like, I don't know, is it really changing? Maybe it's just me, maybe it's this one example, maybe it's this particular Congress that doesn't want to do,

quote, its job. All of a sudden-- this goes back also to things Yuval Levin has said on this program: everybody's performing, they're not doing. What happened to a world where people did what they were obligated to do, what they were responsible for doing, their duty?

And then you think, well, we just need a president to come along who's going to do that. Do you really think that the next president, Republican or Democrat, is going to be any different? It's just going to happen.

I think it's just going to be the same thing. So that's my rant. Yours was beautifully said; listeners can go read your piece, and I'd like you to reprise it now if you want,

but react to what I just said. - Yeah, I mean, no, I think it's very well put, in some ways more precisely than I communicated it. And the way I think about this is, you are definitely right that this is about change and not death.

Because I also talk about the birth of my son briefly in that piece, and how it is similar. And my experience thus far, quite brief still, only several months of being a father, is that I sort of just am watching my son progressively

awaken; he just becomes more and more aware of the world. And nature is like this. Nature is filled with phase transitions. There's a great graphic I saw on social media, on Twitter, the other day, of sort of a heart beginning to beat,

And like what that looks like. And it's all these decentralized cells that begin to activate, and then enough of them activate, and all of a sudden you have a heart beating. But it's not like there's ever one moment where,

you know, it is, and by the way,

I think that change from AI will be like this too, right?

There will be phase transitions. There already have been phase transitions in the progression of AI. And there'll be in the adoption as well. So very much, yes, and part of the point I'm making

is like, yeah, I'm not trying to make a point about fascism,

and I think probably a lot of people

on the left read my piece. And I took pains to say that this wasn't just about Trump, but I'm sure a lot of people, and I knew this would happen.

A lot of people on the left, I think, read my piece

and sort of in self-satisfied fashion said, oh, yes, but everything will be solved when we get Gavin Newsom in there, or whoever, in a few years. And that's very much not my view. My view is that the most charitable thing I could say

about the left would be that they would likely do all the same stuff in a somewhat more gentlemanly, technocratic fashion than the Trump administration, which has a tendency to be really explicit and to stumble into things like this.

But in some sense, I actually applaud the Trump administration for that, because at least it's out in the open, at least we can talk about it with the Trump administration. And the one other point I would make is I spent more time debating whether or not I should publish this piece

in the form that I published it than I did writing it. Because there's a certain aspect of, like, run-on-the-bank dynamics that you don't want to contribute to with things like this, where the reason that republics work

is that we all believe in the common fiction

of the republic, and that's always been true, right?

That's always been true. But I certainly did get pushback from some people, including people that you and I both respect, about the decision to publish it. And one of the things that I heard is, well,

democratic elections are still functioning, right? We still do have elections, and the results of them are observed. My view on that is that that is goalpost-moving. It's better than nothing, but the thing is, it's really easy to observe:

did I go to my polling place and vote, and did the person who won get into power? And so it's very, very hard to erode that particular thing.

And it's interesting to me that even the left has chosen to focus so much on this issue of the erosion

of democracy per se, because that has always

seemed to me like the thing the Trump administration, or anyone else,

is least likely to mess with, because it's so verifiable. And indeed, the founding fathers, if you told them that the one thing that persisted was the ability of the masses to vote-- - They would be appalled.

- So depressed. They'd be like, that is the worst part of the whole system. - Yeah, I forget who said it, maybe it's just some general bit of humor, but the joke used to be about Mexico

that the same party won every election forever. I forget the name of it. And the claim was that Mexico had a democracy 364 days a year, and the 365th day,

when they didn't have democracy, was election day,

because it was rigged. But the rest of the year, political forces did matter. The people did have influence. Just not on who won the election. That was rigged.

- Yeah. I mean, because it's tyranny of the masses, right? That democracy is just tyranny. The idea that there's an all-powerful, omnipotent executive, and that we

shift wildly between two different omnipotent executives based on a democratic vote, that's not at all what a republic is. And so the fact that elections are being observed is not--

it's cold comfort. - Yeah, it's cold comfort. Before October 7th, here in Israel, there was a massive, incredibly controversial discussion

about the proper role of the Supreme Court here in Israel, and its relationship to the Knesset and the ruling coalition. That's what the judicial reform issue was about here, and it's interesting.

Both sides cast themselves as democratic. The coalition, the Netanyahu reforms, which were going to severely curtail the power of the Supreme Court, were called democratic because the coalition wins the election.

What could be more democratic than that? Which is what we're talking about. The defenders of the Supreme Court's powers said, democracy requires civil rights. And if there's no constraint on the power of the majority,

there'll be nothing left to sustain democracy, because the civil rights will disappear.

That's the same thing that's going to happen

in the United States.

I'm going to predict and let you react to that and take us home.

There's been an enormous increase of power in the executive branch in the United States. The legislative branch is neutered, spayed, pick your verb. They've neutered themselves.

And the only thing that stands in the way of executive power

is the courts. It's a weird thing, because the court is appointed by the president but approved by Congress. So it's tricky. But we've already seen that, with the attempts

by Trump, the Trump administration, to put in place things that some people would say are overreach in terms of power, I'll pick tariffs as the obvious example, and this example that we're talking about right now,

the courts have been very willing to try to restrain

that executive power. So I'm going to predict that that's going to intensify over the next few years. And I would be shocked if the courts did not rule in favor of Anthropic in this case,

simply because they see themselves, and this was true in Israel too, whether they're right or not, they see themselves as a bulwark against that executive discretion and that unconstrained power. Now, when an executive gets into place

that the courts like, it's going to be an even more complicated situation. And to some extent, well, the United States is more complicated than that.

But I think we're going to see, in the West generally,

fights between the courts and the executive branch as to what democracy is going to actually look like in the coming years. - Yes, I think the one functioning branch remains the courts.

And so they are this one lasting check on the sort of unfettered power of the executive. And that exists in a real tension, because the courts can only do so much. At the end of the day, who enforces the court's decisions?

It's the executive. And once you start asking that question, that's sort of my point: once you start asking that question, you're in the law of the jungle. - At that point, sure.

And so I'm hopeful. And part of the reason that I'm a very close observer of the courts, on a wide variety of different issues far beyond just AI and tech-related issues, is because I like to observe this chess match in detail.

One thing that maybe is a note of optimism that I can give is that, if you think about the courts as the last umpire enforcing the rules of the game as written down, the laws that are written down, well, then if you are a smart, long-range actor

who wants to win in court, it's incentive-compatible for you to pretend like those rules of the game actually do govern your actions, because then when you go to court, you'll have a better case to be made.

And so, like, I mean, I'm a big fan of a book called Homo Ludens, "man at play," by a guy named Johan Huizinga. It's sort of an old book, but it's a great book. And it sort of makes this point

that, like, you should model the institutions

of classical liberalism as this kind of grand game. And as long as there's one institution that enforces the rules of the game, then maybe it's incentive compatible for the actors to sort of, like, remain in the game.

But the problem is, like, the court's authority gets eroded

and it's, like, not always clear, even today, it's not always clear that court rulings get observed. And Biden had this problem too. Biden ignored aspects of court rulings and so does Trump.

And so even that is starting to break down a little bit. And we could get into court packing. There's all kinds of threats. Expanding the size of the Supreme Court, that's why I said you can go back 80 years

if you want to, 90 years, to think about this tension. And so I just, you know, I'm definitely, I'm very grateful that the courts exist. But in the end, and this gets into this locus of control thing, to bring us back to the middle of the conversation

about where is the proper locus of control and how should we be thinking of AI as this kind of new institutional technology? Well, one of the problems I have is that like, I'm trying to analyze this and think about

the appropriate locus of control at a moment when I'm also just candidly acknowledging

that our republic is not in very good health.

And so there's a certain extent to which I have trouble

trusting, you know, the unfettered executive

to be the governing institution over AI. I have a lot of trouble with that. In a way that maybe I wouldn't have

if this were 1923, you know, right?

Or, you know, if Calvin Coolidge were president or something, like, maybe we would be in a very different world. But, you know, we're in the world that we're in.

And so I think that that should affect your view,

well, I don't want to speak for you,

it affects my view of the accumulation of private power versus the accumulation of public power, because the thing about private corporations is they don't have the monopoly on legitimate violence. And so maybe we build new checks and balances

in this way somehow. But I think whatever we're doing,

like I suspect that we are in a kind of new founding moment,

which is not novel for this country, but it's, you know, certainly uncharted territory. - My guest today has been Dean Ball. Dean, thanks for being part of EconTalk. - Thank you, Russ.

(upbeat music) - This is EconTalk, part of the Library of Economics and Liberty. For more EconTalk, go to econtalk.org, where you can also comment on today's podcast and find links and readings related to today's conversation.

The sound engineer for EconTalk is Rich Goyette. I'm your host, Russ Roberts. Thanks for listening. Talk to you on Monday. (upbeat music)