The AI Daily Brief: Artificial Intelligence News and Analysis

The standoff between Anthropic and the Pentagon exploded this week when President Trump directed every federal agency to cease using Anthropic's technology after the company refused to remove its...

Transcript


Today on the AI Daily Brief we are discussing a question that is extremely easy to ask

and much more difficult to answer. Who controls AI? The AI Daily Brief is a daily podcast

in video about the most important news and discussions in AI.

All right friends, quick announcements before we dive in. First of all, thank you to today's

sponsors KPMG, Insightwise, AIUC, and Blitzy. To get an ad-free version of the show, go to patreon.com/aidailybrief, or you can subscribe on Apple Podcasts. To learn more about sponsoring the show, send us a note at [email protected]. While you're on aidailybrief.ai, you can also find out about the other projects in the AIDB ecosystem, including Claude Camp and Enterprise Claude, registration for which is going on right now. Basically, if your enterprise wants to learn how to build agents and agent

teams, or just more podcast related stuff like subscribing to the newsletter, which is newly rebooted. Now, if you've been listening this week, you'll know that we had something of a time of it getting back from South America. Door to door ended up being about 55 hours and that didn't include the seven hours that it took me to go drop off the rental car and pick up our old car, which was sitting at the airport parking lot. In any case, because of that, I had to miss Wednesday's show,

not something that I do very lightly, and so as a make-up, I had slated to do an extra show over the weekend on the day that I'm usually off. As it turns out, this was a pretty opportune week to have that slot open because, my goodness, as Ron Burgundy would say: boy, that escalated quickly. I'm referring, of course, to the skirmish-turned-all-out war between Anthropic and the Pentagon that came to a head on Friday night. The TLDR of

what happened is that not only did the Trump administration decide to decline to work with Anthropic, they are attacking them in ways that go far beyond just declining to do business with them. Now, for the necessary background and to get caught up with the story from where we left it, we actually have to go back to Thursday, when Anthropic CEO Dario Amodei released a statement about the dispute. Earlier in the week, you'll remember, Defense Secretary Pete Hegseth had given

Amodei an ultimatum: remove terms-of-use limits by Friday or be blacklisted from the entire military supply chain. Anthropic's red lines were that Claude should not be used for domestic surveillance of Americans or for powering autonomous weapons. Their stated view was that Claude is not reliable enough to power autonomous weaponry, and that AI surveillance is undemocratic and, perhaps more pertinently, has underdeveloped legal safeguards. The White House's position,

meanwhile, was that a technology company should not be dictating how the U.S. government uses that technology, and should be fine accepting terms that allow the U.S. government to use it

for all legal uses. Dario's post from Thursday begins, "I believe deeply in the existential

importance of using AI to defend the United States and other democracies and to defeat our autocratic adversaries." And it is worth noting here, especially if and as this conversation gets caught up in broader partisan talking points, that historically speaking, Anthropic has been more vocal than some of their peers about things like China not having access to advanced technology, whereas some of the other AI companies have been either fine with or actively lobbying

for the ability to sell into China, think specifically around Nvidia and advanced chips. Amodei and Anthropic have been consistent that they think that is a very, very bad idea. Point being, at least based on the history, Anthropic is not a pacifist organization. Now, in the blog post, Amodei continued, "Anthropic understands that the Department of War,

not private companies, makes military decisions. We've never raised objections to particular

military operations, nor attempted to limit use of our technology in an ad hoc manner. However, in a narrow set of cases, we believe AI can undermine rather than defend democratic values. Some uses are also simply outside the bounds of what today's technology can safely and reliably do. Two such use cases have never been included in our contracts with the Department of War, and we believe they should not be included now." He then restates Anthropic's

objections to mass domestic surveillance and fully autonomous weapons. Now, when it comes to those exceptions, he says, "To our knowledge, those two exceptions have not been a barrier to accelerating the adoption and use of our models within our armed forces to date." Then, in one of the spicier sections, he writes, "The Department of War has stated they will only contract with AI companies who accede to any lawful use and remove safeguards

in the cases mentioned above. They have threatened to remove us from their systems if we maintain these safeguards. They have also threatened to designate us as a supply chain risk, a label reserved

for US adversaries that has never before been applied to an American company, and to invoke the Defense

Production Act to force the safeguards' removal. These latter two threats are inherently contradictory:

one label says Claude is a security risk, the other labels Claude as essential to national security."

Regardless, he says, these threats do not change our position; we cannot in good conscience accede to their request. Now, it is very clear that this public statement did not make Anthropic any friends in the White House. Assistant to the Secretary of War for Public Affairs Sean Parnell was diplomatic but clear: "The Department of War has no interest in using AI to conduct mass surveillance of Americans, which is illegal. Nor do we want to use AI to develop

autonomous weapons that operate without human involvement. This narrative is fake and being peddled by leftists in the media. Here's what we are asking: allow the Pentagon to use Anthropic's models for all lawful purposes. This is a simple, common-sense request that will prevent Anthropic

From jeopardizing critical military operations and potentially putting our wa...

We will not let any company dictate the terms regarding how we make operational decisions.

They have until 5:00 pm on Friday to decide; otherwise we will terminate our partnership with

Anthropic and deem them a Supply Chain Risk for the Department of War. Former Uber official and Under Secretary of War for Research and Engineering Emil Michael was not so diplomatic. He wrote, "It's a shame that Dario Amodei is a liar and has a god complex. He wants nothing more than to try to personally control the U.S. military and is okay putting our nation's safety at risk.

The Department of War will always adhere to the law, but will not bend to the whims of any one for-profit

tech company." Now, coming into Friday, it seemed like the court of public opinion was sort of leaning in Anthropic's favor. More than 200 Google and OpenAI staff signed a petition that supported Anthropic's red lines, which you can find at notdefighted.org, and you even saw a bunch of comments like this one on that post from Sean Parnell: "Hi Sean, just FYI, nobody believes this and it comes off as disingenuous. I'm generally a

conservative-leaning voter. I'm also pretty tech-forward. I am wildly against this. Reminder that the entire tech lobby flipped on Biden for the exact same reason in May 2024." So that's where we were heading into Friday morning. Now, outside of the substance of the argument, it was pretty weird to a lot of folks that it was being had so publicly. As quoted by Axios, Senator Thom Tillis said, "Why the hell are we having this discussion in

public? Why isn't this occurring in a boardroom or in the Secretary's office? I mean, this is sophomoric." So that's where we were heading into Friday morning. In the morning, it seemed like at least OpenAI was lining up alongside their AI peers, or at least, as CNBC put it, trying to help de-escalate the situation. Late on Thursday night, in a memo to his team, OpenAI CEO Sam Altman said, "We've long believed that AI should not be used for mass

surveillance or autonomous weapons, and that humans should remain in the loop for high-stakes automated decisions. These are our main red lines." In an interview on Friday morning with CNBC, Altman said, "For all the differences I have with Anthropic, I mostly trust them as a company,

and I think they really do care about safety, and I've been happy that they've been supporting

our warfighters. I'm not sure where this is going to go." And while a lot of folks on social media were excited that Altman seemed to be lining up alongside Anthropic, OpenAI was clearly having conversations with the DOD at the same time. He did say explicitly in that memo that they were exploring whether they could deploy their models in classified environments in a way that, in his words, fit with their principles. That was the state of things until 3:47 in the afternoon

Eastern time, when President Trump took to Truth Social to write, in all caps: The United States

of America will never allow a radical left woke company to dictate how our great military

fights and wins wars. That decision belongs to your Commander in Chief and the tremendous leaders I appoint to run our military. The left-wing nut jobs at Anthropic have made a disastrous mistake trying to strongarm the Department of War and force them to obey their terms of service instead of our Constitution. Their selfishness is putting American lives at risk, our troops in danger, and national security in jeopardy. Therefore, I am directing every federal

agency in the United States government to immediately cease all use of Anthropic's technology.

We don't need it, we don't want it, and we will not do business with them again.

There will be a six-month phase-out period for agencies like the Department of War who are using Anthropic's products at various levels. Anthropic better get their act together and be helpful during this phase-out period, or I will use the full power of the presidency to make them comply, with major civil and criminal consequences to follow. We will decide the fate of our country, not some out-of-control radical left AI company run by people who have no idea what the real

world is all about. Thank you for your attention to this matter. Make America great again. Defense Secretary, or Secretary of War, or whatever the heck you want to call him at this point, Pete Hegseth chimed in: "This week, Anthropic delivered a master class in arrogance and betrayal, as well as a textbook case in how not to do business with the United States government or the

Pentagon. Our position has never wavered and will never waver. The Department of War must have

full, unrestricted access to Anthropic's models for every lawful purpose in defense of the Republic. Instead, Anthropic and its CEO, Dario Amodei, have chosen duplicity. Cloaked in the sanctimonious rhetoric of effective altruism, they have attempted to strongarm the United States military into submission, a cowardly act of corporate virtue signaling that places Silicon Valley ideology above American lives. The terms of service of Anthropic's

effective altruism will never outweigh the safety, the readiness, or the lives of American troops on the battlefield. Their true objective is unmistakable: to seize veto power over the operational decisions of the United States military. That is unacceptable. As President Trump stated on Truth Social, the Commander in Chief and the American people alone will determine the destiny of our armed forces, not unelected tech executives. Anthropic's stance is fundamentally

incompatible with American principles. Their relationship with the United States armed forces and the federal government has therefore been permanently altered. In conjunction with the President's directive for the federal government to cease all use of Anthropic's technology, I am directing the Department of War to designate Anthropic a supply chain risk to national security. Effective immediately, no contractor, supplier, or partner that does business with the United

States military may conduct any commercial activity with Anthropic. Anthropic will continue to provide the Department of War AI services for a period of no more than six months to allow for a seamless transition to a better and more patriotic service. America's warfighters will never be held hostage by the ideological whims of big tech. This decision is final." Immediately, the lawyers jumped in to start figuring out what the heck the implications of all this

were.

One early reaction noted that, per Hegseth's post, no Pentagon contractor or supplier can do business with Anthropic, effective immediately,

which seems absolutely insane. Under 10 USC § 3252, which is almost certainly the authority

Hegseth has to rely on here, there are multiple requirements that the DOW has to fulfill before the SCR declaration becomes effective. They have to complete a risk assessment. They have to make a written determination that declaring Anthropic a supply chain risk is necessary for national security and that there's no less intrusive way to address the risk, and they have to notify Congress. It's possible that the DOW has already done some of that behind the scenes, quick work

if so, but it's hard to believe that they fulfilled, e.g., the congressional notice requirement in the time between 5 pm Eastern and Hegseth tweeting. In all likelihood, it's just not true that the declaration is effective immediately, as Hegseth claims. Prince writes, "To put a finer point on what just happened, Hegseth's post says that no contractor, supplier, or partner that does

business with the United States military may conduct any commercial activity with Anthropic.

Anthropic serves its models through the cloud. Its primary partner is AWS, but it also serves its models through Google Cloud and Azure, and all of Amazon, Microsoft, and Google do business with the US military. If we take Hegseth's post literally, Anthropic should now find

itself unable to serve its models via any of these providers." This is what Dan Primack from

Axios wanted to know as well. He tweeted, "Practically speaking, does this mean Amazon, Nvidia, et cetera can't do any business with DOD? What about Palantir?" Dean Ball, who, to be clear, was integral in writing Trump's policy on AI, wrote, "Nvidia, Amazon, and Google will all have to divest from Anthropic if Hegseth gets his way. This is simply attempted corporate murder. I could not possibly recommend investing in American AI to any investor. I could not possibly

recommend starting an AI company in the United States." A little bit after that, Anthropic dropped a response statement that mostly sought to assure customers that they could just chill for now. They noted that, so far, all of their information is coming from the same source as all of our information, which is social media. Anthropic writes, "We have not yet received direct communication from the Department of War or the White House on the status of our negotiations."

They, of course, promised to challenge any supply chain risk designation in court. The business section was titled "What This Means For Our Customers," in which they write, "Secretary Hegseth has implied this designation would restrict anyone who does business with the military from doing business with Anthropic. Legally, a supply chain risk designation can only extend to the use of Claude as part of Department of War contracts. It cannot affect

how contractors use Claude to serve other customers. In practice, this means if you are an individual customer or hold a commercial contract with Anthropic, your access to Claude through our API, Claude.ai, or any of our products is completely unaffected. If you are a Department of War contractor, this designation, if formally adopted, would only affect your use of Claude on Department of War contract work. Your use for any other purpose is unaffected."

Now, unfortunately, as I'm sure Anthropic knows, and as anyone who has studied either of the Operation Choke Points over the last decade knows, when it comes to governments exerting pressure on private-sector companies to not work with other private-sector companies, all you need is a little push and an implication for those companies to ditch the offending vendor. A few minutes later, and by the way, this is all happening within the span of an hour or two,

Fortune magazine's Sharon Goldman wrote, "Sam Altman told OpenAI employees at an all-hands meeting on Friday afternoon that a potential agreement is emerging with the Department of War to use the startup's AI models and tools, according to a source present at the meeting and a summary of the meeting seen by Fortune." The contract has not yet been signed. According to Goldman, Altman said the government is willing to let OpenAI build their own safety

stack, that is, the layered system of technical, policy, and human controls that sits between a powerful AI model and real-world use, and that if the model refuses to do a task, then the government would not force OpenAI to make it do that task. A few hours later, Sam Altman confirmed that a deal had gotten done. He tweeted, "We reached an agreement with the Department of War to deploy our models in their classified network. In all of our interactions, the DOW displayed a

deep respect for safety and a desire to partner to achieve the best possible outcome. AI safety

and wide distribution of benefits are the core of our mission. Two of our most important safety

principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapons systems. The DOW agrees with these principles, reflects them in law and policy, and we put them into our agreement. We will also build technical safeguards to ensure our models behave as they should, which the DOW also wanted. We will have forward-deployed engineers to help with our models and ensure their safety, and we will deploy on cloud networks

only. We are asking the DOW to offer these same terms to all AI companies, which we think everyone should be willing to accept. We have expressed our strong desire to see things de-escalate away from legal and governmental actions and towards reasonable agreements. We remain committed to serving all of humanity as best we can. The world is a complicated, messy, and sometimes dangerous place."

Agentic AI is powering a $3 trillion productivity revolution, and leaders are hitting a real decision point.

Do you build your own AI agents, buy off the shelf, or borrow by partnering to scale faster?

KPMG's latest thought leadership paper, Agentic AI Untangled: Navigating the Build, Buy, or Borrow Decision, does a great job cutting through the noise. It offers a practical framework to help you choose based on value, risk, and readiness, and shows how to scale agents with the right trust, governance, and orchestration foundations. Don't lock in the wrong model. You can

download the paper right now at www.

As a consultant, responding to proposals can often feel like playing tennis against a wall. You're serving against yourself, trying to guess what the client really wants. That all changes with Insightwise. Now you've got an AI proposals engine that thinks just like your client. It returns to the brief time and time again, picking apart your work, identifying key evaluation criteria and win themes, and making recommendations to ensure you

stand out. Suddenly you're on center court, but this time you've got a secret weapon. Insightwise gets rid of all the time-consuming manual work so you can focus on winning more business more often. Generate reports, build insights from your own data, build competitive advantage, and go to sleep before 2am. When it comes to proposals, you only get one shot. With Insightwise,

make yours an ace. There's a new standard that I think is going to matter a lot for the

enterprise AI agent space. It's called AIUC-1, and it bills itself as the world's first AI agent standard.

It's designed to cover all the core enterprise risks, things like data and privacy, security, safety, reliability, accountability, and societal impact, all verified by a trusted third party. One of the reasons it's on my radar is that ElevenLabs, who you've heard me talk about before and is just an absolute juggernaut right now, just became the first voice agent to be certified against AIUC-1 and is launching a first-of-its-kind insurable AI agent. What that means in practice

is real-time guardrails that block unsafe responses and protect against manipulation, plus a full safety stack. This is the kind of thing that unlocks enterprise adoption. When a company building on 11 labs can point to a third party certification and say our agents are secure, safe and verified, that changes the conversation. Go to AIUC.com to learn about

the world's first standard for AI agents. If you're looking to adopt an agentic SDLC,

Blitzy is the key to unlocking unmatched engineering velocity. Blitzy's differentiation starts with

infinite code context. Thousands of specialized agents ingest millions of lines of your code in a single pass, mapping every dependency. With a complete contextual understanding of your code base, enterprises leverage Blitzy at the beginning of every sprint to deliver over 80% of the work autonomously: enterprise-grade, end-to-end tested code that leverages your existing services, components, and standards. This isn't AI autocomplete. This is spec- and test-driven development at the speed of

compute. Schedule a technical deep dive with our AI experts at Blitzy.com, that's B-L-I-T-Z-Y dot com. Now, we'll come back to the reactions to that, but first let's try to summarize all the different strands of conversation that I saw going on all over the internet. Slightly reductively, last night I summarized the positions that I was seeing as: one, Anthropic is right, there should be red lines we don't cross. Two, Anthropic might have a reasonable moral take, but the government

can't be constrained by it. Three, it doesn't matter whether Anthropic is right or not, a private company shouldn't set government policy. Four, not only should a private company

not set government policy, Anthropic's moral stance is wrong, too. Five, whatever, I think

if the U.S. government doesn't want to work with a vendor, they should just not work with the vendor, but maybe don't try to kill them. And six, punish the infidels lest the other uppity AI CEOs get ideas. As you might imagine, there were comparatively fewer of that last one, but they were in fact there. One interesting example of the Anthropic-is-right camp came from Erik Voorhees. Erik is the founder of Venice.AI and has long been an actual libertarian,

willing to call out policies he didn't like on the left and the right. He tweeted, "Anthropic is definitely woke and lefty, but their refusal to permit Washington to use their tech to carry out warrantless mass surveillance of Americans is eminently based." Time pointed out that the language of left and right kind of didn't belong here. They wrote, "None of us voted for dystopian AI spyware surveilling us in a way that makes the Patriot Act look

quaint. None of us voted for fully autonomous weapons on robots. I understand wanting these things in the AI arms race with China, but Trump's actual comments are shocking. It is not left-wing to want less domestic surveillance and fewer fully autonomous murder bots. I think it's pretty safe to assume that most of America doesn't want AI used like this." Now, among those who are really against Anthropic, mostly it came down to some version of "yeah, but China." Mike3 writes, "People cheerleading for

Anthropic either want China to win the AI supremacy war, or they're so politically brainrotted they don't fully understand what's at stake and think the US government just wants to use it as a tool of oppression." Geiger Capital writes, "Wanted to jump on here quick and say China doesn't give a crap about Anthropic's moral red lines. We can argue both sides, but they won't. They are implementing AI into their entire military chain, and they are doing it with zero democratic

or civilian oversight." Now, while a lot of the chatter was from the chattering class, one person who, agree or disagree with his positions, has been living in these questions for much

longer than basically any of us is Anduril founder Palmer Luckey. Palmer writes, "Do you believe in

democracy? Should our military be regulated by our elected leaders or corporate executives? Seemingly innocuous terms from the latter, like 'you cannot target innocent civilians,' are actually moral minefields that lever differences of cultural tradition into massive control. Who is a civilian and who is not? What makes them innocent or not? What does it mean for them to be a target versus

Collateral damage?

but unelected corporations managing profits and PR will often have a very different answer.

Imagine if a missile company tried to enforce the above policy that their product cannot be

used to target innocent civilians, and that they can shut off access if elected leaders decide to break those terms. Sounds good, right? Not really. In addition to the value judgment problems I list above, you also have to account for questions like: what level of information, classified or otherwise, does the corporation receive that would allow them to make these determinations? How much leverage would they have to demand more? What if an elected president

merely threatens a dictator with using our weapons in a certain way, à la madman theory? Is the threat seen as empty because the dictator knows the corporate executives will cut off the military? Is the threat enough to trigger the cutoff? How might either of those determinations vary if the current corporate executives happen to like the dictator or dislike the president? At what level of confidence does the cutoff trigger, both in writing and in reality?

The fact that this is a debate over AI does not change the underlying calculus. The same problems apply to definitions and use of ethically fraught but important capabilities like surveillance systems or autonomous weapons. It is easy to say 'but they will have carve-outs to operate autonomous systems for defensive use,' but you immediately run into the same issues and more. What is autonomous? What is defensive? What about defending an asset during

an offensive action or parking a carrier group off the coast of a nation that considers us to be

offensive? At the end of the day, you have to believe that the American experiment is still ongoing,

that people have the right to elect and unelect the authorities making these decisions, that our imperfect constitutional republic is still good enough to run the country without outsourcing the real levers of power to billionaires and corpos and their shadow advisors. I still believe. And that is why 'bro, just agree the AI won't be involved in autonomous weapons or mass surveillance, why can't you agree, it is so simple, please bro' is an untenable

position that the United States cannot possibly accept." And again, even if you disagree with where Palmer is coming out on this, I think he is rightly identifying that this, at core, is a question of control. And by extension, and this is where it gets complicated, a question of checks and balances. Part of the reason why people, I think, are sympathetic to Anthropic is that the checks and balances on executive power, i.e. the folks in Congress, don't really seem to be doing their job.

Sure, a bunch of them, like Senator Ed Markey and Senator Mark Kelly, took to Twitter after this to say that Congress needed to be involved, but if that's the case, I think people can be forgiven for being a little bit cynical as they ask, why weren't you involved before? Now, for some, all of this is just a little highfalutin. One commentator on Twitter writes, "If you build a super weapon and it lives in a data center in the USA, it's not your super weapon.

You don't own or control it. The people with the aircraft carriers and nuclear weapons do.

This is how the world has always worked." Roman Helmet Guy, who you might have seen on social media,

writes satirically, "Hi, I'm a private citizen who developed a super weapon, potentially a thousand times more powerful than nukes, and now I'm selling it to the government, but I get to choose who they fire it at and how. Everyone please respect my decision." For many, like Nathan Lands, this was inevitable. He writes, "People don't get it. AI is becoming critical infrastructure. It will power defense, finance, intelligence, everything.

If a private company can decide how the US government is allowed to use it, that's not ethics. That's corporate leverage over a sovereign nation." And yet still, for all the people who are sympathetic to that point, and there are many, it feels to me like where the majority of them get uncomfortable is not with the US government's decision to not work with Anthropic,

but with all the threats and the retaliatory action that seem to be coming with it. Lindy founder Flo Crivello writes, "One, the government is rightly annoyed at a very important vendor thinking they can tell them what to do with their technology. Two, any company has a right to refuse service to anyone, including the government and army, at least when not in wartime. Three, this does not justify the government going ballistic

and treating them as an enemy of the nation." Adam Hulter writes, "As a conservative, I do not support labeling Anthropic a supply chain risk for refusing to comply with an 'all legal purposes' clause. I can also see why the Pentagon can't set a precedent of letting contractors dictate terms. So they should walk away from the deal and cancel

the $200 million contract. However, threatening to label Anthropic as a supply chain risk

is an unprecedented action against an American company. Nothing Anthropic is doing is dangerous for government contractors to use. All they're doing is drawing two lines, two restrictions regarding how their technology can be used. You can't punish a company for not providing a service, especially not in peacetime. If you disagree as a conservative, I want you to think:

if the Biden administration were doing the same thing, would you be against it?"

Dean Ball, perhaps feeling a little betrayed given that he initially shaped AI policy for this government, went farther: "Think about the power Hegseth is asserting here. He's claiming that the DOD can force all contractors to stop doing business of any kind with arbitrary other companies. In other words, every operating system vendor, every manufacturer of hardware, every hyperscaler, every type of firm the DOD contracts with,

all their services and products can be denied to any actor at will by the Secretary of War. This is obviously a psychotic power grab. It is almost surely illegal, but the message it sends is that the United States government is a completely unreliable partner for any kind of business. The damage done to our business environment is profound. No amount of deregulatory vibes sent by this administration compares to this arson."

And some version of this, I think, was what a lot of people felt.

Gayle Wiener writes, "The whole reason Silicon Valley dominated for decades w...

Build here, we'll protect your intellectual property, we won't interfere with your business.

The courts work, the rule of law holds. That was the deal. If you're a brilliant AI researcher in London or Seoul or Berlin or Bangalore right now, and you're watching the President of the United States threaten criminal prosecution against an AI company for having ethics, why would you build in America? Why would you incorporate there? Why would you put your IP under that jurisdiction? Trump just blew that up, in all caps."

Growing Daniel writes, "All of this is just so bad for the defense tech ecosystem. Like who wants to deal with this? What a crappy customer!" Strategy professor Kevin Bryan writes, "Moral of this story is that no smart companies are going to do business with this government. Anthropic built literally the world's best AI and integrated it with the military as a national service. They fulfilled their contract precisely.

Result: they are being treated like Huawei." Now, one part of the story that'll be interesting to watch over the next couple of days is how this will shake out narratively for both Anthropic and for OpenAI. Self-proclaimed AI security hawk Peter Wildeford writes,

"I think it's important to circle back to Sam Altman here. About 20 hours ago people

including me were applauding his moral clarity, but that moral clarity lasted barely half a day. Altman sees a short-term way to torture a competitor and he's going to take it, no matter what happens to OpenAI, Anthropic, the US, or us." Trader Mark Florian writes, "I don't know who needs to hear this, apparently all of Twitter, but OpenAI did not just magically get the DOD to agree to the terms Anthropic was asking for.

Sam is blowing smoke to distract from the fact that OpenAI just took the terms Anthropic considered so egregious that they warranted jeopardizing an enormous part of their business. Assume all OpenAI data will now be used for what Anthropic deemed mass domestic surveillance of Americans." And while I think there is, of course, massively, massively more nuance to whatever was going on behind the scenes with OpenAI and the DOD,

that none of us who are commenting on Twitter have the actual context for, one part of this story is going to be the court of public opinion. Signal writes, "As of this writing, Claude is now number two in the App Store, and there's a real, non-trivial downside scenario here for OpenAI that many aren't really grasping. It's low probability, but structurally interesting. If a clean meme forms on TikTok and Instagram

tying OpenAI to the Department of War, and that framing hits mainstream liberal users, the reaction won't be analytical, it'll be visceral. Most people won't parse contracts, DOD defensive use cases, or historical precedent. They'll respond to timing and symbolism. And if this perception hardens, the competitive alternative becomes emotionally obvious. An association that feels morally dissonant could trigger switching behavior,

employee discomfort, media amplification, and even long-tail brand drift. I'm not arguing companies shouldn't work with the War Department, that's not the point. The point is that in a memetic environment, perception compounds faster than facts. And if that perception locks in among a politically concentrated user base,

the second- and third-order effects on consumer AI could be far more significant than most people expect."

Now, what I think Signal might be missing here is that this is already starting to happen.

It has been widely shared in progressive circles that OpenAI President Greg Brockman is one of Trump's biggest donors this cycle, which has already led many to shift. For some, this is going to be confirmation that this is not a one-time thing, but an actual pattern. Katy Perry, for one, has already switched. Before all this went down, Mike Solana got at the damnable complication of all this, writing,

"Am I wrong or is the situation just this: One, we don't want to force a private company to do something they don't want to do. Two, we don't want private companies running the military. Three, we are in an AI arms race with a country that controls its AI labs. I don't really see any satisfying answer here for a free society that also needs to maintain an edge against a successful authoritarian country racing towards a, potentially,

probably, eventually, brand new doomsday weapon, to be honest." Ultimately, Kristen Faulkner nails it when she writes, "The Anthropic-Pentagon standoff is not a tech story. It's the moment AI ethics stopped being theoretical and became geopolitical. As AI becomes more powerful, the power to dictate how AI can and should be used will become even more sought after; whoever decides the ethics of AI

will be deciding the ethics of society." And so here is my positive note to end on. The situation right now for Anthropic, and for OpenAI and the Pentagon and everyone else, is messy. But for all of the rest of us, it's an opening. It is yet another reminder, a big blinking reminder that is cascading from our little corner of the world into mainstream consciousness, of just how important these conversations are.

As the forces of partisanship always do, many will try to wrestle this narrative into

confirmation bias for their particular partisan story. That is in spite of the fact that at least right now, while the left and the right may in general have nudging impulses in different directions on AI, it is not in any way a hardened, calcified partisan conversation.

That, I believe, is a good thing. It's too important to just be eaten up as another culture

war issue. And so my plea is to ignore anyone who's trying to do that to this conversation. I'm sure there will be a lot more to cover as things evolve, but for now, that is where we are going to conclude this AI Daily Brief. I appreciate you listening or watching as always and until next time, peace.

[Music]
