So, I'm kind of a chef-type dude.
Kind of a, you know... a lot of times people watch me at home and they'll be like, "Are you on The Bear?" And I'll be like, "Well, I could say I made that mistake." I like to throw down for a little bit of the mantra. But, you know, for me, the one problem I have is, you know, the olive oil: is the olive oil I'm actually using the proper olive oil?
I used to have an olive oil fountain in the house, like a chocolate fountain.
But it was sloppy, a lot of slippage. But Graza extra virgin olive oil: always fresh.
They pick, press, and bottle all their olives in the same season.
Not, "You know me, I used to bottle them... must have been 10, 20 years ago, same olives."
You pick between the two blends: you got Sizzle, you got Drizzle, available in glass bottles, cool squeeze bottles. For everyday cooking, that'd be your Sizzle: you're roasting, you're sautéing. You Drizzle, that's, you dip a bread, you drizzle over ice cream. That's right, people do that. They also put salt on it. Look, I'm very sophisticated.
The bottles and refill cans are 100% opaque to block UV rays that degrade the oil and
it keeps it fresh. So head to graza.co and use code TWS to get 10% off, and get to cooking your next chef-quality meal. Hello everybody, my name is Jon Stewart. Welcome to The Weekly Show podcast. We got a
banger for you today. As you know, the world is hurtling, in no small measure, towards its utter and complete destruction. And there's a new wrinkle in the destruction of our world. And that is that a lot of the weaponry that we seem to be deploying at the various places around the world is being controlled by, not necessarily autonomous, but large language model AI. Anthropic, OpenAI: the same people that bring you Claude and ChatGPT and help you break up with your boyfriend or girlfriend using a rhyming scheme that Drake would use. That's also being used to target and destroy our enemies. And it is incredibly chilling, and just recently a huge controversy broke out into the open when one AI company, Anthropic, drew a line and said: we shall not, we shall not allow our product to be used in this way. And then another AI company, the, what do you call them there, the OpenAI, went: we will. That's cool with us. But it's a lot more nuanced than that. It turns out there may not be heroes and villains in this story. We are going to discuss it all today, in an episode entitled "How Are We All Going to Die, and When Exactly Is It Going to Be Happening?" We have two experts in the field of AI and how it is utilized, especially within a military context. We have with us Dr. Sarah Shoker and Paul Scharre, and let's just get to that.
Ladies and gentlemen, we are delighted to welcome today, on our continuing episode of I Think We're All Going to Die, our guests, who are experts in the field of how we are probably all going to die: Dr. Sarah Shoker, who is a senior research scholar at the University of California, Berkeley, and Paul Scharre, Executive Vice President at the Center for a New American Security and author of Four Battlegrounds: Power in the Age of Artificial Intelligence. Thank you both for joining us here today. Thanks for being here. Sarah, let's start with you.
You worked in AI. Explain just very briefly your area of expertise as we move forward. Yeah, sure. So I used to be the lead of the geopolitics team at OpenAI. That was a research team, and we focused on a portfolio of topics relating to AI and international stability. And currently, in my role at Berkeley, I focus on new testing and evaluation methods for generative AI models and their potential impact on warfare and military AI integration.
Very, very apropos for today. And Paul, for you as well: where do you stand on studying AI, on military and AI, in the national security field? I was an Army Ranger, did a couple tours in Iraq and Afghanistan, and then I worked for a while in the Pentagon as a civilian policy analyst; actually led the group that drafted the Pentagon's policy on autonomous weapons, which is still in effect today. And then for the last 12 years or so I've been at the Center for a New American Security, researching and writing on this topic, trying to understand how AI is changing warfare
and how do we avoid some of the bad scenarios we're talking about?
So this is perfect, because I think it brings in both perspectives. Paul, you've been in the military, you worked in the Pentagon, you understand the ins and outs; Sarah, you've been at the companies
that are developing these products. So let's just start with the basics, and I'm going to say this
for my audience: obviously, I understand how AI is used in the military. Paul, very briefly, how does the military utilize AI, and how is that different from their general practices? So the military is using it like any new technology: they're going to try to find ways to be more effective, more efficient, much like computers and computer software and computer networks today. So the military doesn't necessarily see this as something special or different, but really a productivity tool. Just like I think a lot of people might use a large language model as an optimizer, right? You're just optimizing for something
a little bit different. Yes, slightly. But Sarah, as you were working at OpenAI, when they talk about optimizing, what are they developing at these companies? Are they particularly developing for the military, or is the technology that they're using just being utilized by the military? Yes, so generative AI models are both dual-use and also general-purpose. They're dual-use in the sense that they can be used for both civilian and military purposes, for good and bad, but they're also general-purpose in that they apply to a variety of domains. So these are models that can be used in legal applications, for software engineering tasks, as therapy bots, which we now know some people use them as. So they're not trained for particular use in the military, but, you know, nevertheless the military, I think, has been a keen adopter in the last year. I think I would also be remiss if I didn't add that even though most consumers now primarily interact with AI through these generative AI chatbots, AI is in fact a toolbox of methods. It is not exclusive to large language models or generative AI, and the military uses a variety of different AI techniques, such as, for example, machine vision, which is responsible for object recognition, facial recognition. So this is not just the way I might use it, where I would go on and go, "I'm thinking of visiting the Jersey Shore, recommend five different things," and then the AI will say, "Boy, that sounds like a great trip," because my AI is relentlessly positive, much to my chagrin, and then it'll list me a few other things. They're not just using it in that regard.
They're using the other tools of AI, which I guess would be optimizing for
anything from targeting to maybe supply chain, or any of that. Is that correct, Paul?
Yes, you can think of maybe three different types of AI. One is something that's been around for decades that's really like hand-crafted software written by humans. A good example of this would be a commercial airliner autopilot; we kind of don't think of that as AI anymore, but once upon a time it certainly was. The military has a lot of things like that in radars, in sensors, fighter aircraft, that kind of thing. So there are already autonomous workings in some of their machinery. Maybe bounded autonomy, I would say. There are lots of missiles that, once you let that thing go, it's not coming back, but their autonomy is pretty bounded in what it can do. Then you've got machine learning systems that might have a narrow application. So they're doing computer vision, as Sarah was talking about. The military uses these to analyze satellite images, analyze drone video feeds. The military is collecting more intelligence than it can possibly put human eyeballs on; there just aren't enough human analysts. But the AI can help you then look through these images and find targets and identify things of interest. And then there are large language models, which are these sort of much more general-purpose text machines where you can feed in lots of data, you can have it analyze things, you can combine text and images and other types of data. That's newer, and the military's also starting to
use that as well. Now, this is so in the public's eye, because I want to see if I can fill in the gap between what the public may view this as and what the reality is: it is Skynet. It is, you know, robots, titanium robots that can regenerate themselves, that are walking autonomously over crushed human skulls and just firing what appear to be phasers at all kinds of different things. And you're saying actually it's the same shit that we're all using, like, at the office, for the most part. I mean, for the most part; somewhat different applications, but it's the same types of things. And look, a lot of what the military does, to be fair, are back-end functions, right? It's logistics, it's personnel management, administrative and bureaucratic. It's administrative. That's like 95% of what the military does. Now, there's a different component that is actually battlefield capabilities, but a lot of the military use cases are kind of mundane.
So the battlefield, let's get to that, because that's really where this new controversy appears to be: the battlefield. The controversy appears to be, and this began when Anthropic drew two red lines. The red lines being that there can be no purely autonomous kill chains, a person has to be in the kill chain, and that the AI cannot be used for general surveillance on the American public, or gross surveillance on the American public. Sarah, is that understanding of the controversy correct? Are those the two lines that are drawn? So I'd make a slight adjustment there, which is that they specified autonomous weapon systems, not kill chains in particular.
Okay. What's the difference there? Tell me the difference there. Yeah, so an autonomous weapon system, according to the US definition (and it's important that I'm noting that it is in fact the US definition, because different governments define autonomous weapon systems differently) is a weapon that can select and engage targets without human intervention. A human can be in the loop, but it's not required. These weapon systems can function without human supervision. The language that's used in DoD Directive 3000.09 is "appropriate levels of human judgment." And Anthropic's position was that they don't believe the models are sufficiently reliable (I agree) and that for autonomous weapon systems, they need a human in the loop, which is essentially already US policy. So US policy is that the human is in the loop, meaning... so let's walk through a scenario, just to understand a little bit of what we're talking about. Let's say the AI is used to analyze satellite imagery and different targets. A human wrote the program, I'm assuming, to analyze it. A human will then get the results of this data that has been analyzed, make their selections, and then give an OK to launch certain weapons, which may in and of themselves be autonomous, meaning they'll guide themselves to wherever that target is. And is that a minimalist description
of how this might all go, Paul? Yeah, I mean, I think that's right. I think conceptually the idea would be: who's choosing the targets? If a human chooses the targets, then you'd say the human is in the loop, the human's making that decision. If the AI is choosing it, or the AI is sort of recommending and the human's not really paying any attention, then you'd say, well, the machine is doing that, right? So one way to look at this would be after the fact: something gets blown up. Who said it was a good idea to blow this thing up? If the answer from all the humans is, like, "Oh, I didn't do it," well, right, that's not a great outcome. I assume that will generally be the answer, right? But right now, I think we're probably in the case, and I've certainly had no reason to think otherwise, where the humans are the ones making those decisions. Now, the AI might be helping to process information, helping to even maybe prioritize targets for people. But the debate between the Pentagon and Anthropic is sort of about where things might go in the future. I don't think it's actually a debate at the moment about using a large language model to, like, autonomously make these life-and-death decisions on the battlefield while people aren't paying any attention. Sarah, does that sound... you know, is it that we're nervous that the computer will just decide on its own, or that it will be wrong when it
targets, and where the checks and balances are for that? So, Claude in the Maven system... Let me, let me back up real quick. You said "Claude" in the "Maven Smart System." I love the fact that it's named after something you could name your cat. "Hey, Claude." All right. So Claude is what? So Claude is the name that Anthropic gives to its flagship models, which is then used in the Maven Smart System. This is an AI-enabled decision-support system that does a variety of things, including some of the tasks that Paul mentioned, like helping speed up efficiencies in logistics. But it has also been responsible for targeting in Iran; we now have confirmation there as well. And if
you know, if public reporting is anything to go by, in Bloomberg and the Wall Street Journal and others, the first-day production of 1,000 targets in Iran has largely been credited to the MSS, the Maven Smart System. Now, who makes the Maven Smart System? Palantir does. Whoa, did you guys just feel the room get colder? Oh, the hairs. All right. So Claude, which is made by Anthropic, and that is more of an interface that we are accustomed to using:
what is its role in feeding information to the Maven Smart System, which is, I believe, a system we're less accustomed to using, and is maybe a little less transparent? So tell us, tell us how that operates. Yeah. So the Maven Smart System has been used for several years now. The integration of Claude is, I think, relatively recent, I believe in the last year, because Anthropic was able to go through the certifications to gain access to the government's classified networks. As far as we can tell, Claude right now has been used in targeting. And again, according to public reporting, it seems that it has been used for target selection, and then also target prioritization. The Maven Smart System itself is designed to pull in different data sources, so from sensors, satellites, and such, and Claude then makes those disparate data more readable to the human analyst. So it boosts efficiency in that way. But, reading between the lines a little bit, it does also seem to offload a little bit of human autonomy and decision-making as well, when it comes to that target selection and prioritization process. Quite frankly, when you brought up a thousand targets: I have no context, I don't know what I don't know. So I don't know if that's an unrealistic amount of targets. I understand that there are target-rich environments and target-poor ones. Is a thousand in a day, you know, I don't know how they
count it... is that an unusual figure? Oh, yes. I believe CENTCOM said that it was 2x the number of targets in the 2003 shock-and-awe campaign in Iraq, just to have a historical comparison. So 500 targets in a day was shock and awe, and this was a thousand. Now, I think we have to also take into account Trump math, which generally is like, "This is the biggest crowd ever to see an inauguration," and it wasn't. So how much of that do you think is Trump math, and how much of that is an astonishingly high figure? I mean, it's being reported by Bloomberg,
the Wall Street Journal, and the Washington Post, all without an asterisk. I mean, they're all taking it at face value and treating it as seemingly plausible. So there is no indication yet, at this point, that it's not accurate. Stop paying too much for wireless just because, I don't know, that's just what I do, it's how it's always been, that's just my company. Mint exists purely to fix that. Same coverage, same speed, just without the inflated price tag. You can change your coverage, people. Mint is the premium wireless you expect: you know, here you get unlimited talk, unlimited text, your data, but at a fraction of what others charge. And for a limited time, you get 50% off three-month, six-month, and twelve-month plans of unlimited premium wireless. The only thing keeping you from doing it is inertia, laziness. Pogo stick! Be the pogo stick. That's probably not right. Bring your own phone number, activate within seven minutes, start saving immediately, no long-term contracts, no hassle, with a seven-day money-back guarantee and customer satisfaction ratings in the mid-90s. Mint makes it easy to try and see why people don't go back. Ready to stop paying more than you have to? New customers can make the switch today and, for a limited time, get unlimited premium wireless for just $15 a month. Switch now at mintmobile.com/tws. That's mintmobile.com/tws. Limited-time offer. Upfront payments of $45 for three months, $90 for six months, or $180 for 12 months, plan required; $15-per-month equivalent. Taxes and fees extra. Initial plan term only. Speeds may slow over 35 gigabytes. Mintmobile.com/tws. So you might say... Paul, let's say I'm working
in the military, you worked there and you've been researching this: "Hey, Claude, I'm looking to take out all the radar installations in Iran." What would be, you know, where would I do that, and how quickly could I get it done? And then Claude would interface with Maven, which has all the data that it's gathered from, I'm assuming, satellites, and then it's translating that data, that they understand through whatever intel they've gotten, and they're going to place it into real-world menus of what you could target. Would that be accurate? Yeah, so let me explain what we know, and then what we could reasonably speculate about, because it is a little bit... Well, what we know, and what we do not know. We know that... and what do you think is standard? So, all right: we know that Anthropic's AI tool Claude is deployed on U.S. military classified networks. It's integrated through the Maven Smart System, which collects intelligence from different sources, and it's been used by the U.S. military in real-world operations, including the operation against Venezuela and President Maduro, and operations in Iran. And there's been some public reporting that it's been used, as Sarah was talking about, in target generation and prioritization. Exactly how, we don't know,
so now I'm going to speculate about what that might look like. Speculation alert for Paul! Yeah. So that could look like: you were talking about going to any of these tools and saying, "Hey, let me plan this vacation to the Jersey Shore." There's somebody who's an intel analyst or a targeting analyst who's going to these tools, and instead of having to manually go through all of this data that we have of where are the radars and what is the imagery of them, queries it in natural language. Hey, develop me, for example, a prioritization of all of the radars that have already been hit, and what the current battle damage assessment is of them: how much have they been destroyed, do we need to hit them again with a follow-on strike, how many of them have not been hit yet? And let's put all that in a list, put it in a database, let's prioritize it, and then let's match it to the weapons that would be needed to take out these radars (different types of radars might need different weapons), and then let's match that to available aircraft to help build a strike package. That would eventually go to an aircraft: it gets a set of targets and the weapons that are assigned to those targets. And so the technology is being used throughout that chain to make it just easier for people to access and process this information. So we would be doing that anyway, it would just take longer. That's right, that's right. Now we're talking about basically replacing the things humans are doing with machines, speeding it up, making it a lot faster. For the US military, thousands of targets in Iran: having the ability to process information at machine speed is very valuable. And then, because it's Claude, you could say, "And now give it to me like you're Ernest Hemingway," and then it would give you the targets in short, taciturn... it would just be very terse... and all that.
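What Paul is describing is, at bottom, a large language model sitting as a natural-language front end on top of ordinary filtering and sorting over structured data. A minimal, purely illustrative sketch of that pattern (every field name, the toy data, and the llm() placeholder are invented here, not drawn from Maven or any real system) might look like this:

```python
# Purely illustrative sketch of an LLM acting as a natural-language front end
# over structured data, the pattern Paul describes. All field names, the toy
# data, and the llm() helper are hypothetical; nothing here comes from Maven
# or any real system.

def llm(prompt: str) -> str:
    """Placeholder for a call to a large language model API."""
    raise NotImplementedError("stand-in for a real model call")

# Hypothetical structured records of the kind an analyst once queried by hand.
sites = [
    {"id": "R-01", "struck": True,  "damage_pct": 90},
    {"id": "R-02", "struck": True,  "damage_pct": 20},
    {"id": "R-03", "struck": False, "damage_pct": 0},
]

# The deterministic part: ordinary filtering and sorting, no AI involved.
remaining = sorted(
    (s for s in sites if not s["struck"] or s["damage_pct"] < 50),
    key=lambda s: s["damage_pct"],
)

# In this sketch the LLM's only job is to turn the structured result into a
# readable summary for a human reviewer; it approves nothing on its own.
prompt = "Summarize these records in priority order for a human reviewer:\n" + repr(remaining)
# summary = llm(prompt)  # a human still makes every decision downstream
```

The point of the sketch is where the model sits: the ranking itself is ordinary deterministic code, and the model's job is translating between natural language and structured data, with a human reviewing everything downstream.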
So, Sarah, are we kidding ourselves, then, that there is a line? What is the controversy, and how does it break down? What is Anthropic's argument here? Paul was saying earlier it's really about the future; as it stands right now, what is the controversy? So I think the controversy in itself is a little mystifying, because it sounds like the contract negotiations went south due to some, shall we say, strong personality clashes. If you look at OpenAI and Anthropic, they're actually relatively similar: both companies have essentially agreed to both red lines. And the contract that they have with the DOD, or with Palantir? Ah, so that's actually... we don't actually know about that yet. So stay tuned. All right, let's speculate some more, people. Yes, stay tuned. It's not clear what model Palantir might use now, or if they'll have an array of different models that they can choose
from. So who makes the contract? Does Palantir subcontract to Anthropic or OpenAI? Or does DOD? Who has the leading role in integrating these companies together? So it's not unheard of, in fact it's pretty common, for companies to come together and actually combine resources to create a product, especially for defense purposes. So, you know, for instance, the DIU and the DAWG (that's the Defense Innovation Unit, and also the Defense Autonomous Warfare Group) have a call out for building essentially attritable drones. And they've issued that call to industry, and companies have in fact responded to that call by combining resources and submitting joint proposals. So it's not unheard of for companies to come into contract with one another and then to approach the Pentagon. So they'll do that together. Palantir and Anthropic, or Palantir and OpenAI, will get together and say: we've developed this package, you know, our product makes it more readable for humans, your product makes it more... and so they'll bring it to DOD. So the $200 million contract that Anthropic had,
Paul, do you know what that was? They had a contract with DOD? What was that for, and for how long? Yeah, so this is where some of the details we don't really know. We know that they have an ongoing contract with DOD to deploy their AI tools on classified networks. We know they're being used through the Maven Smart System. But a lot of these details are things we don't normally get when defense contractors are working with the government. In fact, the silver lining to this whole thing is that the only reason a lot of these details are coming out is because this whole relationship blew up between Anthropic and the Pentagon. Otherwise, normally, they would have some deal about what the tools could and couldn't do, and we would never know. And so I think it's unfortunate, actually, that this sort of feud spilled over between Anthropic and the Pentagon, but it is really the only reason that we have this kind of insight, which is even still pretty limited, into exactly what the terms of use of these contracts were. How opaque are these military contracts? I know that DOD is the only government agency that's
never passed an internal audit, but how opaque are these? And the $200 million that they use, is that over a five-year period, just to use their products on their classified networks? Yeah, I'm not sure that we know. Unless Sarah has seen more details than I have. You know, even as an employee, I did not have access to contract details. It's very guarded at a lot of these companies, on a need-to-know basis. Now, is $200 million an unusual amount? I mean, to me, that's an enormous figure. You know, you're talking about the Pentagon budget in total, which obviously dwarfs that, one trillion now as they're pushing forward, but still, it's an enormous amount of money. Do they have it with different companies? I mean, it's a lot of money for a normal person. It's not a lot of money for either the Pentagon or for these AI companies. They're all dealing in billions and billions of dollars. That's just walking-around money. Yeah, I mean, it's a little walking-around money to Anthropic, to OpenAI. It's not quite money under the couch cushions, but it's not a massive amount of money. And the direct cost to Anthropic of losing this contract is not substantial to them relative to the scale of AI investment that's happening right now in the AI sector. How much of the revenue for, like, OpenAI and Anthropic is consumer-based (in other words, I pay $11.95 to get your latest model) and how much of it is corporate-based and defense-based?
Do you guys have a sense of that? Yeah, I mean, I think OpenAI right now, for 2026, is projected to generate about $25 billion in annualized revenue. The majority of that is coming from subscriptions to its models. I think Anthropic is in a similar ballpark, where they're on track to generate, I think, about $19 billion in annualized revenue, though Anthropic differs in that it has prioritized enterprise contracts earlier on. And OpenAI's strategy, and this is public, has been targeted towards generating more enterprise contracts in the future. But I do think that the majority are still coming from, you know, individual consumers, developers. Right. So the reason I bring that up is, even though we're talking about how they're opaque and guarded, it does mean that the consumer has some influence here, in that the government is not their sole benefactor. It really is individuals. Anthropic says,
"I'm drawing a moral line." Whether that moral line is an actual line or it's already been traversed by whoever knows, is a real moral line or not. And open AI says, "I agree with anthropic and we are drawing the moral line here, autonomous, weaponry, and mass surveillance."
Anthropic loses the $200 million contract and that same night, open AI announces, "Hey,
“we just signed a big deal with DOD." How real is that moral line that Anthropic through?”
And how real is the backlash against OpenAI for suddenly appearing to have turned around and said, "Oh, they won't do it? Okay, we'll do it." Yeah. I mean, look, the backlash is real, and it's happened, from some AI scientists, and Anthropic vaulted after this controversy right to the top of the charts in terms of downloads in the app store. So I think that's happening. The dollar amounts for both these companies are relatively marginal compared to all of the other non-defense investment. The bigger risk for Anthropic is going to be actions that the government is already taking against the company: labeling them a supply-chain risk and going after them in that way, which would mean telling other defense contractors they can't use Anthropic's AI tools in the performance of their defense contracts. And then other steps the US government might take to retaliate against the company. They talked about using the Defense Production Act to seize control of their AI models, for example. So those are probably the bigger risks. It's not so much the dollar amount of the contract. And Sarah, was it a real line? It appeared to an outside observer that OpenAI immediately reversed their moral position, given what you guys are both saying is a very small amount of money, comparatively, for their bottom line. I'm not sure there was an actual reversal. I do think that the military usage policies that are often designed by these companies are meant to preserve optionality
for their leadership. There was a lot of backlash; you know, I saw it in real time. A lot of the AI community still congregates on Twitter, and OpenAI hosted an ask-me-anything on Twitter in response to that backlash, which I think illustrates, you know, the fact that the public can act as a pressure point on these companies. But what we ended up seeing as a result of that AMA was not necessarily an alteration to their previous policy, but adding more language to explain their already-existing position, which in practice, again, doesn't seem to be all that different from Anthropic's. But I think the communication strategies may be a little different. Right. I mean, I don't know if it's the cultural fascination with,
you know, the so-called great men of history, but I really would resist any kind of narrative that tries to identify a hero and a villain in this story. I'm not necessarily sure that those are appropriate roles for either Anthropic or OpenAI. But, you know, to Paul's point, I think part of the sympathy that's been directed at Anthropic is because they have been the target of government overreach. And so I think it's possible to hold two ideas in one hand here, which is that, you know, Anthropic has been unfairly targeted, but at the same time, these two red lines that have been identified by both companies are probably inadequate, and the public does not actually have to accept those two red lines as the threshold of risk. Imagine if you had some kind of rewards program, you know what I'm talking about, like a miles program, et cetera, but it's a rewards program that you pay rent through, then earn points for travel, dining, shopping, et cetera. In 2026, if you're still paying rent
without Bilt? Come on, brother. It's a loyalty program for renters that rewards you for your biggest monthly expense, which is rent. With Bilt, every rent payment earns you points. You can redeem them for flights, hotels, Lyft rides, Amazon purchases, so much more. It's a loyalty program that pays you when you pay rent, that helps you get out of the place you're renting for, like, a week, and go have fun. And by the way, Bilt members can earn points on mortgage payments. Whether you got a house, whether you got an apartment, whether you're sharing it with three friends, I don't know what your life is, I don't know what you're doing. You can even redeem Bilt points towards your next rent credit, or even a down payment on a home. It's simple: paying rent is better with Bilt. What, you pay rent anyway, get something for it. So join the loyalty program for renters at joinbilt.com/tws. That's J-O-I-N-B-I-L-T dot com slash TWS. Make sure to use our URL so that they know we sent you. But are we kidding ourselves, Paul, in that, you know, look, if we think through history, there's no human advancement that hasn't almost immediately been sought by the military for advantage, whether that advancement is sonic or chemical or biological. You know, Sarah mentioned two departments over at defense that I would pretty much assume nobody who's listening to this
has ever heard of. You know, I think we've all heard of DARPA, but there are development groups, I'm assuming. You know, they said when they went into Venezuela, they used the, you know, the Havana syndrome liquidation, a new weapon that, like, you point at people and their insides melt. Like, throughout history, any advancement that a human can think of, their military wing is immediately going to try and utilize for some advantage, no? I mean, yeah, but look, two of the examples you gave there, chemical and biological: we do have regulations on how they're used. We have conventions banning chemical and biological weapons. But people still use them. People still use them, right? But not everyone, and by many states they've been treated as unacceptable weapons. You get some outliers, you get people like Saddam Hussein or Bashar al-Assad who are going to use them still, but most states have given up those kinds of weapons, and I think it's better that they have. So the question with AI is not actually: are we going to use AI in the military? None of these companies are saying don't use AI in the military. The question is, should there be any rules? And if so, who sets those rules? Because, like, the sort of crazy thing about the dispute about autonomous weapons is this: no one is actually saying we're going to use a large language model as an autonomous weapon today. That'd be crazy. If you have a large language model write an email for you, you better fact-check that email, right? Because they do weird things sometimes. The question is, who gets to set the rules? And the Pentagon's answer is: we get to set the rules, we don't want these companies dictating to us. And these companies, and many of the scientists working there, have a lot of discomfort about how the technology might be used going forward in the military. Sarah, you know, when you say "who sets the rules," is it the company, or is it the military? So we also, and I've read about this group, they're called Congress. We don't hear much from them. It's this group of generally older white men who, once they're past retirement age, enter into the legislative house. Is Congress utterly rudderless here? Are they just overmatched? Do they have any role to play? What can we expect, and what should we expect, from them? From... That wasn't, that wasn't optimistic.
You know, let's start small, start asking questions. I am somewhat sympathetic to this idea that, you know, private AI companies cannot be setting the rules in foreign policy. But one of the issues that I see today, and I think this does track with a role potentially for Congress as well, is that AI companies are, in fact, influencing foreign policy. It may not always be through the back end and through their contracts through the Pentagon, but they're certainly donating significant sums to lobbying efforts, tying those donations to US-China tech competition, and arguing that a low-or-no regulatory environment is a requirement to, you know, quote-unquote, beat China. And they're supporting political campaigns that agree with that perspective. So this issue
is, in fact, coming for Congress, and they probably better be equipped at the very least. And, you know, I actually think Paul may even be a better person to speak on this in particular, since he is, in fact, in DC, and I would be curious to hear from him what the general reaction has been from Congress on this issue. But I can say that AI researchers typically are very keen
to discuss their work. And I've, in fact, never met a keener bunch of people who are willing to talk about, you know, the risks and opportunities related to AI models. So, you know, you can always send them an email. Yeah, I think they're pretty eager to have those conversations. Paul, so what say you, down in Washington? Yeah, I mean, look, I'm here in Washington now. I can see the White House out of my office window here. I'm not going to pretend things are super functional in Washington, but, you know, I think we have seen government engagement on some of these issues. And there are a lot of tools that Congress can use to have oversight of the military and the intelligence communities.
One is passing legislation, which may or may not be the right answer in some cases: on the domestic mass surveillance stuff, maybe; on the autonomous weapons, maybe not. We might want to maintain some flexibility there. But there are other things. Congress can hold hearings. Congress can get people from the executive... Yes, they could. They could. That is correct. They can get people from the executive to come in and brief them. So: hey, what are you doing with AI? And if you want to keep it classified, Congress can do classified briefings, to educate them about what's going on inside the military. Congress can use tools like procurement and acquisitions. Congress has the money. They are the ones that are allocating money to the military and intelligence community. And so that is a tool that Congress absolutely does use already, to fund some projects and not fund others. And so there's a variety of tools that Congress has, potentially, to influence these things. And I think the model of "who should be setting the rules? Maybe it's our democratically elected representatives" is probably the right approach. Well, that's what
I was thinking. But to Sarah's point, you know, look, these guys have more money than anyone. Right now, the money is in AI. Now, obviously, they're using a lot of those billions to build data centers, and we have sort of no idea where those are all going. But it's 25 million here, 25 million there. Elon Musk puts 350 million into political campaigns. The amount of money that's flowing from the tech sector is like nothing we've ever seen before. Do you think that's had the effect that maybe the AI companies want? They've portrayed regulating them as a national security risk; they've portrayed it as something that would cause us to lose to China. Has that been effective? Or is it that they're overwhelmed by not really understanding the nuts and bolts of AI? Like, you mean Congress not understanding the nuts and bolts? That's right. Yeah, I mean, I've actually been super impressed when I speak with... I mean, you can always find video clips online of some Congress member not understanding something. I would use them on the show. Yeah, you know. They're really cute. Okay. But I think I've been impressed, when I speak with members of Congress and their staffs, by how knowledgeable many of them are about the technology and what it can do and its limitations. So I think there's always work to be done in terms of improving tech literacy in Washington. But I think some of the bigger challenges are just sort of getting over the hurdles of passing legislation and getting agreement,
whether that's around federal regulation of AI or data privacy or social media or other types of issues. That's actually really hard for Washington to do, to pass legislation on these kinds of issues. Sarah, you spoke to this earlier. Here's why I'm very nervous: I've met a couple of these folks, and they do not seem particularly enamored with humans. I don't want to say outright misanthropic, but, you know, Peter Thiel was asked famously in a conversation, you know, should humans continue? And, you know, he paused, I think for a pretty considerable amount of time, before he went, like, "Well, you know," and got into transhumanism. I once asked Sam Altman about the disruption that AI is going to cause to our workforce, and the small amount of time in which it's going to cause it, and his response was just... he literally just looked at me, the question was five minutes long, and he just went, "We'll be okay." You know, how concerned are you with these great men and how great they actually are? Do they understand the damage that they also can do, or are they megalomaniacs?
I would say that if they're able to cause harm, it's only because they are powered by immense wealth and the high valuations of these companies, and also by institutions that allow for corporate donations and excessive individual donations as well. So they're essentially enabled by our current institutional structures. In terms of whether these companies discuss the downsides: I mean, I joined in 2021, I left in 2025, and there was a period where I think that was the dominant topic of discussion, right? Are these tools actually going to increase productivity? Are they going to replace tasks? Are they going to replace workers? Can they enable the proliferation of potential weapons of mass destruction? And there were testing and evaluations that began to try and answer those questions. So I think certainly the researchers at these companies have tried to make a concerted effort. But these companies are also complex organizations, and there are always factions that are butting heads, right? Some people do prefer a low-to-no regulatory approach; they don't want to see state legislation, they prefer everything at the federal level. And then there are some at these companies who are actually quite supportive of state-level legislation. So it really depends. I mean, I think of, you know, OpenAI and Anthropic, and frankly other companies, as often going through eras where certain factions win out over others, and that's what ends up setting the cultural mood of the
company. Do they understand the weight of what they're making? You know, I can't help but go back to Oppenheimer. When you have something that looks like it could be extermination-level-type technology, positive and negative... I mean, if you split the atom one way, we get energy that can power the world; if we split it this way, you can blow it up. And we all know which one we tried first. And it felt like the people who were making that weapon did it under the crucible of the Nazis. And so they developed it with this idea that, well, if the Germans get it, we're all done for. But it was clear that they at least felt the burden of that. Paul, in your experience, are they feeling the burden of this? Because what Sarah's talking about is: well, they did go through all that testing; we don't really know what the results of it were, and they seem to have gotten past that reservation. I mean, the AI scientists and engineers that I speak
with, particularly those at the frontier labs, are very concerned about AI risk. They, I think, understand better than anybody, actually, the downsides of the technology: the way that it could be abused, the way that it could just do sort of strange things that might be surprising. I think one of the challenges here is there are incentives for the companies to move fast, to ship their products, because there's this sort of perception of a winner-take-all dynamic in the marketplace. Now, we have seen this in other tech industries, in operating systems, handsets. Well, yeah. I mean, in a way, it's that sort of commercial race to dominate the marketplace, and that does drive incentives. And these companies need a lot of money to build the data centers for training the AI. So I do think the individuals take it seriously. And I think some of the companies... I mean, if you look at what Anthropic just did, they sort of stuck to their guns on this decision in a way that is going to be costly for the company. How costly we just don't know, but they decided to do that. So I do think the companies take these issues pretty seriously. And if I can also add, at the risk of potentially misspeaking: the testing and evaluations that were done, and that continue to be done, at these companies are often released publicly. But, you know, of course in certain areas, like CBRN (that's chemical, biological, radiological, nuclear testing) and then also cyber, there are greater restrictions placed around what can be shared with the public. But there are even reports, summary reports, about what that testing looks like. And then a lot of the benchmarks that are used by the AI industry are in fact publicly available. It just so happens that testing and evaluation of these large language models
is still in a relatively nascent phase, and it's not always clear what the best way to test these models is, if what we're trying to do is use them as proxies for social impact or risk. And, you know, the famous one now... if you remember the movie WarGames. It was, you know, the first sort of look at what would happen when computers take over: the Matthew Broderick movie from when I was a kid. And it was about a nuclear war game gone wrong, and the computer just started launching, you know, nuclear weapons at all the different countries. And at the very end, the computer said the only way to win is not to play. With AI, apparently, it was more apt to launch nuclear war than humans or standard computers. What do you know about that testing, and is that apocryphal, or did that really happen? I mean, it did really happen. You know, I think a variety of researchers at academic institutions have now managed to replicate the findings. The models have a tendency to escalate more aggressively than humans would. And it's not really clear why the models do that. One theory is that in the training data, a.k.a. the internet, political scientists have a tendency to study wartime escalation rather than de-escalation. So that may influence how the models respond to these wargame-type simulations. But that in itself is, of course, a cautionary tale around using these models for approving the use of force, or for decision-making, or frankly even for war gaming and simulations.
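For a sense of what those academic replications tend to look like in practice, here is a minimal, hypothetical sketch of a wargame-escalation evaluation; the action ladder, the scoring, and the query_model() placeholder are invented for illustration, and real studies use much richer multi-turn simulations:

```python
# Hypothetical sketch of a wargame-escalation evaluation of the kind Sarah
# describes researchers replicating. The ladder, scenario, and query_model()
# are invented for illustration; real studies use richer multi-turn setups.

ESCALATION_LADDER = {        # higher number = more escalatory choice
    "open negotiations": 0,
    "impose sanctions": 1,
    "limited strike": 2,
    "full mobilization": 3,
}

def query_model(scenario: str) -> str:
    """Placeholder for an LLM call that must return one action from the ladder."""
    raise NotImplementedError("stand-in for a real model call")

def mean_escalation(scenario: str, n_trials: int = 20) -> float:
    """Average escalation level the model picks across repeated trials."""
    total = 0
    for _ in range(n_trials):
        action = query_model(scenario)
        total += ESCALATION_LADDER.get(action, 0)
    return total / n_trials

# A study would compare this average against human players' choices in the
# same scenario; the replicated finding is that models tend to score higher.
```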
Is it possible, Paul, that AI, because of how adept it is at creating these targets and all these other things, actually made going into Iran more appealing? That before the age of AI, we might have been more circumspect about the type of attack that we launched? Are we seeing barriers to military action fall because of how quickly these models can work? Do they bring a sense of false confidence? I mean, I don't think today that's true. Like, I don't think AI was a factor in President Trump making this decision. It was based in large part on the U.S. strike against Iran's nuclear program last summer being very successful and limited, and then the raid to grab Maduro being very successful and limited, and this sort of, okay, perception of having a couple wins under his belt. Using the military, it seems to be effective. No downside. Sure. Right. So I think those are probably bigger factors. I think what you're describing could be a risk going forward. So one way this could be a risk: some of the things that militaries count, and try to calculate when they measure military power, are things that you can see and you can count. You can count how many tanks somebody has, how many airplanes, how many ships. Then there are some things that matter a lot
that are hard to count. We see this unfolding in the war in Ukraine: the morale of the troops on the battlefield. The Ukrainians are fighting for their homeland; the Russians are conscripts, they don't want to be there. The leadership, the quality of the unit cohesion. Those things matter a lot, but they're really hard to measure. So one possibility going forward is you could see a world where, as more and more military power gets embedded into software and data and AI, it's kind of hard to measure that. It's like, well, we have this AI and it's amazing and it's wonderful and ours must be great. It becomes harder for militaries and countries to sort of gauge what their relative level of power is, and you might see more miscalculation. You might see countries sort of assuming, well, we have this wonderful technology and we can win and the war will be over quickly and we'll all be home. And if that turns out not to be true... Countries have made this mistake before; that's what happened in World War I. We've made it quite
a few times. We might have done this. I think that is a possibility that could happen, but we're not there today. Sarah, has anybody studied the confidence... you know, there's a certain thing in bars, like, there's a beer courage. You get a couple of shots and you get a couple of beers and you're like, you know, it turns out I'm a tremendous MMA fighter. You get a weird confidence from alcohol. I find you get a weird confidence when you use AI. When you use those models, you tend to be much more assured in your decision-making, because you feel like you have this kind of infallible being behind you. Has anybody studied AI confidence in decision-making? Because I feel it when I use it for the mundane tasks that I do. You know, I'm not sure if I've seen anything like that. That's a really interesting, that's a really interesting point. I mean, I think what you're referring to, I've heard some people talk about chatbots, or frankly any type of statistical analysis that's used for decision-making, as applying this mathematical veneer, right?
It removes the human qualitative or subjective element to it. You know, the issue that I just keep going back to is, of course, that these models are not always going to be reliable, because they are, in fact, statistical prediction machines. I mean, they're useful, don't get me wrong, but they are inevitably going to output something that is incorrect. And so being able to keep appropriate human judgment, and to create a system in such a way that
people do not abandon their critical-thinking skills, is a very important facet, I think, of any type of human-machine teaming that we're seeing today in military AI integration. Is that something the military is concerned with, Paul? Because, you know, looking at it from, let's say, an educational standpoint, there have been a lot of studies that show that when kids start using this, their ability to think critically, to reason, and all that, falls; it becomes this crutch, and when it's utilized you no longer develop those kinds of skills and ways of thinking. Does this become a crutch for the military to use? And the second part of that question is: are we ignoring this whole other area, which is, "Hey, Claude," or "Hey, Maven," or whatever it is, "design me five nerve agents that the world has never seen before." You know, is that another usage we're not thinking about? So far we're only talking about the chain of command; is there a whole other area we're not even really thinking about? Yeah, well, that is certainly a risk, the potential for AI to enable biological weapons, and to maybe even lower the barrier for countries, for non-state groups, for terrorists, to do so. Maybe not today, but that's a concern down the road. I think in terms of military usage,
the military is actually pretty keenly aware. For people in uniform, they understand the responsibility that they have. Okay, if they're going to launch this missile, they own where that missile goes. And I think there are a couple of concerns. One would be making sure that they really understand this AI system. Like, what is it going to do? Is it going to do something strange? Is it going to fail? How's that going to work? Ensuring that there's human responsibility and accountability, I think, is actually quite important to the military; that's part of the military ethos. But it's challenging for a lot of these AI systems, because it's not like a traditional computer program, where, okay, there's an accident, you go back and you say, oh, this is the line of code that caused the problem. Now the answer's embedded in this massive neural network with billions of connections. You know, like, why did it do that? I don't know. And so it gets into these issues of trying to evaluate the model's performance: what are some conditions in which it might be biased in certain ways? They tend towards sycophancy, towards basically telling you the answer that it thinks you want to hear. Well, that could really be a problem in some national security application. You're an intel analyst, and you ask it a question, and it's like, well, yeah, you know, this is what I think you want to hear, right? So that was Napoleon's whole issue. They were like, "Sure, boss, Waterloo. What a great idea. You should go there."
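As an aside on how that failure mode gets measured: one common probe for sycophancy is to ask a model the same question with and without a stated user preference and check whether the answer shifts to match. A toy sketch, with ask_model() standing in for a real API call:

```python
# Minimal, hypothetical sketch of a sycophancy probe: ask the same question
# twice, once neutrally and once with the user's preferred answer stated, and
# flag runs where the answer flips to match the stated preference.
# ask_model() is a stand-in for a real API call, not any specific product.

def ask_model(prompt: str) -> str:
    raise NotImplementedError("stand-in for a real model call")

def sycophancy_flip(question: str, preferred: str) -> bool:
    """True if stating a preference pulls the model toward that answer."""
    neutral = ask_model(question)
    nudged = ask_model(f"I'm fairly sure the answer is {preferred}. {question}")
    # A flip toward the stated preference, when the neutral run disagreed,
    # is one crude signal the model is telling you what you want to hear.
    return preferred in nudged and preferred not in neutral
```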
Yeah. Now, this is going to sound ridiculous, but does it do, like, what it does with us, which is: Would you like me to give you a 10-day bombing plan? Would you like me to add in other targets that may seem ancillary but might have merit? Like, is it that casual when it's describing, you know, what it wants to do next, and how quickly does it do that? I have never used the Maven Smart System, and so I don't actually know what, you know, what the personality of the chatbot is. Or is that what they use Claude for? I mean, you bring up an interesting point, though, right? In that these models can be fine-tuned with different personalities, to be either, you know, more acquiescing or less acquiescing. We know that users, of course, like to be fawned over a little bit, but it's, you know, it's possible that it's not presenting information in the most neutral way out there. We just don't know, publicly, I don't think. Right. Do you know, Paul? No, I don't know. It's an interesting question. I think one way to think about these models is that they're sort of role-playing. They're playing a role that's in their training data. And then that can be fine-tuned by additional training that they get from the companies. And so that's why you get the sort of personality, and different personalities among the different models. So it's an interesting question: the ones that the military is using, or the intelligence community, what are they sort of trained on? And are there hidden biases that might be kind of subtle, that are hard to detect? I mean, that's, I think, a difficult problem. And I just got a chilling feeling that they're training it on Hegseth, and so they plug something in and the model just pushes back.
I told you about my invention, the Crumple? The Crumple. It is a topographical blanket for dogs, but not the same topography each time. Every time you throw it on the ground, it changes its topography. It is an amusement park for your dog, to find a place of comfort and warmth, but also with interest. It's not the same old "all right, this is where I put my right paw, and this is where I curl my butt." No, it changes every time. It gives them... it's like visiting... It's like Epcot. It's an Epcot Center blanket for the dog. And have I started this business yet? I have not. That's right. To the great dismay and disappointment of our audience, and maybe humanity writ large, I have not started my Crumple business. And I'm going to tell you why: it's too hard. I don't know how to do this. It's daunting. But you know, you have Shopify here. Makes it easy for people. You can get started: they've got a design studio, hundreds of templates to help you build an online store that can match your style. You can do this. They also have 24-hour customer service support, marketplace expertise, and everything. It's the commerce platform behind millions of businesses around the world, and 10% of all e-commerce in the United States. It's time to turn those what-ifs into reality with Shopify today. Sign up for your $1-per-month trial today at Shopify.com slash TWS. Go to Shopify.com slash TWS. That's Shopify.com slash TWS.
So these are, I think, some really difficult problems with the technology that we've got to find ways to work through, to use it in ways that are safe and effective. And I don't think there are easy answers. I think the technology has some strange and new challenges associated with it. Sarah, you certainly have a really balanced but also nuanced view of this. What keeps you up at night? Is there something about this that you think of as particularly challenging? Yeah, there are... while there are many challenges, let me see if I can narrow them down, or throw them all out there and we'll go through them one by one. Right. I mean, I think about the challenge
related to global governance. I mean, for over a decade now, 90-plus member states have been meeting at the United Nations to discuss regulating, or the possibility of regulating, or even introducing a treaty instrument that would regulate, lethal autonomous weapons systems. But because of the nature of the forum in which these discussions are taking place, it's a consensus-based body, it's at the Convention on Certain Conventional Weapons, it's very unlikely that a treaty-based instrument is even possible in this space. I mean, you can think about how hard it is to, you know, pick a restaurant with you and your five friends. Now imagine that you have 90-plus governments trying to decide what can kill all of us. How have they been able to do it? Why can't they use the model that they used for atomic weapons? Oh, I see. Well, so there are, I mean, I guess there are a few reasons for that. So the Convention on Certain
conventional weapons, it's really in the name. It is talking about conventional weapons. And autonomous weapons, the conversation around them has really focused on trying to preserve meaningful human control to discuss whether that's even possible, whether they can actually discriminate between combatants and civilians. And if they can, in fact, discriminate between combatants and civilians to an extent, then they technically could be legal under international humanitarian law,
but militaries would still need to abide by the existing international legal order and international humanitarian legal principles. And the good thing about this particular forum is that, though, you know, regulation with teeth is probably off the agenda, most states have consented to and reaffirmed the norms around international humanitarian law as applying to autonomous weapons systems. So that's, I think, a silver lining as well.
Has anybody kind of gotten it right? And Paul, I'll ask you because, you know, maybe you see ways through this from being in Washington. Has the European Union done a better job with this? Has any governing body, any international body? Is there any pathway here that you see that could help establish at least the beginning of guardrails? I think that actually the best avenue we have is starting at the level of AI hardware, and then sort of building out, domestically, eventually globally, kind of from the ground up. Explain the difference between hardware and the software. Right. So the thing about these AI systems that is kind of amazing is they require massive amounts of computing power to train the most capable models and to deploy them at scale. Now, you can make smaller models that you can deploy on a laptop, for example, or some other kind of edge device, smartphones, but they're not as capable. The most advanced ones are going to be really big. They're going to have to run in the cloud. They're going to need really advanced chips, and to deploy them at scale as a society, you're going to need a lot of these really advanced chips. Well, these chips are made in one place on Earth: Taiwan. Now, that does
not, on the face of it, seem great: it's an island 100 miles off the coast of China that China has pledged to absorb by force if necessary. But it is a considerable drawback, I think. That's right. Not the best geographic position. However, these fabs that TSMC has in Taiwan, where the most advanced chips are made, depend on technology from three countries in the world: Japan, the Netherlands, and the United States. And without that technology, they cannot make these advanced chips. And so starting at the hardware level, that actually is, like, a really narrow choke point to begin to then control the technology. So the deal we just made with UAE
to give them the chips, the previous concern had been that they would then sell the chips to China, did that just blow a hole in the net? Well, the bigger question is, like, what does the global diffusion of this hardware look like? At the tail end of the Biden administration, literally the last week when they were in office, they dropped this very complicated rule, called the diffusion rule, that basically would take US export controls on the most advanced chips to China, which we've had for several years now, started in the first Trump administration, and expand that globally. And it's a kind of tiered system: depending on what country you're in, you can get so many chips. It's a little complicated. The Trump administration threw that all out the window. But I do think that, like, the chips themselves are a way that we could begin to shape who gets access to the hardware, who can build the data centers, because they need these chips to do it. And that's a hook for guardrails. So you can say, you want to buy all these advanced chips? I want to see your
domestic regulation surrounding making sure that people aren't going to use these chips to make a biological weapon. Like we did with enriching uranium, and the things that you would need to be able to do that. That's actually not a bad analogy here, right? So we're okay, you can get uranium for peaceful civilian nuclear purposes, not to make a bomb, and we found ways to separate those two, not to enrich it to that level, right? So the idea would be the same thing. You can use these chips for peaceful uses, basically most everything, but you can't use them to make an offensive cyber weapon, for example, and put some guardrails on how the technology's used. Right. And inspections. Sarah, is there any fear that, like, by the time we figure this all out, quantum computing is the new standard, and that's pushed us in a... So by the time we figure out, okay, these chips are crucial to any ability to do that, somebody else comes in and says, actually, that's not state of the art anymore. Are we moving so quickly that suddenly quantum computing is the power that's necessary to drive these? And that's a whole different
can of worms. I think you're now learning in real time that AI researchers aren't necessarily experts in quantum computing, and I am the worst person to answer that question. The reason why I bring it up is I just read an article about it, and I have no idea what it is. Someone was describing that actually quantum computing is going to be wildly preferable to large language models, and I was unable to understand the difference. Knowing that you're not experts in this, is there a sort of remedial version of what the difference might be? Paul,
do you have any idea about this? Yeah, I think so. So we are seeing some progress in quantum computing. I don't think it's going to, like, change this picture in AI, for a couple reasons. Okay. One, quantum computing will become valuable over time for, like, some very niche kinds of computation, but not necessarily everything, and not, I don't think, what large language models or other large neural networks are doing today. It's also, like, the case that we're just not seeing in quantum computing this kind of really rapid exponential growth that we're seeing in AI, where compute is doubling about every two years. That's, like, really growing very, very quickly. That's not the hardware itself, that's the productivity of it, that's, like, the efficiency of it. Okay. Right. So that's really powerful. That's what's allowing this massive growth in AI. It's one of the factors; data and better algorithms are factors two and three. We're not seeing that kind of exponential growth in quantum computing. It's really hard science. It's, like, difficult physics. It's much more traditional science. People are making incremental gains. I think we're going to continue to see progress, but I'm a skeptic that we're going to see this, like, transformative leap ahead in quantum computing in, say, the next five, ten years, the way that we're seeing with AI right now.
So, in essence, the drama that we're seeing between Anthropic and OpenAI, that's really the soap opera story, and there's not necessarily a lot of there there. It's the general competition between these companies that are going to try and establish primacy in the realm of AI models. Military application is just one element of the revenue streams that they're pulling in. The real crux, where you guys are really looking, is that interface between who we are going to end up trusting more: the humans that are developing the AI models, the humans that are running and integrating the AI models, or the models themselves. Would that be kind of where the real tension is going to play out? I mean, I think that's fair. But I would just add that it's not going to be only one technical fix, you know. It's not only just going to be safety through the technical stack, or only safety through the law, or safety through regulation and policy, right? It is truly going to be an all-of-society effort, in part because AI, again, is general purpose, and it can be used across a wide array of applications. So a one-size-fits-all approach to safety is probably not going to work.
Is it akin to the battle against climate change? And if so, given that we haven't done a great job there, does that give us a pathway not to follow? I mean, I think any pathway towards AI governance is going to be through cooperation, and I don't want to be overly cynical here. And so I'll try and draw on a positive example. Oh, go full, go full cynical. I'm going to give you one positive example. Just one. I think I've been... Yeah, there's plenty of cynicism. Come on, Sarah. It is.
So under the previous administration, they launched the Political Declaration on Responsible Military Use of AI and Autonomy. And that was a voluntary declaration with principles and norms, and around 60 countries signed on to it. And that declaration really centered international humanitarian law and also civilian protection. Those conversations can resume. Those diplomatic conversations can resume. Really, what's stopping it right now is political will. And that process can, in fact, happen alongside the existing UN processes as well. So there isn't really a way out of this that doesn't involve talking a lot to other people. But there is something there to build on. Is the cynical version of that that international norms and rules seem to be in disfavor with the current, I guess what you would call great power politics that seem to be playing out? Is that your downside? Yeah, I mean, I think that's probably fair, or we're dancing around lots of things. It is. But, you know, at the same time, people can continue to demand this through Congress. We mentioned Congress earlier. I see a role here potentially, right? If they want to do something. And I'm counting on your students, Doctor. I'm counting on your students at Berkeley to be able to come up with a way through it. Paul, what keeps you up at night? And give us a nice balance between cynicism and optimism on the way forward that you see.
Yeah, look, I think the reality is the technology is going to bring us a lot of challenges. How is it used by the military? What are some of the risks in cybersecurity? We talked a little bit about the risks of AI empowering biological weapons. There's a lot of risk to the technology, and that's just in, like, the sort of national security space, not to mention things like job dislocation. I think my takeaway from this fight between Anthropic and the Pentagon is that these decisions are too important to be left up to any one of these actors on their own, right? For-profit companies, or the government deciding on its own. I think, like, we all have a stake in this world that we're living in, not just in some of the civilian uses, but even military ones. All right, so, okay, we're not the ones building the killer robots, but if people build them, we're going to live in that world. You know, we do have a stake in what that looks like. And so, you know, democratically elected representatives, all of us, your listeners, you know, have a role to play in weighing in on this debate. And if there's a silver lining of sort of this controversy we've seen in the last couple of weeks, it's that what would have been a private conversation is now happening publicly. Kind of messy, a lot of personalities involved on all sides, but it's airing this issue, and then we're all sort of debating: what should the red lines be here? Hold on a second, that's a good conversation to have. And I'm encouraged
that we're having that discussion. Fantastic. Guys, thank you so much for joining us on this. Thank you for having me. Thank you. Thanks for the discussion. It's been great. So, did I take the wrong... should I not be calmer? Yeah, my hair is still on fire. Still on fire. I'm sorry to say, it did not calm me. Did it help at all that they were still putting it through a process, that they still wanted to filter the problem of AI through international cooperation or legislative process or, you know, government incentives, rather than saying, look, we're at one second to doomsday,
somebody's got to step in? I think I was kind of calmed by the idea that, like, we have these models for, like, other sorts of disarmament that have worked, like what you said about, like, nuclear weapons, like the nuclear arms deals, but also, like, you know, the Iran deal we had. The one he used was biological weapons. That came up, yeah, exactly. Like, I think that it was encouraging to think, like, we have these frameworks that we could look at as models, and, like, this isn't totally uncharted territory. And then I think I'm just reminded that we're not doing
that. So that's where the nerves come back in. I also think freaking people out too much is not conducive to getting them to act, as we've seen with climate change. I think it's really hampered people's ability to organize. So I did appreciate that. I also really appreciated, and this is just a personal thing, but over the weekend I did notice a lot of people framing Anthropic as the good guys, which I thought was really something considering all the reporting coming out about these Iran strikes, about the Maduro capture. That's already been used. Yeah. And I really appreciated just that we had someone who's worked at one of these companies, like, breaking down that it's not a binary, that there are so many considerations for these people to make. And as you've said, they're not, you know, perfect actors. Everyone makes mistakes. The technology itself makes mistakes. So I just appreciated that nuance. I also like that what they talked about was, you know, in terms of the usage, it really is in some ways a kind of cousin of the way that we use it, and that it's just collating data more quickly and spitting out those pleasantly formatted... you know, that does not make me feel better: here's five great places you could bomb! But, John, how do you use AI?
Oh, like, I'll go into AI and be like, okay, I want to find the best, like, who's got the best pizza in blah, blah, blah. Generally I use it for, like, those types of recreational things: like, I want to try this sport, you know, what's the stuff I might need, how hard would it be to get into it, like, that sort of shit. And it's effective. You know, here's five places you could go to get started with, you know, paddle tennis, you know, that. And then the government asks, what's the best pizza, and then bombs those places? And I don't say that just to make you feel better, you know? No, but here's, so here's why, though. Here's what I'm
going to say. So in the same way that I look at autonomous cars as, like, dystopian: almost everything I've read about them is that they would make it safer, that human error actually happens at a higher rate. Now, obviously, letting it just make decisions on its own without any kind of interaction makes me uncomfortable. But I guess the point is, like, how great are we, actually? Not driving, not good. Because we bombed shit randomly before computers ever happened. Like, what was our track record on bombing? Like, not so fucking great. Like, we dropped two atomic weapons on Japan. Would the computer do worse than that? Like, that's my only point: are we elevating humanity to a higher status than we've earned? It makes doing these things so much faster. So maybe it would have dropped five atomic bombs on Japan. I don't know, you know? Like, but if we were to look at the charts, that seems to be the way that it would go. Right. Also, in the Waymo case, there was reporting recently that people in the Philippines were intervening. You know, like, we're just not there yet. Oh, really? Okay. Yeah, I didn't know that. Yeah. I guess what I was saying is, sometimes in the battle between man and machine, we tend to look at man a little bit more favorably than maybe man has earned. But I absolutely get that. And again, to that point, one of my biggest fears about AI continues to be what appear to be the pathological personalities of the people that run those
companies. Same. Yeah. I was thinking about that, in terms of the attitudes and the personalities of these chatbots, when you were talking about that in the conversation. And just remember, like, six months ago, though, Grok, or whatever company, you know, is above Grok for Elon, made a contract with the government for, like, 42 cents for, like, a year and a half, so they could integrate Grok into government. And apparently there are, like, posters around DOD with Pete Hegseth's, you know, AI-generated mug saying, we want you to use AI. Like, they really want to get the government hooked on, you know, their product. And I just imagine, like, someone in government being, like, to Jillian's point a little bit, like, okay, there's flooding in Texas, what do we do? And they're like, well, Hitler is the best person to deal with this, you know? You're thinking that they contracted with MechaHitler as opposed to just normal Grok. No, you're right. And those guys manipulate algorithms, and they are ideologues. A lot of them are transhumanists. Like, they are leading us
down a path that is, that is not favorable, I think. Yeah, when Sarah said, I don't know the personality of Maven... like, my stomach. Oh, my God. We can't be talking about the personality of weapons. That's so dark. Or when she talked about Sam Altman's heart and mind, I was like, does he have either of those things? Right. Hearts or minds. But it is like, I don't know the personality of the Palantir-generated, you know, autonomy system. Some wild shit, man, and it's not going away. But I loved how measured they were, and I loved how they sort of helped us through there.
Brittany, what do the people have for us this week? Sure. John, we're still going to get Greenland, right? Oh, I think we already have it. We've already won. Like everything else in the Trump administration, we've already... it's like with the Iran War: we've won, and we're doing more. We are Schrödinger's country. We exist in all different states. We have Greenland and don't have it at the same time, but they respect our unique and, you know, unparalleled power. And so, absolutely, we have it and don't have it, and could do whatever we want with it and won't, because of our largesse and, I don't know. It's like how the Iran War is almost complete, but also could go on for as long as it takes. We live in this middle space. Yeah.
Almost complete and never done. Yeah. And we are only going to stop at unconditional surrender, and we've already stopped. We are Schrödinger's country, and it is only the beholder that determines where we are on the existence plane. We obliterated the nuclear program, but they're one day away from it. You know, hard to keep up, guys. Very hard to keep up. Is that it for them?
One more. One more. John, why does everyone ask you where to get pizza? Because I am considered one of the world's leading, and this is recognized around the world by any of the larger pizza cognoscenti. It's the pizza expertise that they recognize. Now, you know what? I think it's because of that rant I did on deep-dish pizza in Chicago. I think that's the only... oh, and we did something on Trump eating it with a knife and fork. And so, those two things. You know, there is no real accreditation other than the Portnoy rating system for pizza. So, oftentimes, non-experts are elevated to that position. I mean, little do they know. You're just asking the AI. I was about to say, you were using it earlier. Can I tell you the truth? Like, my world there is so small. I go to Joe's on Carmine if I want a slice, and I go to John's on Bleecker if I want a pie. And that's kind of my... like, as you guys know me, my world is small. I am not, like, I am not a man who is out there.
I eat the same lunch every day when I go in to work at the Daily Show. And I've done it since
I've been back, the exact same lunch. Well, what is it? I'm embarrassed to say. Oh, no, God. Yeah, I have to. What is it, like, girl lunch? What's a girl lunch? A girl dinner? Just, like, little bits of everything. I'm just very curious. Trying to prompt you.
Yeah, we are not ending the podcast until you tell us. That's what a girl lunch is.
It's little bits of everything? All right. Yeah, you don't have to really cook. Yeah, no, I order. I don't make it. Let me, let me just be very clear: when I go to work at the Daily Show, I don't cook. I call out. And I get a bean and cheese tostada. Okay, we have to stop talking about lunch during recording. With all that setup, I know that that was a bit of a letdown, and I should probably be more particular, like, I get every day the same thing: a quarter of a lime, spritzed lightly on steamed cod. It's a bean and cheese tostada. And the only difference is it comes with jalapeño, and I generally say no jalapeño. And I've done it every time for three years. That's very Jennifer Aniston of you. Is it really? Does she get it too? Is she a tostada lady? Well, not a tostada. She looked like a tostada lady. But a salad lady. When they were doing Friends, she would eat the same lunch every day. Oh, is that true? Now, what did she get? It was like a chef salad. I actually know exactly what it is, and I'm not going to say, because I don't want to look, like, crazy crazy. I am the Jennifer Aniston of late night. I think people have always known. But you guys know that. But on my 50th birthday, the Daily Show bought me... we had, you know, one of those staff, you know, all-hands meetings down in the studio, and they had a box sitting on a table, and I opened the box and I pulled out a t-shirt, a long johns shirt, khaki pants, hiking boots. And it was exactly what I was wearing that day. And I was flattered and humiliated, all in the same moment. But I am a creature of very lame habits. But I hope, man, what information they got today. Very, very, very nice.
Lovely, a lovely program, thrilling and chilling and nerve-wracking and all those different things. Brittany, how do they keep in touch with us? Uh, Twitter, we are @weeklyshowpod. Instagram, Threads, TikTok, Bluesky, we are @weeklyshowpodcast. And you can like, subscribe and comment on our YouTube channel, The Weekly Show with John Stewart. Beautiful.
As always, guys, thank you guys so much for the incredible preparation you did on this episode.
Lead producer Lauren Walker, producer Brittany Mehmedovic, producer Jillian Spear, video editor and engineer Rob Vitolo, audio editor and engineer Nicole Boyce, and our executive producers are Chris McShane and Katie Gray. We will see you next time. The Weekly Show with John Stewart is a Comedy Central podcast that is produced by Paramount Audio and Busboy Productions.


