Hello, and welcome to another episode of the Odd Lots podcast. I'm Joe Weisenthal. And I'm Tracy Alloway.
So Tracy, we're recording this March 24th, and of course almost all of our episodes lately have been about the war with Iran. But what's interesting, or what's a little weird, is that just prior to the war, literally days or maybe hours, the biggest story in the world was actually about defense and, you know, the DoD. That's right. So you are referring to Anthropic and, yeah, its disagreement, to put it mildly, with the Department of War. Yeah, exactly. This was the biggest story going right up to the eve of the war with Iran, and of course obviously there was this contract, and Anthropic's technology was used by the Defense Department. So it's not a disagreement about the use of AI per se in war, but the question of the degree to which AI could be used for autonomous weapons systems on their own, without a human in the loop. And surveillance, this is another key element. But you're right. So we've heard this expression, autonomous weapons, pop up more and more, especially in recent days, and I have a lot of questions over what exactly that means, because my impression is the US military certainly has been using AI for some time. Yes, and so we're really talking about degrees here of autonomy, right? And so, yes.
If you think about an autonomous weapon, I think your mind could go full Terminator, and, you know, there's like a murder robot out there that's making its own decisions on which people or places to target. And then you get levels below that, right, where AI is kind of helping humans to come up with strategic decisions. Right. So if there's a missile coming and you have a missile defense system, I don't think you want a human in the loop who's like, okay, here are the coordinates, XYZ, that we think it's gonna hit at this point in time, we think the missile will be here, are you cool with firing? I think everyone's probably okay with that level of autonomy. But I have a feeling that, to your point exactly, a lot of this discussion, and maybe it's core to what Anthropic and the Department of Defense were disagreeing on, I have a feeling a lot of this is gonna come down to definitions. My guess is that there is not one shared agreement that this is an autonomous weapon system and this one is not. Absolutely, and of course there are also questions over exactly how
places like the Department of Defense not only define it, but once they have those definitions, whether or not certain companies trust them to, yeah, totally, actually stick to those policies. Because the US will say, well, our policy is not to surveil our citizens at the moment, so if you're Anthropic you don't need to worry about that. Clearly Anthropic feels otherwise, or says they do. So there are all these really interesting thematic questions that pop up from all of this. Totally. And then there's the question of, okay, here's a technology, and the government says, we believe that we can use this to make the country safer, and what, you're not gonna let us do it, like, private corporation? There are some very interesting questions about the role of corporate power vis-a-vis the government and so forth. Anyway, this is something that has become even more timely, with the reports, even in the early days of the Iran war, about these AI systems having been used perhaps in target selection. But we don't really know; none of the reporting is that clear. I don't think that they're going out and advertising, this strike is exactly how we're using it, this is exactly how we're using AI, and so forth. But this is obviously a huge debate, and war aside, it's totally going to grow, just as AI is going to grow, it seems, in so many different areas. Anyway, I'm really excited to say we have the perfect guest, someone who's been writing and thinking about this stuff for a long time. When we talk to an AI expert, I mark the dividing line at whether you talked about AI prior to when ChatGPT was released. I take a little bit more seriously the people who were in this space prior to November 2022. Anyway, very excited to say we're gonna be speaking with Paul Scharre. He's the executive vice president at the Center for a New American Security, and he's the author of two books related to this. One is the most recent, Four Battlegrounds: Power in the Age of Artificial Intelligence, and prior to that he was the author of Army of None: Autonomous Weapons and the Future of War. He was previously in the Office of the Secretary of Defense, and he's also a former Army Ranger, so truly the perfect guest. So Paul, thank you so much for coming on Odd Lots. Oh, thank you for having me, very excited to be here. Why don't we start?
I mentioned I had a feeling that maybe the definition of an autonomous weapon...
But if I just say to you: what's an autonomous weapon?
So I think you're right from the beginning that there is not a unified definition that everyone agrees on. The Defense Department has their definition; it's written in their policy. I think conceptually the distinction really is a weapon that is choosing its own targets on the battlefield, and that's not where we are today. Right now, today, people are choosing those targets. But it is kind of a spectrum, because we do have examples of weapons that have some measure of autonomy. A good analogy might be self-driving cars, where conceptually, okay, a self-driving car would be one where an AI is driving the car. But if you look at an actual car today, a lot of them have intelligent cruise control, automatic braking, automated self-parking. They have all these automated features that are kind of creeping you in this direction, where the AI is taking over more and more control of what the vehicle can do, and it's actually a pretty similar thing in the military space as well.
So Joe mentioned that had we been having this conversation even a month ago, it probably would have had fewer concrete examples of AI-enabled weaponry, let's say, or strategy. When the Pentagon talks about its advanced AI tools that it's deploying for the Iran conflict, what are some examples that you're seeing right now that are different to, say, maybe a year ago, when we had another Iran conflict? Right, so there are a couple of ways in which the Pentagon is using AI right now. One is narrow AI systems that have been around for over a decade now that do image classification, for example. So this was the military's original Project Maven, almost a decade ago, where they took machine learning image classifiers to sift through drone video feeds and satellite images to identify objects: okay, here's a building, here's a person, here's a vehicle. That's pretty mature technology. Now, what's come out in just the last couple of weeks that's really quite interesting is that in the midst of this huge, messy public breakup between Anthropic and the Pentagon, we found out that in fact Anthropic's AI tools are being used by the US military to help plan the war against Iran. That's obviously a different kind of AI tool: AI like large language models, AI being used to write code, AI agents. And that's being used in a different way. It's really helping intel analysts sift through just the massive amount of data that the US military has. And so you can imagine the problem that the military's facing right now when they're looking at targets in Iran. The US military has flown over six thousand sorties against Iran. The Iranian military architecture is degraded in a lot of ways; they've bombed a lot of targets. There are mobile targets: senior Iranian commanders, mobile missile launchers and air defense systems, and drone launchers. The US military's got to bring all that information together and find out, where are these targets right now, and where is there an aircraft that has the right bombs on it to take these targets out? And that's how AI is being used, to help basically process and understand all that information.
When I think about the description that you gave for that, I sometimes think, could it be that... I don't think that using Anthropic's technology means that they go into claude.ai and say, give us a list of suitable targets for sorties. I'm sure there's a different interface and so forth. But is that a completely ridiculous way of essentially framing the service that AI is providing right now? So the way that these AI tools are being integrated is through an existing system called Maven Smart System, which is built by Palantir, and that fuses all this data together. So you basically have an existing architecture for data management for intel analysts that the military has, that brings together all these different forms of data. You might have satellite imagery, geolocation data, signals intelligence, other forms of information. That's pretty great for intel analysts, but it's also really unwieldy, because how does a human understand all that data and process it? And that's where the large language model tools, whether it's Claude or other companies' models, can be valuable: they can be a way for a human to interact with that data, to basically task a large language model and say, okay, here's a bunch of data I'm giving you, I want you to look for intersections in things. Right? I want you to look for a place where we have satellite imagery and some other form of intelligence that can help identify the location of, you know, some missile launcher, for example. And then humans can look at that. It helps one just, like, find where all these targets are, and then it's helpful in saying, here's this list of potential targets that I have now, and Iran's a really big country, I want to map these to locations for US aircraft at different bases across the region: what are available aircraft, and what are available munitions on those aircraft that can be used to take out those targets, to help build a strike package. So, like, the AI is definitely being used to help understand the battle space and to plan operations, but in, I would say, ways that are pretty narrowly directed by people. It's not quite as simple as, like, just roll this data into a long-context-window LLM and then say, oh, AI, figure it out. People are asking the AI some really specific questions.

So I'm thinking how to phrase this question
diplomatically, but I get that the difference with fully autonomous weapons is, you know, the human as a decision-maker. In the current setup, how meaningful is the human actually? Like, what's your sense of it? Because I'm imagining, if you're an intelligence officer and you're getting reams and reams of data from Iran, and you ask the AI to pick out certain patterns or identify potential strategic targets, how much due diligence are you actually doing on what that model spits out? Because of course the tendency, when a lot of people use LLMs, certainly, is to just accept what it shows you on the screen. And just to add on to Tracy's question, because there's an Iran angle to it, which is that in the early days of the war, we hit that school, and, I'm reading a New York Times report, that was a result of, quote, outdated data provided by the Defense Intelligence Agency. Now, we don't know exactly what that means, but okay, various outputs come out, then what happens? Like, how much is the human layer currently, in terms of, okay, here are targets, here are ships, here's a battleship, this could be plausible? What do you think, or what do you know, about the level of human decision-making that happens between some output and then the ultimate call for a strike on whatever it is?
Yeah, first of all, I think it's a really important question, because it is one of the possible failure modes of how AI is used. You could end up in a place where humans are nominally in the loop, and you could say, well, it's not an autonomous weapon, humans are making these decisions. But if the human is not meaningfully engaged and they're just kind of rubber-stamping some kind of decision, that's not really what we're looking for. So I think that's been a long-standing concern for many years among people worried about autonomous weapons, and I think that's a very real risk with how AI is used. Now, based on my understanding of how the AI technology is used in Maven today, and based on what I've seen of demonstrations of it, because I have seen some demonstrations of this in action, I think humans are pretty involved right now in terms of actually looking at the output from AI and giving pretty specific guidance to the AI systems. I do think there is an underlying challenge that the strike on the school highlights, which is, when you're talking about thousands and thousands of targets, what's the degree of vetting that's going into all of that information, both in the run-up to the war and during it? In this case, that school was a fixed object, and so that's likely something that should clearly have been much more vetted prior to the war kicking off. Someone could have identified that the building that was struck had actually been, at one point in time, part of an Iranian military compound, but you could see, based on publicly available satellite imagery, that the military had moved out of that compound some time ago and it had been converted to a school. And it would appear, based on what's been reported in the Times, that that information had never been updated in this DIA targeting database. Now, I would hope that we'll get more information in the future, in some investigation, about exactly where that went wrong. But I think that does speak to this underlying challenge of how good is the data going into this AI system, and
how thoroughly are people vetting it? And again, in principle, AI might be able to help you with those things, but you've got to use it the right way, and people still have to be meaningfully engaged with this.
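The multi-source cross-referencing Scharre describes, pairing, say, a satellite image with a signals report that points to the same spot, can be sketched roughly as follows. Everything here, the record IDs, the coordinates, and the 5 km match threshold, is invented purely for illustration:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def find_intersections(source_a, source_b, max_km=5.0):
    """Pair up reports from two independent sources that fall within
    max_km of each other: the 'intersection' an analyst would then vet."""
    matches = []
    for a in source_a:
        for b in source_b:
            d = haversine_km(a["lat"], a["lon"], b["lat"], b["lon"])
            if d <= max_km:
                matches.append({"a": a["id"], "b": b["id"], "km": round(d, 1)})
    return matches

# Hypothetical, made-up observations from two independent sources
imagery = [{"id": "IMG-1", "lat": 35.70, "lon": 51.40},
           {"id": "IMG-2", "lat": 29.60, "lon": 52.50}]
signals = [{"id": "SIG-9", "lat": 35.72, "lon": 51.42}]

print(find_intersections(imagery, signals))
```

The point of the sketch is the division of labor being described: the machine proposes candidate intersections across data sources, and a human vets each one.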
Why don't we back up for a second? Tell us about the work that you've done in this area, really several years ahead of the curve, thinking about this stuff, planning for this stuff. Give us a little bit of your background and what got you on this train, again, several years before ChatGPT. Yeah, so really over a decade ago now, around, say, 2011, I led an effort inside the Pentagon, when I worked at the Office of the Secretary of Defense, on developing the Pentagon's policy on the role of autonomy in weapons, which is still in effect today, in fact. And that was well before where the military is now in terms of integrating AI tools. I mean, these types of large language models just didn't exist at the time. But the military had kind of woken up to what I would call this accidental robotics revolution during the wars in Iraq and Afghanistan, where the military deployed thousands of air and ground robots: drones in the air, and ground robots for defusing bombs. And the military was starting to think, well, where is this going in the future? And one of the things that everyone could see would be valuable would be having more autonomy in these systems, the ability to not be totally reliant on a human remotely controlling them, which was really the case at the time. But that raised all these obvious questions about, well, how much autonomy should they have, and what are the legal and ethical implications of that? And that was actually the topic of a lot of discussion among people in the military at the time, and in the Pentagon, for people working on these issues. And so that ultimately led to that policy directive that's still in place on the role of autonomy in weapons. And then when I left the government, I continued to work on this topic, as we've seen discussions internationally about regulations, and as we've seen the technology evolve in really amazing ways, but also ones that have risks, with artificial intelligence.
So when you were doing that job, I get that you were on the policy side, but did you ever see anything on the contractor side similar to what we're seeing with Anthropic right now? Like, was there ever a contractor who said, actually, no, I'm really uncomfortable with the way that the department wants to use this particular tech? Not at that time. Now, a few years later, after the US military launched Project Maven, there was a big dust-up when it came out publicly that Google had been a part of Project Maven, and a number of Google employees signed a letter protesting that, and Google eventually discontinued their work on Project Maven. And, you know, it's not an exact replica of what's going on here, but there are certainly some similarities in terms of a disconnect between how some people in the AI community are thinking about how their technology ought to be used in war, and how the military is thinking about it. And I think part of that, like, this is an underlying challenge: AI is really different than a lot of traditional military technologies, because it's coming out of the commercial sector. It's kind of like the opposite of stealth technology, which was invented in secret defense labs and doesn't have a lot of commercial applications. AI has all of these different applications; it's not being invented by the military, the military's having to import it in. And there are a lot of debates about how AI should be used, you know, in the military and more broadly in society.
Actually, on that note, I think this is really interesting, and definitely a pivotal point in, I guess, the history of the military-industrial complex. But why can't the US government, with all its resources, actually develop AI in house, and just avoid the seeming complication of having to deal with a commercial enterprise? Part of it is it doesn't have the technical skills. The AI scientists and engineers are rare, and there's fierce competition for talent in the AI space, and so the military just, like, can't buy that talent; it doesn't have it. And the government spends a lot of money, hundreds of billions of dollars annually, on defense. But we've seen, actually, in the last few years that private enterprise is able to mobilize massive amounts of capital towards building data centers and training AI models, and part of it is because the commercial applications for the technology are much bigger than the defense applications. And so for a lot of these tech companies, there's some... at least, maybe not in this particular instance, but in the past there could often be some prestige associated with saying, oh, you know, the Air Force uses our AI system, or the Navy uses our technology. But the defense sector is actually kind of small for them as a customer. I mean, the dollar amount that's been talked about publicly for the Anthropic contract is 200 million dollars. That's not a lot of money for these AI companies. Yeah. And so I think that, actually, we've seen the defense sector has struggled to just keep pace with the amount of investment that's needed in this space.
Seriously, I think it's a good question. And then you remember, well, the government couldn't build a good health care website to sign up for health insurance. And I hate to bring that up, because it's old, but it's true, right? So it's like, are they going to build a world-class LLM when the question of whether a government can build a good unemployment insurance website, which we've done multiple episodes on, the answer continues to be no? I do find it fascinating, however, your point about how there is this novelty. It is impossible to imagine, say, Lockheed Martin inventing a technology and then saying, no, you can't use it, because Lockheed Martin's entire raison d'être, right, is building technology for the government. It isn't conceivable what that would be. But it is sort of novel when you're getting these defense technologies... and, you know, Google was also an example. Google obviously had technology that did not originally serve a purpose of defense; we saw, we remember, the employee revolt. Let's talk more about that disagreement, though, between Anthropic and the Department of Defense. In your mind, where does Pete Hegseth want to go with this technology, and does that deviate from some of the policies and the directives that you were working on when you were working on this stuff?
So what's kind of crazy about this whole dispute, particularly on the issue of autonomous weapons, is that literally everyone I've spoken with has said that there's no intention by the military to use AI to make fully autonomous weapons today. Okay? And anybody that's actually worked with a large language model, these kinds of chatbots, whether it's Claude or Gemini or ChatGPT, knows that if you use these to draft an email, you need to double-check it. Like, in no way, shape, or form are they reliable enough to make life-and-death decisions, and I don't think the military actually wants to do that. What's at dispute here is a more fundamental disagreement about, well, who sets the rules? And so the origin of this, really, was that when the Pentagon came out with a new strategy for AI in January, one of the things in their strategy was that, going forward, they wanted their contracts with AI companies to allow the military to use their tools for any lawful use. Basically, look, anything legal, we want the ability to do it. And that has conflicted with how a lot of these tech companies have been thinking about their AI tools. They're very nervous, many of these companies, about harms from AI; they're conscious of these risks. And so a lot of them have various use policies in place: you can't use the AI, you know, to launch offensive cyber attacks, for example. That's the kind of thing that, actually, the government might want to do. And so this was really the rub with the government; it was, like, who sets the rules, rather than necessarily, like, a near-term question of fully autonomous weapons. So what we've already seen is Anthropic has this disagreement with the government, and then OpenAI steps in and raises its hand and says, okay, Anthropic doesn't want to do it, we'll do it happily. Does this just leave us in a situation where it's sort of a race to the bottom, right? It's like, the lab with maybe the least amount of safety concern, or the least amount of reputational concern,
is able to do this, and so we still wind up in a situation where the government is using AI? Well, I think what's unfortunate here is, when you think about what would be optimal for the government, I think it would be ideal for the government to have access to this technology, and in fact access to all of the best-in-class models available, because they are good at slightly different things sometimes, and it's much healthier for the government to have access to a number of different providers, so that there is healthy competition in the market and you don't get locked in with one vendor. But also, if the AI scientists are saying, hey, it's not reliable for this, you want to listen. Like, that seems like a thing you want to hear them out about, right? And I think, in order to use AI in ways that actually are effective for the US military, we're gonna need a healthy dialogue between the AI community and people in the military profession about what the technology can and cannot do. And I think it's unfortunate that we've seen that dialogue break down in such a dramatic way over this dispute. Just going back to the idea of who actually makes the rules: you mentioned earlier that, you know, you can't use Claude to illegally hack into a system. Supposedly it is unable to do that; it has, like, a kill switch within itself that prevents it from doing that. So on that topic, could you not just hard-code some of these systems and say, you're not gonna be able to be used for domestic surveillance of Americans, or for war crimes?
Yeah, so this is where it gets a little more technical. It has to do with some of the ways the company would be providing their technology to the government. So there are a couple of different ways in which an AI company could put safeguards in place to make sure that their model's not being abused. One is training the model itself to refuse certain requests. So if you ask the model to do something, it's just gonna say, like, I'm not gonna do that, that's not consistent with the guidance that I've been given, and the model's been trained to give you that response. Another way is that the company can put classifiers on the input and/or the output of a model, where the model might give you an answer, but then there's, like, another AI system that's checking that answer, or checking what you asked of it, and saying, no, that's unacceptable. I've actually run into that myself in my own research, because, like, the nature of the things that I work on are security things, and I've had situations where I asked Claude to help me understand an issue, and Claude starts to generate a response and then it gets deleted.
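The two safeguard layers Scharre describes here, a model trained to refuse plus an independent classifier that can withhold an already-generated answer, can be sketched as follows. The keyword lists and canned responses are invented stand-ins; real systems use trained models, not string matching:

```python
# Hypothetical topics the deploying company has decided to block.
BLOCKED_TOPICS = ("offensive cyber attack", "domestic surveillance")

def model_answer(prompt: str) -> str:
    """Stand-in for the underlying model, trained to refuse some requests."""
    if "exploit" in prompt.lower():
        return "I can't help with that."
    return f"Here is an answer about: {prompt}"

def output_classifier(text: str) -> bool:
    """Independent check on the generated answer; True means 'allow'."""
    return not any(topic in text.lower() for topic in BLOCKED_TOPICS)

def guarded_chat(prompt: str) -> str:
    answer = model_answer(prompt)
    # If the classifier rejects the draft answer, the user never sees it:
    # the "generated, then deleted" behavior described in the episode.
    if not output_classifier(answer):
        return "[Response withheld by safety classifier]"
    return answer

print(guarded_chat("the history of radar"))
print(guarded_chat("planning an offensive cyber attack"))
```

That second, independent check is what produces the experience described above: the model visibly starts answering, and then the response disappears.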
Yeah, I think it's really interesting to see. And then the other way, and Anthropic actually talked about this in response to countering some use of Claude by Chinese hackers who were using it for cyber attacks, is that the company monitors use, the things that people are doing. And so if people are doing things that seem suspicious, maybe they're logging in from an IP address that's known to be associated with cyber criminals, or a hacking group that tries to find ways to get around some of these protections, the company can also find ways to try to catch that. And so there are a couple of different ways to do it, which might not all be in place if you're thinking about military use, where, depending on how that relationship is structured between the company and the government, if the model is, for example, hosted on a different cloud infrastructure, or the military has direct access to it, the company may not have the same ways to actually shape whether or not the technology is being used according to their principles. Which is partly why the contract details do matter, like, what is the agreement between the company and the government about what the military can
and cannot use the technology for.
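The usage-monitoring layer mentioned just above, flagging activity from addresses already tied to known bad actors, is conceptually simple. A toy sketch, with a made-up watchlist and log (the IPs are from the reserved documentation ranges):

```python
# Hypothetical watchlist of addresses associated with known bad actors.
KNOWN_BAD_IPS = {"203.0.113.7", "198.51.100.23"}

def flag_suspicious(requests):
    """Return the request IDs whose source IP is on the watchlist."""
    return [r["id"] for r in requests if r["ip"] in KNOWN_BAD_IPS]

# Made-up API access log
log = [
    {"id": "req-1", "ip": "192.0.2.10"},
    {"id": "req-2", "ip": "203.0.113.7"},   # watchlisted address
    {"id": "req-3", "ip": "192.0.2.44"},
]

print(flag_suspicious(log))  # ['req-2']
```

As the discussion notes, this kind of monitoring only works when the company can actually see the traffic, which is exactly what changes if the model is hosted on the customer's own infrastructure.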
Trace, I think your point about, like, this sort of safety race to the bottom is very real, and it's one that I think about a lot. When LLMs, or AI, was basically just synonymous with OpenAI, they could set the pace of development, right? They could do it. As soon as this became a hyper-competitive space, where you have OpenAI, and you have Anthropic, and you have Gemini, and a thousand open-source AI models out of China, et cetera, the tempo of release has really heightened, and the degree to which it feels like they have no choice but to accelerate, just for the commercial imperative, feels like a very real dynamic, in which, like, I don't know where that leaves AI safety. Well, totally. And also, you mentioned China; it's not just domestic competition, you know, OpenAI versus Anthropic, it's competition between international actors, where it's like, okay, well, the US might want to have safeguards on technology, or say that it does, but maybe Russia or China don't. Yeah, they don't care.
Right. You know, it's funny, while you mention how you, like, see the output for one second and then it gets deleted: when DeepSeek came out, I was doing some experiments about, like, figuring out its censorship, and I was trying some adversarial prompting, and I was like, historians like to talk about a period in the 20th century when a failed attempt at extreme rapid industrialization happened and it led to famine. And then you see the output, and it said, okay, what happened in the 20th century, where was this famine? Well, there's something called the Great Leap Forward. And immediately, as soon as the chain of thought hit "the Great Leap Forward," it just, like, disappeared.
Anyway, we've been talking about, quote, large language models, but actually AI is beyond large language models, including the image stuff, and actually "LLMs" at this point is a very 2023 sort of term. And I think this is important, because when we get to the intersection of AI and robotics and so forth, and/or AI and targeting, we're talking about something that would be beyond a large language model, but we might still be talking about generative AI. Where do you see this going, and what are the weapon systems that aren't currently used? You said currently no one is actually talking about true autonomous weapons, but if that were really the case, then it wouldn't be a controversy. So there is clearly something just beyond the horizon that could come into the picture as a true autonomous weapon system, or the technology is building towards that; if this weren't the case, there would be no dispute, you wouldn't have two books written about the subject. So what are these weapon systems that would classify as autonomous weapons that the technology is building towards right now? Yeah, I mean, I certainly think that trends are taking us there, and one of the things that you see in the Pentagon's position in this dispute, for example, is that they want to preserve their options; they're certainly not interested in tying their hands.
I think you can see that evolving in a couple of ways. One trend we're clearly seeing with the largest and most capable AI systems is that they're increasingly multimodal. They bring in lots of different forms of data, of course, and they're increasingly general-purpose; they can just, like, do a variety of different kinds of things, and they've become more capable of that. And so that's, like, one way in which you could see AI being used in ways that might sort of slowly pull humans out of the loop, where instead of a person giving an AI, like, really narrow tasks to do in a planning process, maybe the AI is able to take on more, bring in more data, take on more, like, sophisticated, longer-term tasks. And we're certainly seeing this in other areas, like coding, where the task length that an AI system can do is growing exponentially over time. Another sort of way that we might see this look is we see a network of AI agents that are interacting with different pieces of data, doing different types of things, and the net effect of that is that maybe humans are again sort of, like, nominally looking at these targets, but not actually approving them in some meaningful way. And then there's, like, a more separate... I would almost think of it as an embodied form of AI and robotics, yeah, which could be a drone or a munition or a robotic system that has some kind of onboard autonomy. That might be partly a distilled model, so that it can operate at the edge, on lower compute, on this actual munition or drone, or it might be some hybrid system that has partly machine learning but also just a lot of hand-coded code, more like an expert system, that's going out into the battle space and hunting targets directly and attacking them. Something kind of like the low-cost drones that we're seeing Iran launch, but ones that can loiter and identify targets and attack them. Now, that doesn't exist today?
We don't have drones that are just loitering out there, so that when something pops up, the system says, this looks like a target. That actually doesn't exist currently, as far as you know?
Well, I mean, nothing widespread. There have been some narrow examples historically, dating back to the '80s in fact, of loitering munitions that could search over a wider area and would cue off of radars. Radars are what the military would call a cooperative target: when they're emitting in the electromagnetic spectrum, if you know the signature of the radar you're looking for, you can see it and just home in on that radar. Now, if they turn off, it's different; then they're harder to find. But there have been some examples. A system that the U.S. Navy had in the '80s called the Tomahawk Anti-Ship Missile, not the same Tomahawk cruise missile that the military is using now, a different one, was designed to fly a search pattern and hunt Soviet ships. There was an Israeli system called the Harpy drone that was designed to go after radars and would loiter for a period of time. But these loitering munitions have not really been in widespread use by militaries.
We've got to invent one of those high-pitched alarms to deter the loitering drones from hanging out outside targets.
I guess, I mean, we have electronic jamming. Yeah, that's essentially it.
Okay, so when I think about moving towards more autonomous weaponry,
I think about bots basically interacting with bots at that point and then I think back to previous examples of bots interacting with bots
And there are numerous ones where things tend to go off the rails. They just start debating the meaning of life, or they start talking in a language that no one understands except them, stuff like that. Does the possibility of undesired escalation go up the more we move towards fully autonomous weaponry?
I think that is a very serious risk. The mental model that I have is something like flash crashes in financial markets, due to the interactions of different algorithms that are executing trades, where you get these emergent properties of how the algorithms interact in the market. And it's a competitive environment; companies aren't going to share the details of what their algorithms are doing, and you can just get strange behaviors. Now, the way that financial markets have dealt with this problem is that regulators have installed circuit breakers that take stocks offline if the price moves too quickly. There's no referee to call a timeout in war. And so I think, particularly in cyberspace, one could envision a future where that is a risk, where things are happening at machine speed, and you have
autonomous offensive cyber operations. You need to defend against that, you need some measure of autonomy on the defensive side to defend at machine speed, and you could get situations where you get weird interactions that might escalate the conflict. It could also happen between drones interacting in some kind of crisis situation. Now, in a situation where there's a big shooting war underway and people are already attacking each other, it might be less of a concern, although you could still worry about escalation geographically, bringing new countries into a conflict, or maybe attacking really sensitive sites tied to nuclear command and control that you'd rather not go after. So I think that's a very real risk when we think about how this technology might be employed.
What about AI in really difficult ethical questions, strikes where we know that civilians, for example, are going to be killed? Which happens all the time in war, and presumably everyone tries to minimize it.
But war planners will find some level acceptable, what they call collateral damage. Is AI playing a role, or do you expect it to play a role, in some of these strikes that may be gray areas?
I think you can envision ways that AI could be used that would make warfare more precise, more humane, and more ethical, and ways that it could be used that would do the opposite. So for example, if you had an AI system that could look over all this targeting data and then identify whether a strike, using munitions of a certain size, is within a certain distance of protected targets, whether that's schools or hospitals or critical civilian infrastructure, and say, hey, warning here:
“You should not carry out the strike or it needs a higher level of approval or”
maybe you should use smaller, more precise munitions. That would be a really beneficial use of AI. Particularly when you're talking about a military campaign that hits a lot of targets in a short period of time, that could be really valuable and may reduce civilian casualties. Now, a risk in all of this is that you can end up in a world where humans are just less engaged in the process, and so there are both mistakes that humans miss, and humans who just don't feel as morally responsible. Which I think is a really tricky thing to think about morally, because on the one hand, as a democratic society, we make a decision as a nation to go to war, but it's a very small number of people who have to carry that burden. And you could say, well, look, what's the benefit of someone having PTSD years after a conflict, haunted by something that happened? That doesn't seem great. Maybe we could reduce that. On the other hand, if we fought a war and nobody felt morally responsible for the killing that occurred, that doesn't seem good either, and it could lead to more suffering and more civilian casualties in war.
So I think that's certainly a concern when we think about how to use the technology.
Yeah, this is very Ender's Game-coded, right, where you have someone who's basically playing what feels like a video game and wiping out entire civilizations.
And they think it's just a video game, just an exercise, but it turns out it's actual warfare. And we're seeing some degree of that in the way that the Department of War is portraying this conflict so far. It's very video game.
Yes, especially the public presentation.
Yeah, literal animated GIFs of video games. Exactly. So Paul, you mentioned the word circuit breaker, and
circuit breakers are nice things to have in markets. I think they'd be even nicer things to have in armed conflict and war. Is there any possibility that you could design something like that for a major conflict?
I think it's possible at a technical level to figure out how you would do that, where you would put protections on your side in the military, and what you might even do cooperatively with the enemy. The challenge is how you avoid, well, what we were talking about earlier, a race to the bottom on safety. And we're seeing this in the private sector between the AI companies, in the rush to get products out to market. I think it's especially hard in the military space, where countries are investing in the military because they're worried about what some other adversary might do, and they want to get a leg up on them.
So it's not that cooperation in the midst of conflict never happens. It does, and countries have agreed to take certain weapons off the table, chemical and biological weapons for example. It doesn't mean they're never used, but for the most part countries have agreed that they're not going to use them. But those examples are pretty rare, and it's pretty hard to do. And so that dynamic is really challenging: how do you find ways to cooperate with your enemies to avoid some of the biggest dangers?
So I think this is the last question for me. You know, you mentioned that drones are a kind of robot, and there are other robots that have been
in existence in either national security or police work for a while. I think there are robots on the subway sometimes.
Yeah, but I don't think they really do much.
The robots at the grocery store end up chasing me while I'm trying to find something.
Yeah, the ones that sweep the floors and stuff. Yeah, and there are those robots. I think Eric Adams did a contract with some company. They were doing like subway
Of course. But these are really different. AI as we talk about it and robots are two different technological trees,
but they are going to merge, and there's the possibility of a more ultimate merger. Do you foresee a world in which essentially we don't have human soldiers, and wars are fought over who has the most advanced autonomous robots? We know China is investing a lot in humanoid robotics. Do you foresee a world in which that is the nature of a ground invasion, that it happens with robots of various sorts? Talk to us about how far that could go.
Yeah, so look, will we see robots used more and more? Absolutely. The long arc of technology in war, from the first time someone picked up a rock and threw it at somebody else, has been towards greater distance between adversaries, moving up through bows and arrows and rifles and intercontinental ballistic missiles. I think robotics will be the next evolution of this trend of finding ways to find the enemy and strike the enemy without putting yourself at risk. And there's certainly a role for robotics out on the battlefield. But I think a vision of future wars as just robots fighting robots, with no humans at all, is not realistic, for a couple of reasons. One is I think militaries are going to need people relatively forward deployed to execute command and control for robotic systems. The U.S. military right now can fly drones remotely from the United States in a relatively uncontested environment. Against more sophisticated adversaries who could jam your communications, and we see, for example, a lot of jamming on the front lines in Ukraine, that's one of the ways you go after these drones, then you need people close by, because it is easier to have shorter-range protected communications. When you go out to longer distances, it's much harder to do. So I think you need people relatively close for that reason.
“I think if you want to control territory”
you have to put people there eventually, to get out of a vehicle and walk around and control it. But I think the other reason is maybe a little dark, which is that, realistically, in order for wars to end, there will have to be some human price that's paid. I think that's an unfortunate reality: if it's just machines that are being destroyed, we may not get to the place where one side or the other is willing to sue for peace. And I think, unfortunately, war is likely to involve people and human costs for a very long time.
I have one more question as well, and I guess it's a thought experiment. But if we think back to sort of pivotal moments in military history and their intersection with technology,
one of them that comes up is the Soviet officer who decided not to press the button in response to a warning of a U.S. launch, and thereby, you know, supposedly saved the world from nuclear disaster. Would that happen in a fully autonomous military environment?
I mean, today it would still happen, because there are people involved, right? So in this incident, Stanislav Petrov, a Soviet lieutenant colonel, is sitting at a terminal and gets this warning that there's a ballistic missile launched from the United States against the Soviet Union, and then another missile, and another, until it's five missiles coming in. And the thing that's interesting about this is, when Petrov talked about it afterwards, and we could hear what he said because we're all alive, because he made the right decision, he talked about how he had a funny feeling in his gut. He knew that the Russians, the Soviets rather, had just deployed a new satellite-based early warning system to detect U.S. ICBM launches, that it was new, and he knew that a lot of Soviet technology just didn't work that great at first.
So he was a bit skeptical. Turns out it was in fact faulty: it was detecting the reflection of sunlight off the tops of clouds, and the system was identifying that as a missile launch, and that's what it was reporting. And he went and called the early warning radar stations, the ones that would see these missiles come over the horizon: no, there were no missiles. So he reported up the chain that the system was malfunctioning.
I think the scary question here is, if that was an AI, what would the AI have done?
Yeah, it would have done whatever it was programmed or trained to do. And obviously we're seeing more general-purpose AI systems like large language models that have the ability to bring together more information, to understand context better, to have just a more contextual understanding of the questions you're asking of it. But it still doesn't know the stakes of a conflict. It still doesn't know, at some visceral level,
“What the consequences are and so I think that's a strong compelling reason why we need to have humans involved in these decisions”
Even as the AI becomes more capable There's still going to be things we want humans to do because humans understand why it matters
I started the conversation by mentioning that you know
it's not very controversial to have an anti-missile system fire a missile when there's one coming in. But that could be wrong, and you want to make sure that it is in fact a missile and not a civilian airliner or something like that. So even there, where it seems like the canonical example where you'd just want to have the missile system go off, you would want human safeguards and human oversight and human understanding of the system, such that it is in fact shooting down a missile. Paul Scharre, fascinating conversation. Really appreciate you coming on Odd Lots and talking about your work.
Thank you, I really enjoyed the discussion.
Thanks so much, Paul. That was depressing and fascinating.
Yeah, both at the same time. No, no, it was great. Thank you so much. Yeah, thanks for coming on. I kind of got choked up at the end thinking about that decision that saved humanity. It's a crazy story. It's one of those stories where you wonder, why doesn't everybody know that person's name?
I mean, I couldn't remember it either.
But shout out to another podcast: if you want to learn more about this, Dan Carlin's Hardcore History has at least one, possibly two episodes on near misses with nuclear disaster. Very good to listen to, if also terrifying.
You know, there's another point in that exact story that I think is really interesting, and it's something I've been thinking about a lot across AI, because there's something similar between humans and AI, which is that there is definitely a gap between what we know and what we can articulate. And this is certainly true with AI, right? So the bot makes some decision, or it determines something.
That does not mean it's going to be able to spit out in words how it arrived at that decision. But that's true for humans as well. And so there's the idea that, okay, maybe you do get funny feelings about things. Or take, you know, the fact that we're still pretty good at determining the difference between AI-generated text and human-generated text. We can do it, often; I mean, we still often get it right.
But could we write down exactly what we saw, what we understood? There is that gap. And when we're talking about life-or-death decisions being made, it is scary to think about that role of instinct that we can't articulate being taken out of the decision.
Well, I think also technology is very good at pattern recognition, right, at responding to patterns and preset paths. It's been programmed to do certain things. And I think in a war environment,
That's one of the most uncertain environments that you can possibly imagine
“Yes, and so you have to think that there should be some element of flexibility in your response”
But I don't know how you actually encode that into a thing that runs on rigid numbers and lines of code.
The other thing I was thinking about was the Anthropic situation, and just how new that is from a sort of military-history perspective, in the sense that here we have this really important, pivotal piece of technology that hasn't come out of actual military demand. Right, to Paul's point, it's a commercial product; its commercial uses are arguably a lot more profitable than its military ones. And so seeing that now interact with the Pentagon and the Department of War is really interesting. It's been flipped, right?
Yeah, the closest example that actually comes to mind is one from fairly recent history, and that's Starlink. Oh, yeah. Of course, that was developed for commercial internet purposes, but it played a role in Ukraine and so forth, and at one point, if I recall, there was a tension point about the degree to which it could be used. Do you think that is sort of an interesting parallel here? The other thing, and we didn't get to this, and this is gonna be a little cynical,
but I think it's right, which is that there is, I believe, another element to the Anthropic situation, which is that Anthropic is the last big lib tech company, or perceived as such...
And I don't think Anthropic is totally part of that, but I think they're still sort of lib-coded.
I also think that's incidentally why a bunch of people who work in media end up using Claude, even though the models are all kind of the same. Like, I do think there's something there. And they have this thing where they say, we're not gonna ever have ads. And we know that Andreessen Horowitz, Marc Andreessen, just talked about how ads are good, ads democratized the internet, ads enabled the internet to spread to everyone. There are some other politics at play, because again, from my understanding, and it would take a lawyer, I don't think that the agreement that OpenAI signed was that different, at least, from the agreement that Anthropic had; there's probably a little bit of difference. I do think there are some other politics at play here; perceptions matter. But to Paul's point, maybe nobody is talking about fully autonomous weapons right now, but it can't be long.
“And I think this is going to be a real tension sooner rather than later. Can I say one thing and I'm gonna be slightly facetious”
But also not.
Can you be slightly facetious? You're going to be facetious.
Which is: I have a solution to modern warfare. Don't do it.
Okay, but for real.
No, for real. If we're just gonna have bots fighting, and it's gonna cost a lot of money and result in people's deaths, every country should have to build its biggest, best, most technologically advanced robot and just have them fight it out, gladiatorial-style.
Yeah, and my twist is to Paul's point about war always having to be painful. Yeah, in some way
Everyone in that particular society has to be engaged and dedicate some amount of time or money to building that particular robot, and you just have to iterate on the robot forever until you feel comfortable having them fight. And that way everyone shares the pain, but without the loss of human life. Am I right?
I don't think so.
“Well, I think you should write a book. No. I think you should write a sci-fi book. All right, so we leave it there”
Let's leave it there. This has been another episode of the Odd Lots podcast. I'm Tracy Alloway, you can follow me @TracyAlloway. And I'm Joe Weisenthal, you can follow me @TheStalwart. Follow our guest Paul Scharre, he's @paul_scharre. Follow our producers: Carmen Rodriguez @carmanarman, Dashiell Bennett @dashbot, and Kail Brooks @kailbrooks. For more Odd Lots content, go to Bloomberg.com/oddlots, where we have a daily newsletter and all of our episodes. And you can chat about all of these topics 24/7 in our Discord: discord.gg/oddlots. And if you enjoy Odd Lots, if you like it when we talk about the future of autonomous weapons, then please leave us a positive review on your favorite podcast platform. And remember, if you are a Bloomberg subscriber,
you can listen to all of our episodes absolutely ad-free. All you need to do is find the Bloomberg channel on Apple Podcasts and follow the instructions there.
Thanks for listening.
This is Tom Keene inviting you to join us for the Bloomberg Surveillance podcast. It's about making you smarter every business day. I'm Paul Sweeney. We bring you complete coverage of the U.S. market open. We cover stocks, bonds, commodities, even crypto, all the information you need to excel. And I'm Alexis Kristoffer. Bloomberg Surveillance also brings you the analysis behind the headlines. We do that through conversations with the smartest names in economics, finance, investment, and international relations. We do all this live each and every weekday, and bring you the best analysis in our daily podcast. Search for Bloomberg Surveillance on Apple, Spotify, YouTube, or anywhere else you listen. On the East Coast, listen at lunch, and on the West Coast, listen as soon as you wake up. That's the Bloomberg Surveillance podcast with Tom Keene, Paul Sweeney, and me, Alexis Kristoffer. Subscribe today wherever you get your podcasts. Bloomberg Surveillance: essential listening each and every business day.


