Fresh Air

America's first AI-fueled war is unfolding. How'd we get here?


'Project Maven' is the story of how the U.S. spent a decade building an AI warfare system that's now being used in the war in Iran. Author and Bloomberg journalist Katrina Manson reveals the people be...

Transcript


This is Fresh Air. I'm Tonya Mosley.

America's first AI-fueled war is unfolding right now.

Over the last three weeks, the US and Israel

have launched strikes against Iran, hitting 1,000 targets in the first 24 hours alone, nearly double the scale of the 2003 shock and awe campaign in Iraq. The system helping to enable much of this

is called the Maven Smart System, and running inside of it is Claude, from the company Anthropic, an AI model that millions of people interact with every single day. On the very first day of the war, a US Tomahawk missile

struck a girls' elementary school in southern Iran, killing more than 165 people, most of them schoolgirls.

A preliminary military investigation

found the strike likely resulted from outdated intelligence. And while the role of AI has not been confirmed, the Pentagon is still investigating whether Maven played any part. At the center of this story is a little-known Marine colonel named Drew Cukor,

who spent decades fighting to bring AI to the battlefield, and whose obsession has quietly changed the future of war. My guest today has been reporting on Cukor for years, and on how we got here. Katrina Manson is an award-winning Bloomberg reporter

who covers cyber, emerging tech, and national security. Her new book is Project Maven, a Marine Colonel, his team, and the Dawn of AI Warfare. Katrina Manson, welcome to Fresh Air. Thanks for having me.

You have been reporting on this Maven Smart System for a couple of years now, and now you're watching it used in a real-time war. Take us a little bit into how the Maven Smart System actually works, and specifically what Claude's role is inside of it.

How do those two things work together? If you imagine looking at something like Google Earth, you begin to have an idea of the display that US military operators will be looking at. Some people have described this to me as windows for war

or an operating system for war. It's essentially a digital map. What makes that map special from the US military point of view is the number of intelligence feeds that are coming into it. At one public event, it was made clear

that it is more than 160 separate intelligence feeds. Now, to crunch that data, they're using digital data analytics, but they are also using a few other tools that rely on AI. There's computer vision to analyze some of the objects that are showing up on the maps

that could be potential targets, also where US forces are. And then Claude is doing something different that is not computer vision. That is an AI tool based on a large language model that can crunch data.

And what I've been told before is that Claude, and LLMs inside the Maven Smart System, help speed processes.

So the sorts of processes you need to get sign-off on a target,

everything short of sign-off Claude can help with. And it can also help plan courses of action, help pair weapons to targets. It can assist everything that the US military needs to do when it comes to making a decision,

short of actually making the decision.

On the very first day of the war,

this missile struck the girls' school, and there is some reporting about this case that the United States was likely responsible. There is no indication yet that AI had a role to play here. But the coordinates they used were more than a decade out of date.

What does that specific incident tell us about some of the lapses in data keeping, and potentially what could be a challenge for AI models as they're used more often in war? - Adherents of AI warfare regularly

emphasized to me how important accountability is.

In every war, there are bad strikes. Whether the US is prepared to investigate it and make public what has gone wrong in this case, if the US is responsible, will be a real test for those claims of accountability.

AI is meant to make warfare more auditable. Now, whether this is a case that the school was on a targeting list that predates AI and wasn't updated and whether AI drew from that targeting list, all of that will be important to reveal.

Any system, particularly one that uses AI,

will only ever be as good as the data that feeds it.

And if they are drawing on a database that is old,

the AI, if it's set up that way, can't do anything about that. And on numerous occasions, I've found examples of poor, weak, or flagrantly erroneous data that have fed systems.

If this is a US attack, it won't be the first one

against a mistaken civilian target. In 1999, the US struck the Chinese Embassy in Belgrade. In that case, the CIA came out in public and said, we had the map labeled wrong. And if a map is labeled wrong, which we don't yet know

is the final analysis of what happened here. But if that girls' school was in a database, no AI can beat that unless you start using AI in other places. If Google Maps, for example, showed that it was a girls' school,

it would be quite simple to draw from that information, potentially, if there were a way to analyze other location data that might indicate

there were children in the area.

And an additional factor will be, where are the checks and balances on an old database, and what role could AI play in checking work and in cross-referencing other data? If indeed the girls' school is labeled

on something as accessible as Google Maps. - I want to talk about some news this week that is coming to bear because of a court case. The Pentagon blacklisted Anthropic for refusing to allow Claude to be used

in autonomous weapons. And within hours, OpenAI stepped in. They then publicly announced the exact same restrictions Anthropic was punished for holding. Is that an accurate way to describe this?

- It's one way it's been described, but not in my reporting. The OpenAI deal, I reported, is slightly different. It's not clear if it maintains exactly the same safeguards as Anthropic.

And Anthropic also, of course, it's really important to frame this,

really leaned in to working on classified cloud for the Pentagon.

They were the first AI company to decide

to offer AI on a classified platform. And from my reporting, it is not possible for them to know every use case, every specific example of the way their AI tool is used in classified operations.

And the classified level is where America fights its wars. So that decision to lean in to what the American military calls warfighting was already a very significant decision. OpenAI had not taken that decision. It was not on classified cloud.

It now will be. It does seem to have allowed more open acceptance of how its tool could be used. But I think we'll have to see, because it's a very politicized divide

when you have the president calling Anthropic left-wing nut jobs, calling them a radical left company. Even though they were working on classified cloud, clearly there's a technical debate, there's a policy debate, but there is also a political flavor to this falling out.

Can you explain, maybe in layman's terms, how a classified cloud actually works? If you imagine the cloud that we all use for, let's say, our email, or for documents that are loaded up into the cloud,

the same can be done for military data, and it can be accessed and shared. Now, for the US military or for the intelligence services, they don't want that information to get hacked. And so there are a number of safeguards

that are introduced that can uphold a higher classification.

So information that the US system deems secret

or top secret or compartmentalized information that only a few people can access, even at that top secret level. And each has its own network that can in theory secure that information so that it can't be hacked,

penetrated, ruined in some other way. Of course, multiple times in history that's gone wrong; all the time those systems are under strain from hackers, potentially also from insider threats. So the US is constantly trying to safeguard its information.

- I was reading about some researchers

at King's College London who recently put

Claude and ChatGPT and Gemini

into simulated nuclear crisis scenarios.

And 95% of the time the AI reached for tactical nuclear weapons as a strategic option. You have spent years inside of this world of these people who are building these systems for war. And I just, I'm curious, what do you think

when you hear a finding like that? - Well, I also reach for the word terrifying. Clearly, that kind of tool is one that you really need to put safeguards around. So the US has said it doesn't want to put AI

into the nuclear controls, so that's one step. But there will be pressure on that system, and decision making is already speeding up. But I've certainly spoken to US military advisors who've brought me similar information.

They emphasize that AI can be escalatory, as you just described, and also, almost a more problematic issue, sycophantic: there is a tendency to agree with the person asking the question.

So shall I go to war? Would it be a good idea to launch this missile? If the question is asked in that way, assuming an intent or an action, there is a tendency within AI also to buttress that opinion.

So as a check on opinion forming,

you need to consider AI in a really careful way.

Now the US military knows this. This was a very advanced computer scientist telling me this. And he had been an advisor to US Central Command, the very command that is now using these chat bots. What he and others have told me at the National Geospatial-Intelligence Agency

is that they are aware of these risks, and they are trying to add in checks and safeguards, what they call under the hood. So if a commander said, shall I strike this now? Is it a good idea?

Even if they were to prompt the chat bot in that way, the claim was made to me that the chat bot runs through a very fast series of checks. It red-teams the question, which is to say, it pretends it is an attacker.

It checks for escalation bias. It checks for a number of different things. And by the time it spits out the answer, all of those potential problems have been factored in. Now I haven't seen that happen in real life

and I've certainly come across a lot of people who are very frustrated by the answers that chat bots give, even within the military. Sometimes fabricating attacks that haven't even happened. And if you can imagine the US needs to respond to attacks, if they're responding to an attack that was fabricated,

there is constantly this risk for escalation.

And in that sense, it's always about that critical thinking,

that framing, what question are they asking of AI? Can I win quickly if I start a war with Iran or what are the risks that this could proliferate that US service members will be harmed? That civilians will get hit?

What are the chances of achieving regime change if I seek a quick war?

How many quick wars become medium-term wars and long wars?

Is there still that human hubris? Wherever AI is put, it will only ever be as good as the data and the question, and there may still be a gap. And all of this testing is happening during an act of war right now. A lot of the testing that I've reported on happened before then,

but even at the time in February 2024, I was able to report that the US did use this system to narrow down some of the 85 targets that the US military struck in Iraq and Syria. This was in reprisal for the death of three US military personnel.

And that is the first large-scale case I know of, up until today's operations,

of US Central Command using this system to try and bring speed and scale to war. It had been used before to assist others. It had been used on a piecemeal scale for US Special Operations Command, but they tend to be much smaller. Getting into the big army,

the big military formations, this really is war at a very joined-up and connected scale involving every service. And as we speak today, CENTCOM has hit more than 9,000 targets. And that certainly has relied on the system,

the Maven Smart System. Katrina, there's a man at the center of your book and this story

that most people have never heard of.

A Marine colonel named Drew Cukor. Tell us who he is and why this moment

basically exists because of him.

Drew Cukor is this very absorbing retired Marine who I met,

who was chief of this project called Project Maven. He wasn't the director. He was the doer, the leader of this effort to bring AI to the way that America makes war. And it started publicly at least as a very narrow effort.

The idea was to bring AI to rifling through drone footage, copious video that the US is taking in various countries around the world as part of what many military operators called the GWOT, the Global War on Terror. Now, Drew Cukor had a long and frustrating career

inside the Marine Corps as an intelligence officer

and he was repeatedly fed up with the tools that he had to go into battle

and to support other military operators. He was in Afghanistan in October 2001, after 9/11, lugging around a large computer.

He felt that he couldn't support the US military operators

that intelligence was meant to keep safe. And they simply weren't able to get frontline troops the kind of information they needed, as these very rudimentary, unsophisticated improvised explosive devices started to maim and kill American service members.

And so there was a constant frustration that the US could bring to bear enormous firepower, precision firepower, but couldn't put it in the right place. And you see, as in all wars, what's known as friendly fire or allied fire, the US mistakenly harming their own, harming partners and allies,

and also harming and killing civilians by mistake. And there were a number of problems he began to feel could be solved with better intelligence, if there was a way to reduce that loss. When he was in Afghanistan, there were Marines dying the whole time. When he was in Iraq,

there were hundreds of Marines dying.

And he simply felt not that AI so much was the solution,

but better information. And in the modern world, better information has come to mean AI. And in 2011, he worked on an effort to bring that technology from a company named Palantir Technologies to Afghanistan to start to track where these improvised explosive devices

had been before. So we're 10 years into this 20-year project that Cukor envisioned.

He has always said that he feels the Department of War,

which during the time we're talking about was the Defense Department, needed to function more like a software company than a weapons factory. But looking at Iran right now, the scale and the speed, is this the war he envisioned? There's no doubt that this is an AI-infused war.

And the other element of safety, accuracy, scope, scale, is that people are claiming that AI makes war more efficient. Often, what happens when things are more efficient is you can simply do more of it. And to hit a thousand targets in the first day,

now 9,000 targets and not yet have finished the war, the Iranians are still continuing. The Strait of Hormuz is closed. There is a question about overconfidence, about how much you can rely on these systems.

And if expanding the pace of war gets you there. And this is a long-term debate. If you go back to 1899, there was a Polish banker, Ivan Bloch, who brought out a paper called "Is War Now Impossible?"

Because he looked at these claims for mass-produced rifles, that now the ways of killing were so industrialized, at such scale, that no one would dare declare war against someone else. And instead he argued, long before World War One started, that actually the mass production of weaponry

would lead to stalemate, human harm, long wars. And it raises this idea of, is there ever a way to deliver palatable killing? Hmm. Our guest today is Bloomberg journalist Katrina Manson.

We'll be right back after a short break. I'm Tonya Mosley, and this is Fresh Air. Support for Fresh Air comes from WHYY.

Presenting The Pulse, a weekly podcast about health and science.

Each episode is full of great stories and big ideas fueled

by curiosity and wonder. Can you learn to listen to your intuition? What should electric cars sound like?

Why can it be so hard to get an accurate diagnosis?

How do fungi communicate? Check out The Pulse, available where you get your podcasts. This is Fresh Air. I'm Tonya Mosley, and my guest today is Bloomberg journalist Katrina Manson. She's written a new book titled "Project Maven:

A Marine Colonel, his team, and the Dawn of AI Warfare." The book traces how Marine Colonel Drew Cukor became instrumental in the decade-long creation of America's AI warfare capabilities, which are now being used in the act of war in Iran. I want to talk to you a little bit about Cukor's relationship with Palantir.

It seems to be one of the most complicated threads in your book. And Palantir, for those who aren't familiar, is a data analytics company. It helps organizations make sense of massive amounts of information.

And Cukor became one of the most powerful internal advocates for Palantir.

How did that relationship begin, and why was it so controversial?

Cukor learned about Palantir in the late 2000s, when it was really quite a young company. And he was looking for this data analytics solution that could bring data together and deliver him a picture of war. As he said to me, it's just a very hard question to know where the enemy is and where your own people are.

And this, for him, became a tool that he really believed in. And others in the defense tech world who, in the military service, relied on Palantir, have spoken favorably of it as a tool to me. He continued this relationship and he flies over to see them. And he explains his entire vision for what becomes Maven Smart System,

a digital map, an operating system with white dots, with coordinates that ultimately

can pair a target to a weapon and shoot it. And at the time, Palantir doesn't really want to do this, because he's asking them to do two things they don't see themselves as: one, AI, and two, creating a user interface. They didn't see themselves as creating pretty user interfaces. They saw themselves as the data analytics, the crunching of that aspect.

But they went along with it, and a very senior person at Palantir, Aki Jane, told me that it really is Cukor himself who convinced Aki Jane to, as he said, revisit his priors. He had a bias against AI; so did all of Palantir. And they begin to listen to Drew Cukor, to understand how AI might support their data analytics. In addition to that, Palantir was already controversial within the Pentagon.

They had actually sued the Army in 2016 to gain access to a contract. This is a time where you really have young, hungry companies beginning to say, give us a contract. There's this sense that contract awards in the Pentagon are very old-fashioned, that the process moves too slowly. So Palantir has succeeded in getting a foothold in the Pentagon, but was seen as very arrogant by many because it had sued.

And it continued to claim its tech was the best. Whether that was true or not, the manner in which they said this irked several people. And Cukor himself guided them not only on AI and what he wanted, but also on the manner in which

they should conduct themselves. He said, "We think you're great, but you need to tone it down."

How would you characterize Palantir in this story? Is it an honest actor in it? I think it's really fair to see it as a very divisive company. You have got people who cheerlead for it with great passion, who feel that Palantir's tech saved their lives. You also have people who think they are arrogant, rent-seeking, monopolistic, charging too much, and simply make tech that is good, but not as good as everyone makes out.

Even as late as 2023, a senior commander who is using the Maven Smart System awards it a grade of C+. So right the way through, you have problems with Palantir, and multiple members of the military lined up to tell me, "OK, we're using Palantir, but if something better comes along, we'll switch." I want to talk a little bit about some other active wars, particularly the war in Ukraine.

It seems, from the way that you've been writing about this, that that's where AI warfare kind of became real at scale. When Russia invaded back in 2022,

the US deployed Maven in support of Ukrainian forces, but it almost immediately ran into problems.

What happened, and how did they fix it? The computer vision had been trained on the Middle East, think hot, think sand, and suddenly it was being asked to identify Russian tanks in the snow in Ukraine, so it wasn't delivering the detections that the US wanted to rely on. Secondly, the system wasn't loading. I found out there were often

eight second delays, which in a war is a lifetime, and that was because, after a lot of investigation,

it turned out that the networking just wasn't up to it. It was in fact crisscrossing the Atlantic sometimes as much as four times, so that created delays and sometimes even packets of data could

fall off the network and you might miss crucial information. So they really needed to work on the

networking, the sort of arteries of information, and they also needed to very quickly gather up imagery of Russian equipment and retrain the algorithms, and that was going on at a very fast pace. People complained about getting phone calls at two in the morning; others welcomed them in order

to be part of this effort to support Ukraine. You know, in reading about that from you, one of the

major legal lines in warfare is kind of this difference between supporting an ally and fighting their war for them, and you report that the US, as you said, was passing targeting coordinates directly to Ukraine, sometimes through Signal, sometimes literally printed on paper and walked across. By that measure, how close was the United States to actually being in

that war? I suppose that becomes a diplomatic question, and certainly the US wanted to frame

itself as being a supporter, but not a direct participant. And that knife edge is really in the eye of the beholder: does Russia choose to see it that way, or does Russia say you've gone too far? And so the US was very, very sensitive to that, and the actual Project Maven operators, and those in the army who were using their system, were even more sensitive, because some people among their groups said we are going too far, and others said we have to help Ukraine

with everything we have, and at the time that debate was not public. There's also some elegant language, which is this term, point of interest. So rather than saying we're sharing targets, we're passing targets to Ukraine, they settled on this language of, we're passing points of interest

to Ukraine. Everything short of the decision to target, which was Ukraine's own decision. But as

even some of the people I spoke to for the book framed it, it was almost a sort of Pinocchio-like relationship, the Americans potentially pulling the strings on Ukrainian decisions. And it got tighter and tighter and tighter. One reason the Pinocchio metaphor isn't fully fair is because both sides have also emphasized to me in interviews that they really developed trust. And so the

Americans ultimately were finding pieces of military equipment that, on Ukrainian information,

just looked like a truck. But on US information, they were able to say, trust us, hit it, and it was in fact a transporter erector launcher, essentially a mobile missile launcher. And that relationship got faster and faster and faster, until at one point the US identified a target, in one example I'm told about, and 18 minutes later the Ukrainians were able to hit it. Let's take a short break. If you're just joining us, I'm talking with Bloomberg journalist Katrina

Manson, whose new book Project Maven traces how the United States built its AI warfare capabilities, and how those capabilities are being used right now in an act of war in Iran. We'll be back after a break. This is Fresh Air. If you're a super fan of Fresh Air with Terry Gross, we have exciting news: WHYY has launched a Fresh Air Society, a leadership group dedicated to ensuring Fresh Air's legacy. For over 50 years, this program has brought you fascinating interviews

with favorite authors, artists, actors, and more. As a member of the Fresh Air Society, you'll receive special benefits and recognition. Learn more at whyy.org/freshairsociety. This is Fresh Air. I'm Tonya Mosley, and my guest is Katrina Manson,

a Bloomberg journalist and author of Project Maven,

which traces how the United States built its AI warfare system. Let's talk about Gaza for a moment. Israel reportedly used AI

targeting systems, Gospel and Lavender, in its campaign there. What does Gaza tell us about where the guardrails on AI warfare actually are? Some defend the AI, saying the way it is used is down solely to policy, and others have suggested that the way the IDF was prepared to potentially accept collateral damage, meaning civilian harm, and that speed, would not be palatable to the US. It just isn't the way that the US currently operates. And I should say the IDF defends

its actions, saying they have not broken the law of war. They have been proportionate and discriminate.

That's their position. There are also these very stark numbers of 70,000 dead. For me, a key question

was to understand this defense of AI: was it fair to try and separate AI from policy? Those who've expressed concern at the way the IDF pursued targets, and at the civilian harm, have blamed policy rather than AI. Several of the experts I've spoken to make the distinction that they're totally separate, the tech and the policy. Many others argue that the more you have an AI-infused killing machine, the more you can use it. Which brings up something else

for me, you report that the US has already built weapons that can fly and select their own targets

and kill without a human making the final call. So autonomous weapons, and you name these two

classified programs in the book, Goalkeeper and Whiplash. Can you tell me briefly what they are, and what does it mean that they already exist? These are efforts to bring drones in the air and on the sea into life. And this is for a very different conflict scenario. This is the US thinking about the defense of Taiwan. So if China were ever to attempt an invasion of Taiwan, and if, another big if, the US were to decide to help defend Taiwan, there could be a very different scenario

from the one that Ukraine is facing in Russia because of jamming. So the fear is that China would

disrupt US satellite communications such that it couldn't control its own drones, and the drones

that would protect and defend Taiwan against a maritime onslaught would need to be able to function autonomously, without any internet connection. And so the US has been developing these drones, and suitable autonomy, for several years. Whiplash is an effort to put weapons on a jet ski that can move autonomously, and Goalkeeper is an effort to weaponize drones and have them fly about and be able to select and hit a target under their own steam. Exactly what campaigners from Human Rights

Watch argued against at the dawn of Project Maven, and what the UN Secretary-General has called the pursuit of something morally repugnant and politically unacceptable: that is, the pursuit of lethal autonomous weapon systems. Well, I mean, what is standing in the way, then, of any meaningful international regulation? Because what does it actually mean that we're already at war while these particular conversations are still happening? That's such a fascinating tension. There

have been discussions at the UN body for more than 10 years now and they are still trying to

define what is an autonomous weapon system. And the US position has been let's make it first

and then let's work out what we need to regulate. That of course speaks to a fear that China might get there first. The US has wanted to dominate this technology and to be the ones who could deliver it in a way that they felt they could use it and win. But there is a push now to turn some of that work into a treaty, and a treaty would, by all accounts, not include the likes of the

US or China or Israel or Russia.

been discussing, Maven, the autonomous weapons, this arms race between tech companies to supply

the Pentagon. I mean, all of this exists in large part because the US is preparing for a potential

conflict with China over Taiwan. So what does this moment tell us about whether we're actually ready for that? The US has assessed that China wants to be capable of taking Taiwan by 2027. So next year, so this date has become the sort of drum beat for the US to make sure that if it wanted to, it could fend off a Chinese invasion of Taiwan as soon as next year, but at any time after that. And there's been an increasing focus since 2018 on the prospect of China being a potential

adversary, not just a competitor on the global stage, but also a military adversary.

And you see now senior US military commanders saying, quite clearly, China is rehearsing

for an invasion of Taiwan. And how the US could prevent that, or help partners and allies prevent

that, is a subject of some anguish within those quite tight military circles that look at this. There's a group that has really pushed for autonomy, to say there's no way we can defend Taiwan without it, we need to do much more. And I was told that often Pentagon officials reassure allies and say, look, there is nothing inevitable or imminent about a Chinese invasion of Taiwan.

And if there is, we'll make sure we're ready, but then they drop their voice in the corridors of the

Pentagon and whisper, we're not ready. And so there is this constant concern that the US needs to go faster in developing autonomy that could withstand the sort of onslaught that might be involved in an attempt to take Taiwan. One of the other things we're all kind of asking is whether we are the best custodians of this technology. And after everything that you've reported, what is your feeling? What do you come down to? I know you're a journalist, but you're also

greatly informed and you have all of these facts in front of you. When you meet people whose business is the business of war, your perspective changes, because there is so much risk and there is such a long tail of experience of these forever wars. Many of the people involved in Project Maven were involved in the forever wars in Afghanistan and Iraq and saw their friends die. And they put this trust and belief in AI, that that could save their

friends, that could save them, that could save America, and it could prevent if AI were big enough and bad enough, China from ever daring go to war with America. So there's this deep belief

in AI as some kind of panacea. I think for me it raises the question of, what is this idea of

a costless war? If you can make killing more remote, is that more palatable? We know that drone operators and drone screeners, drone analysts, also experience post-traumatic stress, and AI won't have those same reactions to watching the gore. So there is that argument that you can protect operators. I question whether you also can protect civilians by pursuing that notion of remote war. And the bigger question I have is, does remote war make war more possible, more likely?

Does it mean that war is an option, well, someone will press play on, not understanding the long, deep impacts? So for me, there is a lot more to be done by the people who advocate for AI, to use it in the way they claim it can be used, to deliver a better outcome. Katrina Manson, thank you so much for your reporting, and thank you for this book. Thank you. Katrina Manson is a reporter for Bloomberg. Her new book is Project Maven,

a Marine Colonel, his team, and the Dawn of AI Warfare. This is Fresh Air. If you're a super fan of Fresh Air with Terry Gross, we have exciting news. WHYY has launched a Fresh Air Society, a leadership group dedicated to ensuring Fresh Air's legacy. For over 50 years, this program has brought you fascinating interviews with favorite authors, artists, actors, and more. As a member of the Fresh Air Society, you'll receive special

benefits and recognition.

This is Fresh Air. The rise of AI has had seismic implications for Hollywood. Movie scripts can be written

by bots, and one AI company has even created a computer-generated actor. But amid this transformation,

one director has created an art installation that harkens back to the old days of cinema. In 2000, a then-unknown Mexican filmmaker made waves at Cannes with a film about a car crash titled "Amores Perros." The director, Alejandro González Iñárritu, has now turned the film's extra footage into an art installation. Contributor Carolina Miranda reviews the show.

Walk into the first floor gallery at the Los Angeles County Museum of Art, and you'll be

forgiven for thinking that you wandered into the building's machine room. The clatter of industrial appliances makes normal conversation a challenge. And the room is hot, even a bit steamy. But move deeper into the space and you'll find that you're actually in the middle of a movie. Large projectors display looped scenes on six screens staged around the room. All featuring

snippets from director Alejandro González Iñárritu's first film, "Amores Perros," which debuted

to much acclaim in 2000. On one screen, you catch a piece of one of the movie's brutal dogfights. On another, a hand reaches up a woman's skirt. A car chase ensues, and a brutal crash.

Then that same crash plays again from another angle. This is Sueño Perro,

devised by Iñárritu with the help of a robust production team. The installation takes the unused scraps of his groundbreaking film and transforms them into an environment that not only plunges the viewer right into the movie, but into the act of filmmaking. You see slates marking the beginning of the action. You see takes and retakes. Occasionally, the strips of colored film at the end of a reel come into view,

casting an orange light on the room. Sueño Perro, in Spanish, translates to "dog dream."

Iñárritu's installation certainly feels like a dream of the original movie,

fragmented, chaotic, out of order. At times you hear the convulsive explosion of the film's climactic car wreck; sometimes that same crash occurs in eerie silence. Like an actual dream, it's then up to the viewer to make sense of what the bits might mean. Like any movie, the images also function as a timestamp of the past. The old sedans look dated. One of the film's stars, Gael García Bernal, is still a teenager, and the Mexico City of the

film is one that has not yet been gentrified by the digital nomads of the 21st century. As Iñárritu writes in a book about the project, a film is made of time and light. But what makes Sueño Perro truly remarkable is its analog nature. Amores Perros was made before digital cameras had completely transformed moviemaking.

Iñárritu, a storyteller who embraces excess, shot a million feet of film to make the movie.

But the final cut, which clocks in at about two hours and 30 minutes, used only about 13,000 feet of that footage. That left about 187 miles of film on the cutting room floor. In an era when the word "movie" has come to mean a video you can shoot and edit on your phone, Sueño Perro is a reminder that films once carried physical weight. A 35-millimeter reel weighs about five pounds, and the average film was about two reels long. The use of celluloid film

also involves photochemical processing, and displaying the work requires large projectors that generate heat and noise. Making a movie is a creative process; it used to be an industrial one too. Sueño Perro makes this industrial nature visible and visceral. In the gallery, massive reels rotate on the large-format projectors typically used in old movie houses. Long strips of 35-millimeter film travel through elaborate looping systems that reach a height

of more than six feet. In addition, the designers have pumped a small amount of fog into the gallery, making visible the beams of light projected onto each screen. To enter the space isn't simply to be surrounded by the images of Iñárritu's movie, but by the mechanics that make it possible. It's a reminder of all the physical things that have been lost to the immaterial pixel. Vinyl records have given way to streaming, newspapers to websites and apps.

Directors used to haul around heavy reels to display at film festivals, now at most they carry a small hard drive. And as acts of creation have been turned over to

artificial intelligence, Sueño Perro stands as a reminder of what could go missing without the human touch. The physical world, full of love and pain, can be a really enthralling place.

Carolina Miranda reviewed Sueño Perro, on view at the Los Angeles County Museum of Art

through July 26th. If you'd like to catch up on interviews you've missed,

like our conversation with Riz Ahmed on starring in the new series "Bait" as a British

Pakistani actor whose audition to play James Bond sends his life into a spiral, or with human

rights lawyer Bryan Stevenson about reflecting on the harsh truths of our nation's history.

Check out our podcast. You'll find lots of Fresh Air interviews. And to find out what's happening

behind the scenes of our show and get our producers' recommendations on what to watch, read and listen to.

Subscribe to our free newsletter at whyy.org/freshair. Fresh Air's executive producer is Sam Briger. Our technical director and engineer is Audrey Bentham. Our engineer today is Adam Staniszewski. Our interviews and reviews are produced and edited by Phyllis Myers, Roberta Shorrock, Ann Marie Baldonado, Lauren Krenzel, Therese Madden, Monique Nazareth, Susan Nyakundi, Anna Bauman and Nico Gonzalez-Wisler. Our digital media producer

is Molly Seavy-Nesper. Thea Chaloner directed today's show. With Terry Gross, I'm Tonya Mosley.
