The DSR Network

Siliconsciousness: Are We “Careening Toward Disaster” with Hegseth’s Campaign Against “Woke AI?”


As the Trump regime wages war on Iran and geopolitical crises are dominating tabloid headlines, the Pentagon’s relationship with artificial intelligence is slipping under the radar. Alondra Nelson joi...

Transcript


[MUSIC]

>> Welcome to Siliconsciousness,

the DSR Network podcast focusing on the artificial intelligence revolution,

politics and policy. [MUSIC] >> Hello, and welcome to DSR's Siliconsciousness. I am David Rothkopf, your host. This week we are joined by one of the most valued minds that we get to deal with here.

Alondra Nelson holds the Harold F. Linder Chair and leads the Science, Technology, and Social Values Lab at the Institute for Advanced Study. How are you doing today, Alondra? >> I'm doing great, David. How are you?

>> Good, I mean, considering the state of the world, I'm as good as can be expected.

>> There's always that, that's the essence, that's the essence.

>> Yes, well, we're slogging ahead. But with that in mind, let me posit a thought, and, I don't know, maybe it's just slightly contrarian, but let me suggest that while all the headlines this week, for pretty understandable reasons, are focused on the war with Iran

and the unrest in the Middle East, in the long term we may conclude that the bigger story of the past week is that the Pentagon has come out of the closet and essentially said, first with regard to its dealings with Anthropic and then with regard to its dealings with OpenAI,

here's what we want. We don't want any of your darn guidelines. Just as Pete Hegseth doesn't like, you know, crazy rules of engagement, and finds international law to be a real impediment to what he's doing. And in particular, apparently they've been pushing on the idea

that if they want AI to do surveillance on the American people, even though that's illegal, they should be able to do that. And if they want AI to be involved in autonomous weapons platforms, to decide for themselves who and when to kill, then that's up to them.

They don't want any constraints. And to me, this seems like a really big issue in the evolution of AI, even if it's not, of course, fully resolved in any one week. And I'm just wondering what your thoughts are on this conflict. And frankly, as soon as I start reading about it,

I wanted to talk to Alondra. Well, I want to talk to you, because you have such good thoughts about this as well.

>> I think, you know, you laid it out as being contrarian,

but these things are, of course, related, because we know from reporting this week that Claude, whatever tool or system Claude is part of, is being used in the theater of war right now in Iran. And so, you know, these things are deeply integrated.

But let's just go back a bit. I mean, the whole thing is just curious. You know, it's on the spectrum from curious to bonkers. And I think that's where we all live. We live in that spectrum these days.

Welcome to 2026. You know, so first of all, it is the case that the Biden administration made this agreement with Anthropic. But it's also the case that the second Trump administration blessed it. So they had an opportunity to say, we don't want this contract.

We're going to start from scratch with our AI contractors.

But they took a look at it and said, this is amazing.

I think that's in part because, you know,

that the technology is just really good. I mean, it's probably the case with the Anthropic Claude tool. You know, Dario Amodei was on CBS a couple of days ago, and he basically said that not only is the federal government using Claude,

they're not getting the slower version. You remember, David, from working in government, you sometimes got the slower software because it had all the national security apparatus on it. >> Yeah, when I joined the government, the State Department was

still using Wang word processors. >> So there we go, right? So what we learned in the CBS interview was that Anthropic has been creating a new state-of-the-art model for the federal government, better than what's on the market even for enterprise.

There's a classified Claude, and it's strong and powerful.

And it's got dedicated compute. So this is a great, powerful product. And so that's curious as well. And you think, why not just cancel the contract as opposed to, you know, kind of throwing a fit?

I mean, I think, as Secretary Hegseth did, as you suggested. And also kind of trying to play the nuclear option, either, you know, the Defense Production Act or designating Anthropic, or Claude, I don't know if it's Claude or Anthropic, as a supply chain risk.

So I think, and this is my contrarian response to you, that the overplay of attempting to dominate the conditions of Anthropic's work is actually demonstrating that the government feels somewhat weak vis-à-vis these powerful technology companies.

So we're living in a moment in which it's no longer the case that all of the powerful technology comes out of DARPA or IARPA and then, once it's deemed secure enough, gets brought to the commercial space and becomes, you know, the dashboards in our cars, those sorts of things. All of the powerful technologies are coming out of private sector companies.

And so this is a different world for the way that we thought about the national security relationship between research, private sector companies, and the government. And as much as it looks like Secretary Hegseth is throwing the hammer down on Anthropic with the supply chain risk, I think what it actually revealed was

how powerful these companies are.

They have all the data, they have all the technology, and, you know, they might lose this contract, but it took a lot for the federal government to get there, in part because I think Anthropic, at this moment, probably has the best-in-class product. And, you know, the fact that it also gave a six-month off-ramp, let's see what all that looks like.

To stay up to date on all the news that you need to know, there's no better place than

right here on the DSR Network, and there's no better way to enjoy the DSR Network than by becoming a member. Membership includes an ad-free listening experience, access to our Discord community, exclusive

content, early episode access, and more.

Use code DSR26 for a 25% discount on signup at thedsrnetwork.com. That's code DSR26 at thedsrnetwork.com/buy. Thank you and enjoy the show. >> Well, no, I think that's obviously an important insight into this thing, but I think another part of this is Secretary Hegseth essentially saying, we want mil-spec AI to be whatever we want it to be, and we don't want artificial rules imposed upon us.

And if we want to, you know, let the genie out of the bottle, it's up to us to decide to do that and not up to some AI company. And then, complicating this all somewhat was the fact that you have OpenAI, which once allegedly was doing all this for good, for the good of the world, and Sam Altman coming in and sort of negotiating while the Anthropic thing was going on.

And speaking kind of out of both sides of his mouth on this: on the one hand, yeah, we want to do good and we have constraints, and on the other hand, whatever deal you want, DOD, we'll take it. And that competition, along with the Defense Department impulse, does raise the specter

of very powerful AI with fewer guardrails than some people think is wise to impose.

>> I think, yeah, that's right. I mean, that's where we're, you know, careening toward disaster here. I think that's right, but it's such an odd thing, because, you know, let's just think back to, like, Microsoft Windows 95. If the federal government was going to have a national security air-gapped version of that, they wouldn't have said to Microsoft,

change the product. You buy it off the shelf, you download it, you ask for some national security kind of constraints around it. But the product is the product, and they signed a contract with Anthropic for the product. And moreover, they looked at the contract that the Biden administration signed and then signed it despite themselves.

So it is this kind of weird AI exception, and I think this goes to a couple of things. One, we've not had a dual-use technology like AI that has the ability to be dynamically changed, sort of constantly. So it could be the case, when it's not air-gapped, that there could be engineers

from Anthropic who are updating the technology quite regularly.

It's also, you know, and this goes to the point like, don't tell us what to do.

I think part of the don't-tell-us-what-to-do with the technology is because they know that only the companies distinctly understand how these systems work. And this gives the companies even more

power than, you know, I think Microsoft with the Office Suite or, you know, Accenture, etc.

So I do think we're still in a situation where the companies are going to be distinctly powerful in this space.

And as I understand the contract, the Anthropic contract was supposed to be up to $200 million, and only $2 million of that had been spent. So as we're seeing valuations for Anthropic go through the roof, it wasn't the federal government or the Department of Defense that was driving those. We also know that OpenAI, Anthropic, and some other companies are giving $1, you

know, they're selling the technologies and tools to the federal agencies for a dollar. So obviously this is a long play. You know, if you think about Dell computers over 20 years, you're making billions of dollars. But right now, it's not the money that's the thing.

And so I think, to go back to Secretary Hegseth, I mean, his statement about, I think it was, you know, woke AI. So it wasn't just that we want control over the technology. It was also this whole elaborate narrative about the technology being woke, about not being willing to have any constraints at all, and needing to dominate the space, and all of that. And that really flies in the face of what we know the American public wants.

I mean, I think since the last time we spoke, David, the negative sentiment from the American public with regard to AI has gone up by like ten points. And I think this recent controversy, which really has captured the attention of the public in ways a lot of AI news doesn't, is only going to increase that negative sentiment.

I mean, it seems quite commonsensical to many of us to say we shouldn't have, you know, unmanned robot drones and mass surveillance in American society.

And so, between the fact that the companies will always know more, because the technologies are being developed there, and the growing, I think, bipartisan concern about AI technologies, Secretary Hegseth and the Department of Defense might ultimately be facing some headwinds about this decision. >> Well, one only hopes. You know, I mean, Hegseth's very big on the warrior ethos and very

big on being anti-woke. And the notion that somehow he's suggesting that it's woke to not want to have autonomous killer robots is, you know, kind of unnerving. And it also raises the question of what's happening behind the scenes on black

budgets with companies that are, you know, more inclined to play along. What's Palantir doing behind the scenes? Because we know that they don't share some of the scruples that Anthropic may have. >> Yeah, I think that's really worrisome. The other thing, related to this, is that, let's be clear, Anthropic is saying, you know,

our red lines are mass surveillance of Americans and these unmanned, unpersonned robot drones without a human in the loop. That's the low bar, you know. So it raises the question, what's happening behind the scenes with other companies? But also, my concern is a little bit that we have set these red lines, which are bright red lines, as the high bar when it's actually the low bar.

There's lots of other ways that we should be trying to steer, constrain, and place guardrails around AI, for lots of other harms: from CSAM, to harms to young people, mental health harms, people not being able to get a fair shake for rights and opportunities. There's actually a lot of harms to go around.

And so while I think it was important to identify these, we've got this two-sided, kind of Janus-faced thing. On the one hand, what's going on behind the scenes? We've seen even OpenAI correcting a little bit on what they said their red lines were, going back and saying, well, no, that's not exactly what we meant, DOD, we want a little bit more clarity about what we mean on mass surveillance, for example. But of course, it sort of moves the bright shiny object

over here, as opposed to the growing cacophony of public outrage about everything from, you know, data centers to mental health and suicide risks fo...

world. This podcast is underwritten in part by the U.S. Embassy of the United Arab Emirates.

Its editorial content is completely independent, and the views expressed are exclusively those of participating experts. It is presented live without editing. For further information about the UAE's efforts in the areas of artificial intelligence and technology, go to the website of the Embassy at www.ue-emBC.org and search for UAE-US tech cooperation. We thank them for their support, we thank everybody who is supporting this podcast, and we look forward

to it developing and growing over time because the issue is so important.

Yeah, no, I want to get to that in one second, but just one last question on this

particular track. And that is, part of this to me is complicated by the fact that, unlike Windows 95, AI is a tool that invents new tools. And so, you know, you baseline it, you set certain parameters within it, and that impacts how it can be applied out in a battle space, for example. And a lot about how the future of civilization goes is going to be shaped by whether or not a platoon commander is going to be able to say, hey,

you know, military chat buddy, how do I depopulate this village? Give me five options.

You know, if the tool is unlimited in how it can answer that question, the potential for mayhem grows. And I could do the same kind of questioning along the lines of surveillance, right? But AI is the slipperiest technological slope we've ever been on, because it contains the seeds of successors that we aren't imagining right now. >> Yeah, that's right. And we're seeing that. I mean, the sort of evolution

of the technology, of course, is now moving into these AI agents, which we are really struggling to control. And, you know, I'm in a lot of conversations now with people trying to think about what should be the guardrails or the standards for how we use agents, because we get accounts on social media from technology experts and executives

who are having their emails disappear, their hard drives disappear from the use of the agents, right?

So, not only is it the sort of self-generative ability of the technologies, but there is still just, and I think this is what Dario Amodei has said in a few interviews, like, we just can't fully control them, because they've got this autonomous ability. And that's just imagining the good actors; that's not imagining the bad actors. So, you know, you'd want to

maybe assume that the military leader you were talking about was sort of on our team, whatever that means in this very complicated time. That's not accounting for how these tools and technologies are being developed and used outside of the U.S. All of the open source Chinese models are getting increasingly stronger; we hear there's going to be, I think, another DeepSeek model released next week, and that raises this, makes this a larger issue as well. But again, it's like, is the government, even in the theater of war, going to be the exemplar, or the race to the bottom here? And it seems like we are, unfortunately, choosing the latter. >> Yeah, of course, one of the countervailing factors here is, you know, a lot of Americans I talk to still underestimate the degree to which China, in many areas,

particularly applied AI, is edging ahead of us, and in some areas is quite a bit ahead of us. And you think, well, China might be a malevolent actor, but the Chinese also have this issue of wanting

the state to control everything.

used by individuals. I can only imagine what's going through the minds of AI scientists in Iran,

right now. And there are AI scientists in Iran who are saying, what is the equivalent of having a nuclear weapon with AI that can protect us from being beaten up by some big country on the outside, or gives us an ability to retaliate when they're blowing up all of our ICBM manufacturing capability? And that's, in many respects, the worst kind of bad actor you're thinking of, right? >> Yeah, but in Iran we're also seeing,

you know, AI being used there a little bit, if I understand how these Shahed drones work. And we're also seeing a little bit of a replay of the Russia-Ukraine kind of asymmetrical

warfare, where Ukraine, and I think in this case Iran, are using these small drones to try to attack the larger, more expensive American and Israeli technologies. So, you know, AI to the small and AI to the large is totally transforming this war. And I don't know if we'll call it the first kind of full-scale AI war or not; maybe we'll call that Russia-Ukraine. But it is certainly the case that it adds all sorts of unintended consequences. I mean, one looming question:

we know that early on in the war, on the first day of the war, a lot of children were killed at a school, I think nearly a hundred. Was that an AI tool, an AI drone kind of gone astray, or was that intentional, or was that, quote unquote, collateral damage?

I think we don't know. And those kinds of mass casualty occurrences in war are always kind of devastating, but more challenging and devastating still if it's because an AI made a mistake. >> Yeah, you know, it also underscores another thing, which we've talked about before. In Washington, and in the United States to some extent, because everybody's motivated by how do I capitalize my company and make the most money I can, there's a lot of talk about AI as a big concept, AGI, right? And in China, particularly, they're

doing a particularly good job focusing on applied AI. And frankly, to me, most AI is applied AI. And your example is a good one, where you could take a relatively cheap drone or some other relatively inexpensive platform, and if you put a little bit of AI secret sauce in that relatively cheap drone so it can avoid radar defenses, it can be a much more deadly weapon. And so while there's, you know, lots of conversations at the RAND Corporation going on, saying,

how do we avoid a singularity that destroys the whole planet, meanwhile, hundreds or thousands of people are being killed by quite narrow applied AI. And I think we need to,

I think, be more sensitive to the risks associated with that, right?

>> No, I think that's right. And we saw from Ukraine, I mean, some of the footage of the Ukrainian soldiers, where they were literally taking drones out of boxes and slightly reconfiguring them, in part with a little bit of AI, to send them into the battlefield. So it's not only the biggest AI, right? It's an interesting moment, I think, for national security and thinking

about these issues, for sure. >> Well, let's shift to another thing you've mentioned, and it's super striking to me. You know, I've been doing these podcasts and the events that

we do for a couple of years now. And one of the things that's kind of amazing to me is that when

I started doing this, the level of awareness of AI was much lower. But there was this kind of approach to AI that was, I don't know, akin to other technological developments: excitement about its possibilities and so forth. And in the course of two years, because of steps taken by the industry, I think, there is this growing sense in the population, and frankly, particularly in the millennial and Gen Z population, that AI is just bad. It's, you know, I talked

to people, and it's not nuanced. It's not, well, there's good AI and there's bad AI. It's:

AI will steal our jobs, AI will steal our humanity,

we can't let this happen. I will avoid it. If there's AI in my phone, I won't get an AI phone. In some ways, it is the most remarkable fail in public communications that I've ever seen from an industry. I mean, at least when Donald Trump periodically talks about wonderful clean coal,

he's wrong, but, you know, people defend that. But you must be going out there and talking

to groups of people, and they go, oh, you're a bad person because you're dealing with AI. >> Yes, I mean, people are really, you know, there was a kind of lag of awareness, and there was a moment when ChatGPT was released and everyone was like, wow, what is this?

And, you know, we had some data that suggested that a lot of people in the world had used ChatGPT at least once. But then I think it was not clear what it was for. And then there was a lot of concern, exactly as you say, about IP, about people's images; you know, we had the strike with the actors and screenwriters. And now, in the public, there's just a lot of angst. And you mentioned the younger folks. I mean, these are also people who are, some of them, using flip phones,

they're using analog rather than digital cameras, like film cameras. There also is,

I think, among young people, a kind of analog renaissance that's interesting, and that is partly a reaction to knowing that there are problems with addiction and social media,

that people spend too much time on it, and that they want to capture their time back. There's also, I think, with high school and college students, they've been told their whole lives, you think about the emergence of Claude Code, learn to code: go to coding camp in middle school, you know, over the summers in high school, get a computer science degree, learn to be a programmer and a coder, and, you know, the kind of golden

road of your career and your life will be laid out ahead of you. And now we have very expert folks who work in software and code saying, we can't get jobs. So there's the general, broader existential concern. To the extent that the industry was telling a story about itself, part of the story was, people are going to lose their jobs, we don't know what to do about it, you know, get ready. But the other story is that, you know, people aren't

going to have jobs, young people who've been doing all the right things aren't going to have jobs on the other side of that, in part because maybe the McKinsey analyst jobs, or, you know, even looking at the state of American universities, getting a master's degree or a PhD, are not going to be viable options in the same way. So there's just a lot of, you know, I don't know if it's quite a moral panic,

but there's a panic. And it's not followed the usual kind of adoption curve, where more people adopt, more people get used to it; there's a kind of normalization in society that often happens with technologies. I think we're seeing a kind of reverse trend line here, where the panic is just really increasing and more things are being thrown into the cauldron, right? So there's ever more issues, and now we can add to that the concerns about mass surveillance and warfare

and the like. And so, you know, I think there's a lot that's wrong with big tech and the AI industry, and I'm remiss to offer them advice, but they certainly could do themselves a world of good by being able to tell and support a story about what these technologies could do, one that actually demonstrated their own restraint and the good outcomes for people's lives, rather than the threats to people's livelihoods

that seem to be around the corner.


>> One of the things we've talked about in the past relates to this. A couple of years ago, when I started to do this stuff, and you were doing this stuff, you'd hear a lot of conversations from people saying, "Well, the tech community is uncomfortable with Washington. It doesn't even know how to talk to Washington. There aren't many people in Washington who understand it."

And there was a story from Public Citizen just a week ago.

Here's the opening paragraph: more than 3,500 lobbyists, one quarter of those working at the federal level, reported lobbying on artificial intelligence issues at least once in 2025, according to a new report from Public Citizen. The report also found that over the last three years, the number of AI issue lobbyists on Capitol Hill has grown by nearly 170%, and the number of data center lobbyists has grown by an astonishing 500%. That's just the opening paragraph.

And, you know, I've heard different estimates that 25%, 35%, 40% of the money being spent on lobbying in DC is around AI and related tech issues. And I see you as kind of having called this one before others, because when this administration came in, I remember we had a conversation about how they didn't want regulation, and you said, "No, they want a different kind of regulation." And you don't have, you know, hundreds of lobbyists

in Washington if you're not producing legislation and outcomes. They're spending millions and millions of dollars to set the ground rules for the AI ecosystem of tomorrow. And frankly, at this point, it's kind of the big money game in Washington,

and it wasn't a game three years ago. It's an amazing transformation.

>> Absolutely, listen. I mean, we should take seriously the fact that the economic data suggests that, if not for those three or four companies, our economy would have been in a recession last year, right? So if not for that AI boom, and the valuations on the top five to seven American-based technology companies, the whole global economic picture would look entirely different, including in the

United States. And so, as a friend of mine says, this is real money. I mean, we're talking about money that is the driver of the world economic system. And so that's why all those lobbyists

are there. I mean, the money that they're spending on lobbyists is probably a rounding error, if you think about what's happening with Nvidia and the valuations for OpenAI and Anthropic, in addition to their partners like Meta and Microsoft. So as go these companies, in some ways, so goes the world. And they are right there in DC trying to give shape to it. I mean, I think for me, the thing that's

frustrating, as a former policymaker and policy advisor, is that they are using these funds to play all sides against the middle. So they're often lobbying on both sides of things, which ends up being totally ineffective, right? And so the outcome is that we don't really have any laws, because they're distributing donations and distributing policy advice broadly across the spectrum.

And of course, we know that OpenAI, and also Anthropic and Meta, have all recently announced PACs to lobby in different places around AI. So it is quite a scrum. And it means that, for the American public, how do you even get a leg into that conversation? And, you know, I think that's why the data center conversation

has become so important. I mean, it is a really important part of that AI stack. And it's not for nothing that, very early on in the Trump administration, Sam Altman and others were at the White House talking about Stargate, this major

sort of energy and compute facility, because that's a big driver.

hits people's lives and where it makes sense to them. But that's not going to be enough, actually,

to combat this. You know, President Trump now says that the companies have got to pay for their own power and pay to modernize their own grids if they're going to be in communities building data centers, but that's not going to be enough

to sort of hold back all of this lobbying power. And it's really extraordinary. And it's, I think,

you know, the challenge of this moment. Every part of this AI ecosystem, whether you're talking about the hunger for energy and energy consumption, or semiconductor chips, or lobbying, all of the things along the ecosystem are at a scale bigger, I think, than our economy and

our sort of policy ecosystem has ever dealt with. It's a quite incredible challenge.

Well, I think another thing about it, and it's part of this transformative couple of years, which probably dates back to the introduction of ChatGPT a couple of years ago and people's suddenly growing awareness of this thing: it's not just an issue for specialists, and it's not just an issue for the federal government. We're doing a bunch of events over the course of next year on state and local AI regulation, because state and local is in many ways where the action is. You mentioned data centers, and you've got Illinois with a trailblazing data center law, and you've got the president of the United States, who understands very little about this, even though his uncle went to MIT, as he often tells us, saying, well, if you build a data center, you're going to have to pay for the power, because they realize this is a huge issue.

You've got a candidate running for Congress in New York in a very crowded congressional primary who's a computer science major, who went into the state legislature and started passing laws that had to do with AI at the state level, and he's gaining traction because average people are going, oh, this is a big deal for your city, for your town, for your county, for your state. And that's also, to me, just a fascinating phenomenon that suggests something about this ecosystem. You know, in our first conversation we talked about the global north versus the global south, right?

You know, we were talking on this big scale, right? And those issues remain, and there are national issues in competing European, US, and Chinese models, but it's gotten kind of micro, so that it's getting down to towns and school systems and things like that.

Yeah, I mean, you know, the city of New York, so you're talking about Alex Bores, and that's the New York 12th district election. It's actually incredible that New York state, in part through his leadership, was able to pass AI safety legislation that doesn't look too dissimilar from what they were able to pass in California. So, two major states. You could imagine legislators who are against AI regulation talking about the patchwork, but if you can get Illinois, California, and New York, for example, to really be aligned on the fundamentals, then you're really beginning to build a national consensus around legislation. So I think that's important to watch, for exactly the reason you were just giving.

I'd also raise into this conversation the fact that one of the things I worked on in the Biden administration was this AI Bill of Rights. Ron DeSantis introduced an AI Bill of Rights, and it's in the Florida State Legislature, and it has a few elements that are exactly the same as the AI Bill of Rights that we introduced in the Biden administration. You wouldn't think of Joe Biden and Ron DeSantis as anything but strange bedfellows in the policy space, but what the Florida AI Bill of Rights proposes is that insurance decisions about you shouldn't be made just by algorithm, that we've got to do something about CSAM, that we've got to do something about harms to young people, that you shouldn't be discriminated against because of the use of algorithms in AI. And so I think that there's a sort of common-sense baseline emerging across the states. There are these legislative asks, these legislative ideas, that are coming back again and again, that are not necessarily a patchwork but in fact are beginning to fill out a really full, consensus-based idea about some fairly basic guardrails for AI.

Yeah, by the way, we talk about how AI is progressing so rapidly that you can barely keep up with it. I feel the same way about our conversations: we start, and then it's 40 minutes later and we have to finish, and that's why I keep coming back to you, because we never get finished. But I don't want to end this conversation without saying that one of the reasons these bills of rights are emerging, building off of what you all worked on, with California and other examples in the mix, is that there are other people out there, including some very prominent people, who seem to prefer an AI that did discriminate, who would prefer to build an AI that reflected their worldview but did so insidiously, deep inside an algorithm that most people don't really know is there. And that's why all this vigilance is so important.

Yeah, I think that's right. I mean, we see that with, you know, the Chinese actually have pretty stringent, pretty robust AI governance, but part of that AI governance says that the AI models have to be consistent with the ideology of the CCP. So that's a problem. But we also see the MechaHitler example on Grok, you know, so there are challenges all around. I mean, listen, we know that automated systems and AI are created by people. People make choices about what's in the data sets, and they're always going to have, in a technical sense, a kind of bias. Because they have a bias, we can say this tool is more likely to do this or that, or to have this perspective. But that doesn't have to inherently mean that they're discriminatory, you know, explicitly, and we can make choices about that as communities.

Yeah, and we have to. I think the reality is that many of us people who are listening to this are saying, well, yeah, there are bad people out there who have biases, but we all have biases. And of course, some of them are, you know, sort of culturally baked into our cakes, and we need people to be vigilant to make sure that we don't amplify those things a million times over inside some kind of an algorithm, and that's one of the big challenges here.

Well, Alondra, I think... yep, go on, go on.

No, I think, you know, this is a new technology, it's still really early days, even though it sort of feels like we've been living in this space for a long time, and we're going to make mistakes. But if you're going to turn the dial wrong and get the bias wrong in a model, you want it to be in ways that protect young people, and protect people who are vulnerable, in terms of mental health, from, you know, suicidal tendencies. You want the red lines to be around surveillance and automated warfare. I mean, if we're going to get it wrong, let's get it wrong to the good, I think.

Yeah, no question about it, and it's why conversations like this are important, and it's why the work that you have been doing for a long time is important. And, you know, it's one of the reasons that we'll keep inviting you back, because I find it so valuable to talk to you, and I know that our audience, which is growing fairly dramatically, also finds it valuable. So, thank you, Alondra, once again, thanks a million, really useful. Be well, and we'll invite you back sometime soon. And thanks to everybody for listening. Please join us every week here at Siliconsciousness, join us for our AI, energy, and climate podcasts, join us for all the DSR podcasts that we do each and every day, and increasingly, we find people are wanting to follow us by subscribing on YouTube. Go ahead, do that too. Anyway, many thanks, bye-bye.
