The Ezra Klein Show

Why Are Palantir and OpenAI Scared of Alex Bores?

3h ago · 1:32:46 · 16,311 words

Leading the Future, a super PAC whose funders include the founders of companies like Palantir and OpenAI, is spending millions of dollars this election cycle, and a considerable amount of that money i...

Transcript


I gave my brother a New York Times subscription.

We exchange articles, and so having read the same article, we can discuss it.

She shared her subscription, so I have access to all the games.

The New York Times contributes to our quality time together. It enriches our relationship. It was such a cool and thoughtful gift. We're reading the same stuff, we're making the same food, we're on the same page. Learn more about giving a New York Times subscription as a gift.

At nytimes.com/gift. If you are living in New York's 12th Congressional District, you may have seen these endless attacks on Alex Bores, one of the Democrats running there. He made hundreds of thousands of dollars building and selling the tech for ICE, enabling ICE, and powering their deportations while making bank.

Now he's running from his past. ICE is powered by Bores's tech. Bores did work for Palantir. The rest of that attack is not what you might call true. But what interests me is who is paying for it: the super PAC Leading the Future, and its subsidiary, Think Big.

Who funds the super PAC Leading the Future? Well, among their big donors are co-founders of OpenAI, Andreessen Horowitz and, wait for it, Palantir.

So why is a co-founder of Palantir, Joe Lonsdale, in this case,

funding a super PAC to try to destroy a candidate on the grounds that he once worked for Palantir? The reason is that Leading the Future is a super PAC dedicated to destroying anyone who might regulate the tech industry in general, or AI specifically, in a way these funders don't like. And Bores, a member of the New York State Assembly, co-authored and passed the RAISE Act, one of the first pieces of AI regulation passed in any major state.

There is a principle here that is much more important than any single congressional seat. You'll hear it, honestly, if you just listen to AI founders talk. They say they believe in it. Sam Altman, a co-founder of OpenAI, who it should be said has been horribly targeted in recent violent attacks by anti-AI individuals.

He was trying to cool down temperatures here, writing, "It is important that the democratic

process remains more powerful than companies."

It is important that the democratic process remains more powerful than companies. Altman is right. But it's his co-founder, Greg Brockman, who is one of the major donors to Leading the Future, who is trying to make sure that democratic process is subordinate to the companies, and is trying to do it by funding a super PAC that can unleash enough money to crush any legislators

who cross them. Bores, in general, has been a pretty effective legislator. In just over three years in the New York State Assembly, he's passed 30 bills and has been recognized by the Center for Effective Lawmaking as one of the most effective freshman legislators. But it's his ideas on regulating AI that particularly interest me.

In part because I think they make sense and are worth discussing: things like an AI dividend.

But in part because I just really do not want to live in the world that Leading the Future is trying to create: a world where the AI industry hoovers in enough money that it can destroy anyone who might regulate it. What's funny about all this is, you'll hear it.

Alex Bores is not an anti-AI kind of guy. I think he gets the technology pretty well. I think he's trying to balance its risks and its possibilities. But if you're looking for a pure AI-backlash candidate, he's not it. I think that tells you something about what Leading the Future, and super PACs and groups

that might emerge like it, are actually trying to do: stop anyone from legislating on AI. So if the democratic process is actually going to mean something here, ideas are going to have to speak louder than this kind of money. So I wanted to hear what Bores would actually do if given the chance.

As always, my email: ezrakleinshow@nytimes.com.

Alex Bores, welcome to the show. Thanks for having me. So I want to begin a bit in your early political memories. How did your politics begin? Well, it began with something that I wouldn't necessarily call politics; only in retrospect would I put that phrase on it. But it was with my parents' union fights. In second grade, my dad and his colleagues were locked out by Disney for fighting for better

health care.

Their contract had expired, and for over a year Disney wouldn't budge, and finally the workers

went on strike and in response, Disney locked them out for three months and cut off their

healthcare benefits, including my dad's friend who was about to start chemotherapy. And thankfully, the union stepped in and they paid for the treatment and he survived. But my dad would pick me up from second grade and bring me to the picket line. And that was my first experience of people working together for change. He would put me in front of the Disney store and we've all seen people walk past picket lines.

It's not hard to do. It's a lot harder to walk past an eight-year-old with a sign that says Disney is mean to my dad. And so that was my first lesson: both that health care needs to be universal, but also that the way we win is by working together.

That if you're one worker, you're one person, you're one anything advocating.

It's easy to get crushed. But if you have a union, you have an organization, you have a campaign, you have a movement. Well, then you stand a chance. What did your dad do for Disney? My dad worked on Monday Night Football at the time.

So he did graphics and videotape and instant replay. He worked in the trucks, eventually became a technical director. But he was one of the people actually sending out the signal before it hits your TV. And so you then study industrial and labor relations, at Cornell, and then get a computer science degree.

I'm curious about what those two very different disciplines taught you.

Well, they sound very different, but every day they seem to be more and more intertwined. At the School of Industrial and Labor Relations, I learned economic theory. I learned collective bargaining. I learned how to run campaigns and organizations in ways that actually can change power and win things. And I learned to stand up for working people, and to view a lot of interactions in the world through that lens. Let's be specific about that.

What did you learn about how to stand up for working people?

My freshman year, we ran a campaign against Nike. Cornell was sponsored by Nike; our athletic teams were sponsored by Nike. So I was part of a group called Cornell Students Against Sweatshops. It was affiliated with USAS, United Students Against Sweatshops. And they taught us how to build a campaign over time.

We learned how to be strategic. So you start with a clear demand. In this case, it was that Nike had laid off 1,800 workers in Honduras without giving them legally mandated severance pay. And we argued that the Cornell code of conduct required that Nike be responsible for their subcontractors' actions, that they make the workers whole. So we put that into a demand.

Then you build up over a period of educating. And so we'd have teach-ins, we'd have sort of ridiculous actions to grab attention. We did a work-out for workers' rights, where we were in the quad just playing '80s music and getting people asking, "Hey, what's going on?" "Oh, well, let me talk to you about what's going on in Honduras." And then you build up to more aggressive actions that require a reaction from the administration.

We ended up being successful in that campaign. Cornell decided it was going to cut its contracts.

And I think something like three weeks after Cornell made that announcement, Nike about-faced,

paid the workers all the money they were owed and gave them job training and health care for a year. So far you're telling me about how you learned to do activism in college. Which is interesting. Yeah, but I want to go a level deeper than that. You're doing industrial and labor relations.

Yeah, what is the deeper theory or thesis of the relationship between workers and corporations, between labor and capital, that you came out of that with? There's so much that's in contention between workers and capital. But in the best worlds, you're actually working together to grow the economy; workers are not out there to bankrupt the company, they want the company to grow.

And so there's fights over how you distribute the pie, but theoretically both want to grow that pie. And then there's really interesting relationships internationally. One of the things that I discovered was that for so many of the countries where we thought labor conditions were awful, the laws on the books were actually quite good.

The question was with enforcement. And if the home countries actually tried to do enforcement, the factories would just up and leave and go somewhere else. So the lever where maybe you can change that is in the countries that are buying most of the goods. And so we would apply pressure in the US about holding countries to the standards they had already

set up for their workers. So I feel like you're describing to me the education of a young radical here.

You're walking picket lines at eight, you're studying industrial and labor relations,

doing anti-corporate malfeasance campaigns, skeptical of globalization.

How do you end up at Palantir? Yeah. So I really wanted to be a lawyer, but every lawyer I spoke to told me not to be a lawyer. That was my experience, too.

They all said: take time off in between, make sure that's what you want to do.

And so I went to an economic litigation consulting firm called Cornerstone Research, where we were preparing expert witnesses for trial. And so we were doing economic modeling and playing with data, but I was interacting with

lawyers all the time. So I was building a skill set, but could see what they were doing.

And I found I really enjoyed the economic modeling; I really enjoyed playing with data. And added to that ideology, as I'm growing up, I'm a Democrat. I believe that government can and should be a force for good, but that also means we take on the burden of proving it. And so I was a young believer in, I probably wouldn't have put it in these terms back then, but expanding government capacity and making sure government's actually delivering.

And Palantir in 2014, right, in the Obama administration, was about: how can we expand government capacity while protecting privacy and civil liberties? And so at the time,

it felt like very much the natural fit. So I want to stay in this 2014 moment,

because this is a period when there is a lot of optimism: that technology is going to solve some very fundamental problems with democracy, that you're going to have all this civic tech, that the interfacing between citizens and the government is going to be much smoother, much better, that these companies are fundamentally good. Google doesn't want to be evil. Facebook wants to connect the world. Palantir wants to make

your data comprehensible. And I think there's also an underlying view that the answers to our

problems are out there somewhere in these masses of data, and if you can just make the whole thing legible, you could get the answers. And something sours pretty quickly, I'd say, after 2014. Like, that really feels like a different ideological moment than the one we're in. Tell me, what was wrong about that, or what would you add or change to my rendition of that optimism? A lot of that is true. The Palantir story that was told to

prospective employees, and Alex Karp would do this a lot, was that he most feared fascism; that he had just finished being a German philosophy student and he was most afraid of fascism developing. And fascism happens when government fails to provide for its citizens, and they start blaming someone else for it, and people then feed that hunger and that hatred. And he couldn't do anything about the latter, but he could do something about government

failing to deliver. And so the reason that he wanted to do Palantir was, after 9/11, after this real rise in a feeling of being unsafe: could we build the systems that would allow government to make people feel safe, but build them in such a way that was protecting privacy and civil liberties? That was the pitch. The fundamental idea was that we were there in many ways to stop fascism. And how did that work? Trump is elected in 2016, weirdly, with the aggressive

support of Peter Thiel, one of Palantir's early investors. I mean, I don't know, would you call Peter Thiel a Palantir co-founder? I think so. I think that's the phrase that is given.

But Alex Karp was very much fighting for Hillary at the time. And if you look at donations of employees at Palantir, they tell a very skewed story towards the Democrats as well. Yes, Silicon Valley is very Democratic in this period. Absolutely. Absolutely. You have a lot of Obama administration figures who can't go to Wall Street anymore; that's not kosher for a Democrat, but you can go to Silicon Valley. Yeah, but that election in 2016, and even more so his

reelection in 2024, is a real failure of that mission. And to now see leaders of the company, and Silicon Valley broadly, throwing their lot in with what I think is a fascist regime is a real, real disappointing switch. So you're at Palantir 2014 to 2019. You start, I think, as a data scientist; by the end, you're one of the people leading the relationship with the government. Yeah, I focused on the federal civilian side. Yeah. So what was that work? So that was work with the Department

of Justice, with the CDC to track epidemics, with Veterans Affairs to better staff their hospitals and give veterans the care they deserve and need, and with other agencies. How much is what we now think of as AI and generative AI starting to come into

the work you all are doing then? Not at all. And here's what I mean by that. Palantir was

aggressively anti-AI in that period. It believed that data integration was the true source of value and that AI was a magic layer that would be applied on top, and it was all marketing, and we were doing the real work that was getting data to come together. And can you describe what the difference is in those two? What is that integration versus whatever they thought AI was? Yeah, well, so AI in a very naive sense, I mean, we'll talk about it in other ways now, and this is

before agentic models and all of this. But AI is doing analysis of data. And before you can do the analysis of that data, it needs to be organized in a way that AI can make sense of it. But the actual thing that's difficult is organizing all your data together. That requires hard work. And there's no magic to do that yet. And the software plus engineers going on site

and doing a lot of that hard work to do the manual hookups. That was always going to be the true

source of value. So you're a Palantir across the end of the Obama administration and into the first

Trump administration. Yeah. Now, Palantir working with the government is a different animal depending on which government it's working with. Very much. How does that change? I was leading the work at the Loretta Lynch, Barack Obama DOJ, and then all of a sudden it's the Jeff Sessions, Donald Trump DOJ. And priorities changed pretty drastically. The work with the banks was probably wrapping up anyway just because of time. But clearly, there was no more interest in that work.

The contract that we had said we'd choose three mutually agreed-upon case types. And so I met with the new leadership after the transition, this is early 2017, and said, you know, what do you want to prioritize? What do you want to work on? And they said the opioid epidemic. Which is great; we definitely want to do that work. They said violent crime. Cool, as long as it's not a dog whistle; like, yeah, we'd love to work on that. And then they said civil immigration. And I said,

we're not touching that. That's not the work that we are building this for. And I was empowered as the lead of the project to do that. I had a contract that allowed me to, because it was three mutually agreed-upon case types. And while I was there, in the DOJ project, we didn't do any of that work. That's not how the decision went with every customer or in every project. So Palantir during this period does begin working on immigration with the Trump administration.

I never worked on any of those projects. And so I was never, like, cleared on it. But to the

best of my understanding, during that time it was not stopping the Trump administration from using it for immigration. I don't think there was building of features specifically for deportations, but I could be wrong about that. But even the fact that they weren't going to stop it from being used in that way got a number of employees, myself included, quite upset. You leave Palantir in 2019. Why? Separately from me, on a project that I never worked on, Palantir had signed a contract with

a department within ICE called HSI, Homeland Security Investigations. And that, during the Obama administration, was focused on anti-human trafficking and drug trafficking, sometimes counterfeiting; things that are not controversial and that everyone would support. And then when Trump comes in in 2017, they try to change the nature of that work. They try to get another part of ICE, called ERO, Enforcement and Removal Operations, the part that everyone thinks of as ICE, to get access

to the software and to use it for deportations. And there were a lot of conversations internally

of Palantir about what was actually happening. Not all employees could see that if we weren't

cleared on the project. And a fundamental question came up of: well, why not write into the contract those same protections that we have elsewhere, where we can say, don't use it for deportations? And eventually executives made clear to us that they were not going to do that, that they were going to renew the contract without putting in those guardrails, and so I made plans to quit. So there was a Bloomberg story that questioned this, clearly coming from somewhere inside Palantir.

And it says that there was, shortly before you left, I think it said five days before you left,

a warning from HR about sexually explicit comments you had made to a coworker. And then, separately, that when you did your exit interview, you said you were actually leaving because you were burnt out and there was too much travel. So I want to take these pieces. Was there a sexual harassment claim against you at Palantir, and is that why you left? No. And this came out of an attack from

executives of Palantir that are upset that I am pushing for AI regulation and have

called out Palantir's work in the past. As I told Bloomberg when they reached out, I had expressed

my concerns about the work with ICE internally. I had begun interviewing months and months before

I had an offer in hand. I then had retold a story of something that had happened to me on the job. Someone didn't like that retelling. I had talked to HR. HR had one conversation with me where I shared exactly what had happened, and that was the end of it. There was no file, no letter, none of the things that are claimed in that story. I dropped the matter immediately. You weren't disciplined inside the company or something? Nothing like that. And this seemed like what the Bloomberg

story said; I want to check it. The infraction was a story you told, you said, not something done with or towards a colleague. Correct. I mean, the story goes into it. Can I retell the story here? It was a paper goods manufacturer that was talking about uses of tissues. It sold tissues. The marketing department was talking about how tissues are used. I retold that example from the presentation, on how tissues were being used, as one of the odd things that

had happened while working at the company. And then the burnout-and-travel side of it: the argument there is that you're making this claim that you took a moral stand against the way it was being used, but actually you were just kind of tired of working there. As has been cited in multiple sources, multiple current Palantir employees have backed me up that they heard me talk about ICE and stand up and do all of that. I have no idea what notes they took from the exit interview. I asked

to see them. I was told by the Bloomberg reporter that she didn't actually have them, that this had just been told to her by the executives, so they could claim whatever they want on top of the notes that

again I never saw. I know what I had said before and during and that I had brought this up many

times. And a year after I left, Palantir emailed and called me, begging me to come back. Feels like if there had actually been a real thing there, they probably wouldn't have done that. So now, this is, you know, you just heard me be fairly critical about Palantir. I had been before as well. The executives there didn't take kindly to that. And the super PAC that's attacking me is against any regulation on AI. And this is just another desperate hit by them.

I have been amused that the super PAC which is attacking you, which is probably funded by Joe Lonsdale, a Palantir co-founder, has as one of its core attacks on you that you worked at Palantir. Correct. That's a pretty strong level of political shamelessness. I would agree. I mean, I would also say lying about an employee's record. But they are very terrified. They are very afraid of me in office. And beyond that, they've said publicly that they are

trying to make an example out of me, that they want to beat up on me so bad that when the idea of regulating AI comes up in the future, politicians run in the opposite direction. And so they're not primarily concerned with what is honorable or what is true. They are concerned with causing pain. So in 2022 you're elected to the New York State Assembly. In 2025 you passed the RAISE Act,

which gets us into the AI regulations you're alluding to. This is one of the first major pieces

of AI legislation passed by any state in the country. But before we get into what it does:

What was the philosophy behind it? When you were working on that bill, and I know you had co-sponsors on it, what were you all seeing and what were you all trying to achieve? We were seeing AI develop extremely rapidly, and the industry itself warning about what was coming. This is after the letter that was signed by so many executives saying that we should treat the risk of extinction from AI as seriously as global nuclear war, and promoting perhaps a pause. Many of them had signed voluntary

commitments with the Biden White House saying, you know, we are going to take certain safety

precautions and this is the first step towards binding federal regulation. And then we saw no binding

federal regulation come. And we also heard from companies themselves that they were okay with certain safety standards, but they're in a competitive marketplace. And that if they see their competitors starting to skimp on safety and cut corners, they would be forced to as well. So when you hear

that call, you say, okay, you should establish some baseline that people can't go below so that

there are some established safety standards that everyone is playing by. What's the baseline you tried to establish? There were a few provisions in there. One was that you had to have a safety plan, that you made it public and actually stuck to it, and that it largely followed best practices in the industry around how you were going to test the models for specific risks and what you would do with that information. That you had to report to the government critical

safety incidents which were specifically defined in the bill. If it goes wrong in these sorts of

ways, it may not have harmed anyone yet, but it could suggest something that's coming. You have to let

us know about it. And those provisions largely survived till the end. There were two others that were in the original that ended up getting cut out. One of them was that you can't release a model if it fails your own safety test. Basically designed for the way that tobacco companies operated, where they were the first to know that cigarettes caused cancer, but denied it publicly and continued to release their products. Or fossil fuel companies that knew oil caused climate change, but denied it.

We were saying: if you knew your model was particularly risky, you have to take action on that. And the last provision was third-party audits, saying that you can put up whatever standard you want, you can assert that you're going to follow it, but someone else should check your work. Not the government, just a different party should come in. The same way we have financial audits, the same way we have SOC 2 security audits, another party needs to look at it and say, yes,

you are following this. And presumably you're working on this bill in, what, 2024, 2025? Yeah, before it passes. How have your views on AI, the risks it poses, the questions it raises,

changed with the subsequent pace of model releases? I think things have happened much faster than

I thought they would. And I think our ability to pass legislation has moved much slower than I thought it would. And so that difference in speed, between how AI is advancing and how government reacts, is wider than I was expecting when I started on this process. How have you thought about the change in public opinion? Because it looks to me like we're seeing a pretty powerful AI backlash rising. You've got polls showing now that more Americans are worried about AI than are enthusiastic about it.

There's a lot of anti-data center energy playing out around the country. What have you made of how quickly the politics have sort of shifted? It's surprised me, both in how many people have focused on it, but also in how bipartisan it's remained. We all know about polarization, and most issues end up polarized. And this one hasn't, so far. And it has resisted that longer than I thought it would. That if you talk to voters, you see across Republicans, Democrats,

and independents pretty similar attitudes; across state legislators, pretty similar attitudes. Even in Congress, there's more bipartisanship than you would think. I mean, surveys regularly show

that about 10% of people want to put the genie back in the bottle and pretend it never existed.

And I empathize, but I don't think that's the way forward. 10% of people, represented by the super PAC Leading the Future, want to just let it rip. That's the super PAC that's attacking you. Yes. They want to just let it rip. They don't care how many people it hurts, just how fast it moves. And 80% of Americans want to see some benefits but see a lot of risk and think it's moving too fast and want to have some say in its development. The fact that it stayed so bipartisan has surprised

me. And also the fact that it's risen up in people's minds so much, and the pessimism around it, has surprised you. And we were talking earlier about the period when there was a lot of optimism

about tech, about software, about the internet. And I think you can really look from,

I mean, early computers, early internet, all the way pretty late into the social media era. You know, probably around Trump, I think, things begin to turn: Cambridge Analytica, algorithmic feeds. But that's a long time when these systems and technologies are present for people and there's a fundamental optimism about them. ChatGPT, I think, is when this really bursts into public consciousness. It's 2023. We're here in 2026 and the polling has already

turned negative. I mean, the week before we recorded this, Sam Altman was targeted in two separate violent attacks. There was a Molotov cocktail thrown into his home, which is awful, and another person shot at his door. I was a little shocked to see people celebrating these attacks online, saying, you know, where can we support the bail fund? Yeah. This has moved into fury and fear and pessimism really, really quickly. Yeah. Why do you think that is? Well,

there was a separate split in AI around capabilities. The debate used to be: is this real, or is it stochastic parrots? And usually even before that: is it just, you know, slop that is never going to actually replace a human? That it's autocomplete. Exactly, exactly.

So we had these debates on one dimension, which was, like, is it good for people or is it bad for people? And then there was another dimension of, like, how big an impact is it going to

have? And I think that debate's been collapsed. People are not skeptical of its power anymore,

or some are, but fewer and fewer each day. And so the intensity with which we're having that first debate has really ramped up. But I think it's also been that we saw what happened with social media. We saw what happened with these previous revolutions that were supposed to change everything for the better. And we've seen platforms established with great promise, and then over time, once they get power, really turn on their users. And so people are no longer willing to believe the story

that is told about a technology or platform always benefiting people. And you see this argument

from some of the AI founders, who say, well, it'll create material abundance for everyone. There'll be no more poverty. Everyone will have everything. And everyone's looking around saying, of course that's not what's going to happen. You're a private company. You're going to profit. You're going to keep it all for yourself. Like, how are we going to force you to share it? Sam Altman recently said it'll be like a utility. It's like, utilities are really highly regulated.

And so people are just not willing to believe that spin anymore. And yet they're seeing really quick changes in their lives. Jasmine Sun, the AI writer, just wrote this kind of interesting piece on AI populism. And I thought the way she defined it was interesting, and a little more subtle than you normally hear. She wrote: I define AI populism as a worldview in which AI is viewed not only as a normal technology, but as an elite political project to be resisted. And what she's

getting at there is that AI populism, I think, and the AI backlash tend to include two dimensions.

One is that this technology is being overhyped. The other, as it's often put to me in emails, is that it is being pushed down our throats. That it's not a thing people want; it is a thing being forced upon them. Now there's all this investment behind it, so the investment needs to be paid off, so the companies really have to do it. And that if you take the power seriously, you see it in a different way: that almost any version of an AI economy is going to be just a way of paying

off these huge investments; that we're not getting the technology we want, we are having a new paradigm forced upon us. How do you think about that? I think it's a beautiful description. What I hear from my neighbors is very much the feeling that this is moving so quickly that we don't have

control. And the American people so far have not had a say in it. So I think the first part of that

definition, the disbelief in its capabilities, that part is shrinking as part of the dialogue as we're seeing it do more and more. But the fact that it is being thrown at us and we currently don't have

control, I think, is what's motivated so many people to be thinking about AI. It has always struck

me, when I listen to the founders and leaders of the AI companies, that they are very specific on the harms and the gains are very general sounding. So you'll hear Dario Amodei talking about 50% of entry-level white-collar workers seeing their jobs automated away. There actually are Waymos on the streets now; you can see that those could take jobs from taxi drivers and Uber drivers. There has been all this talk about existential risk, the sense that you could build something smart enough to

disempower human beings. There's a lot of specificity on replacing coders and then you get these very vague. It's going to help with drug development. It's going to solve material scarcity. And I think if you're a normal person being offered this technology that might make sure your 13-year-old son has a AI porn bot before he is a real girlfriend and you might lose your job and maybe there's some chance of human race doesn't maintain control over its own future. Why would he want to pause

on that? Absolutely. When you're seeing the harms day by day, whether it's your kid, the pedagogy at schools hasn't been updated and some people still think that a signing take

home essays teaches critical thinking. It doesn't anymore. And on top of that you see chat bots

and you see some of the truly horrific stories that have happened to teenagers. And maybe you go to your job and your company now has a hiring phrase. They're not laying people off yet, but they're not doing their usual hiring and you're worried about what's coming from that. Are you all going to be necessary in the future? And then you see your utility bill go up and maybe a data center is built near you. Maybe it wasn't, but you're starting to think about what's causing that.

And then on top of that, you see people saying, oh yeah, and it might kill everyone. These are the news stories that are coming in, and you're maybe not seeing the benefits. This is not a story of a technology that is just bad, but one that is moving really, really quickly, where a few people are controlling the direction, and many people have lost confidence in government and its ability to steer it.

It becomes a question of whether democratic institutions can govern this technology before it governs us.

I think pretty clearly, no. Well, I'm running a campaign to change that.

I guess we'll talk about that. But I think being worried about how fast these systems are moving, and having anywhere near a sense at all of how fast the US government now moves, should make one worried.

Absolutely.

And so one thing you do see is proposals emerging to try to slow AI down by functionally choking off some of the inputs. So there's the Bernie Sanders-AOC bill to just have a data center moratorium. There's some bipartisan interest in this; Ron DeSantis in Florida has a bill that would be very restrictive on data center construction. What do you think about that data center moratorium?

The Bernie Sanders-AOC proposal is a moratorium until we pass real regulation that protects people. I agree with that. I think we should pass real regulation today.

Do you agree with the data center moratorium until we do?

Well, I think what they are calling for is that we need the real regulation. They don't think that bill is going to pass in this split Congress. They are setting the terms of the debate, which is: why are we going forward with this until we've done the real work? And I think that's the right question to ask. If I could wave a magic wand and pass any bill I wanted, it wouldn't be the moratorium; it would be the regulations that the moratorium is calling for. But putting that out there as a negotiating tactic, I think, is meeting the moment and the scale.

I mean, Bernie talks about the potential benefits of AI and also talks about the risks and downsides; I think he's been the clearest communicator on it. But you're right, it's a bipartisan issue. It is not one that is left-right.

In your framework for AI regulation, you have a somewhat different approach to data centers. You seem to see them as a kind of opportunity. An opportunity for what?

They could be an opportunity.

And this is, again, where you need the regulation first. It's not, oh yeah, this will work in the future. And given the political power of these companies, I would be very skeptical of them doing it unless we pass regulation with teeth. But the idea is that our electric grid is so outdated and so in need of updates, throughout the country but even here in New York. And it also slows down the renewable energy transition, because if you want to have solar on homes, you need a grid that is more responsive to generation happening in a distributed manner, and it's not right now. And we've tried to upgrade the grid. We need funds to do it. And the only options on the table are the government pays for it, which is taxpayers, you and I, or it adds to our utility bills, which is ratepayers. Again, you and I. And here comes an industry with, for all intents and purposes, unlimited private capital, that is really willing to pay for time. They are desperate for speed in building these out. And so what I'm saying is you can set the incentives such that if you want to build a data center and you're doing X percentage renewable, and it should be a very high percentage, then you will pay not just for the connection to the grid and all the infrastructure that's needed, but also pay on top of that a fee to make the grid more resilient and help the upgrades elsewhere, so that you can truly make the grid more green and more reliable. Well, then we'll move you to the front of the interconnection queue. And by doing that, we'll push your competitors to the back of the interconnection queue. And you set up an incentive to actually build things in a way that benefits us.

Is it possible to do, given the way our buildouts and infrastructure really work? The reason I've developed some cynicism here is that I remember being promised the smart grid of the future in the 2009 American Recovery and Reinvestment Act.

Yeah.

And we didn't quite get that. I don't think anybody said at the end of that that our grid was now smart. And then we passed the Inflation Reduction Act and the bipartisan infrastructure bill, which between the two of them had a lot of thoughts about energy generation, and that and other things were meant to work on the grid.

And I'm not saying there were no upgrades made to the grid anywhere, but I am saying that I keep getting promised gigantic grid overhauls. Yep. And then being told a couple of years later, whoops, somehow our grid is still this archaic mess where the biggest problem for getting new green energy online is that we can't connect it.

Your cynicism is warranted, 100 percent, and I dare say you wrote a whole book on ways that we could make that easier to do. But maybe the difference here is you have private capital coming up to do it. And the whole proposal is being precise on ways that we can expedite, and by expediting, shifting the ones that are dirty and not paying their way to the back of the line.

So as I understand the theory underneath the data center approach, it's really that if all this money is going to flood into AI, and AI is going to be at least in part built on the collective commons of the entire culture that came before it, then we should benefit. That it's not just that Sam Altman created some magic algorithm. Sam Altman and OpenAI and Anthropic and Grok and so on inhaled the entire internet, ate up my books and the books of everybody else around, and trained these systems on them. There's even an idea in there that I think tracks this theory more closely than other things I've seen, which is an AI dividend. Talk me through that.

The AI dividend starts from thinking about how we can give Americans a real stake in the AI economy, and it starts with humility, that we don't know exactly how it's going to go. We don't know how disruptive it's going to be, but right now is the time to plan for the potential outcomes that could come. And there's always been this conversation, right, in my econ classes at ILR, that every technology revolution has always created more jobs than it's destroyed.

Arguable. Maybe. But this is the first time someone's building a technology and stating that the goal is to replace all human labor. It is to be better than humans at everything, and the metric by which we understand how good the technology is getting is how well it is capable of mimicking different forms of human labor and then exceeding them.

Exactly right. I mean, you are creating a replacement-for-human-labor machine.

Exactly. And it's the first time that has been tried, and it doesn't mean it will succeed, but it certainly means government needs to take it seriously. And so the idea of it is: what if we end up in that world where all human labor is replaced, or just a significant portion of it is displaced? How do you have a society that is actually functioning then? And you have to start talking about a universal basic income. And the idea is to make sure that we are setting up the structures now that would lead to Americans being protected if we end up in that future. And I have a lot of thoughts about how we can prevent that future, change it, et cetera, but the idea of it is almost that insurance policy. And you could fund it via boring things like a wealth tax. You could fund it via a token tax, so putting a tax on the usage of AI, maybe limited to commercial uses where you are replacing human labor, or not. And that's a fine policy as long as investment in capital always leads to more jobs, which has been economic theory for hundreds of years, but maybe AI is shifting that. And if it's shifting that, we need to shift our tax policy to be taxing AI and discounting hiring humans, and a token tax starts to get at that. But then the other funding mechanism that I talk about for the AI dividend is actually taking warrants in these companies, large, out-of-the-money warrants, where you say, you know, if the value of the AI companies were to go up an enormous amount, then the government has the right to buy shares at a set price. They basically only pay off if one or multiple of the companies are wildly successful, basically if they are replacing all human labor. And if you institute that now, then VCs celebrate it and say you're participating in the upside. If you try to implement it after one of them is successful, then you're seizing the means of production and seizing wealth. And so my idea is you go down all of these paths and you start to find ways to have the revenue to actually fund a universal basic income, or investments in job retraining, or just a broader safety net, but do it in ways that automatically scale and adjust and kick in at the speed of AI.
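The warrant mechanism described here can be made concrete with a toy payoff calculation. This is only an illustrative sketch: the strike price, share count, and prices below are invented for the example, not figures from the interview or any actual proposal.

```python
def warrant_payoff(share_price: float, strike: float, shares: int) -> float:
    # Each warrant grants the right to buy one share at the strike price.
    # Out of the money (share price at or below the strike), it pays zero;
    # above the strike, the holder captures the difference on every share.
    return max(0.0, share_price - strike) * shares

# Hypothetical numbers: warrants struck at 5x a $100 share price,
# so they pay nothing unless the company grows enormously.
strike = 500.0
shares = 1_000_000

print(warrant_payoff(120.0, strike, shares))   # modest growth: 0.0
print(warrant_payoff(2000.0, strike, shares))  # wildly successful: 1500000000.0
```

This is the asymmetry the argument turns on: under ordinary growth the warrants expire worthless and cost the companies nothing, and they pay out to the public only in the scenario where the companies have been wildly successful.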

Here's a concern I've always had about this set of policies, or this set of answers to the problem of AI and job displacement. So I've been very, very near the universal basic income debate for a long time. My wife, Annie Lowrey, wrote a book on universal basic income called Give People Money. I was close with Dylan Matthews, who did a lot of writing on universal basic income. And the trick of universal basic income to me, which maybe you support on its own merits, which is fine, is that under any plausible scenario of AI job displacement, it is happening to some people and not all people. And I can see you looking at me skeptically, but I don't see a world in which one day we wake up and everybody's jobs are gone. It's going to start with some people's jobs.

It'll start with some people's jobs.

So if I thought it was going to be everybody's jobs all at once, I wouldn't worry about it, because then we would just figure out a policy to compensate everyone. But imagine you're a Teamster and you drive a truck, and you're making 80,000, 120,000 dollars a year. And the autonomous truck companies put you and your fellow Teamsters out of work. And, don't worry, we've actually passed universal basic income, and you're now getting $37,000 from your universal basic income.

100 percent.

And I'm getting $37,000 from the universal basic income, and I'm still here in my podcasting studio. You got screwed. I got a check.

What worries me the most is, I don't think we're going to a world of full automation. But even if you believed we were, there is a transition. And some people are going to really lose out, and other people are going to be unaffected or are going to gain. And I don't hear policy ideas that seem to know what to do with the people who are losing out along the way, right? The people who are actually getting displaced. Not the world where everybody is displaced, but the world where, if you're graduating with a marketing degree, you are three times more likely to be unemployed than you were before. Or coders are suddenly seeing a contraction in demand for their services, but some coders are making a ton of money.

Yeah.

Like, how do you think about the differential here?

Universal basic income by itself is insufficient. And I would love to understand why you think we're not headed to a world of full automation, because it's tough for me to see where that stops once we start on it. But we can come back to that. There will be a period of transition either way; I don't think it'll be all at once. So the idea is not just, oh yeah, we're all going to have this basic income, because you're right, people will be screwed by that. The idea is to do a number of things simultaneously, which include changing the tax code, so that we're actually charging for the use of AI and discounting the use of labor. And that's a way to protect jobs and slow down the transition itself. It's investments, not just in universal basic income, but in job retraining programs and in structures that help people go into new careers.

Now, granted, they have a really bad track record.

That's my concern.

A really bad track record. But it doesn't mean you shouldn't still be investing in community colleges and finding ways to improve them as much as possible. But you're right: to just say, oh, we're going to give a universal basic income, is not enough. We have to think about other ways of easing that transition, which could include, when you have people who have a permit or training or a license that takes a number of years to acquire, maybe you still require that for the transition, for five years or ten years, so people can turn that training into equity. And that's another way that they have a stake in the AI economy.

We're going to need a lot of policy solutions. That's why the framework I put out has 43 different ideas in it.

But let's get very specific on this, and I want to come back to the question of full automation. New York City is facing a near-term question here, which is Waymo, the autonomous vehicle company. They have had permits to do the sort of mapping and testing here needed to eventually roll out Waymo in New York City, the way it's been rolled out in San Francisco and Phoenix and other places. And that set of permits has expired. And, you know, Zohran Mamdani has been, I would say, very non-committal about whether or not he wants to extend them. He has talked about a city government that is committed to delivering for the workers who keep the city running, and those workers also include our taxi drivers. So here you have this very near question, right? Waymo is a technological advance. They are nice to ride in. They are safer, from all the data we have. They also will, if you roll them out en masse in the coming years, displace taxi drivers, Uber drivers, Lyft drivers. How do you balance that?

It's a tough and ongoing question that the speed of the transition only makes worse. There are ways of, again, maybe you require a medallion for Waymos for a set amount of time, and that's what enables some bit of transition. But then you're only protecting the medallion owners and not the drivers, right? But that's maybe a piece of what that transition looks like, especially for those that have gone into a huge amount of debt to buy that medallion. You think about job retraining and other places people can go into. You think about a broader safety net. But we don't have a full policy solution for any sort of disruption that happens this quickly. It just hasn't been developed, and we need people in government who are willing to take that problem seriously and look for solutions that aren't just stop or go, because this technology is coming.

But so what's your version of the solution for Waymo? Because Waymo is interesting to me, or autonomous vehicles, right? You can think of many different companies trying to do this. Even more so than, I think, at least the public conversation around generative AI, where the gains, which we can talk about, have sometimes been hard to see in the way people talk about it, driverless cars really do have gains. A world of driverless cars is safer. There are a lot of people who have mobility issues right now, or discrimination issues in getting picked up, and all kinds of things where they could really be helped. They are just kind of a fascinating technology; you know, you're not going to have people falling asleep and then hitting somebody on the road. Slowing them down has a cost: a cost in just the convenience people might experience, but also a cost in safety, a cost potentially in lives saved. And speeding them up has a cost in displacement. So you said we need politicians willing to take this seriously. You're a politician, and you're looking to take this seriously. What do you do?

Well, I've said a few different options and things that we can do together.

So the Waymos keep going, is that the idea? You'll charge Waymos for medallions, and the money goes into the coffers. Who gets that money?

I think you can specifically be focused on job retraining, and on people who are displaced, and you can try to share the benefits in that way. That is a portion of the answer that we have to get to. But the real question is: should we be investing in Waymos or in public transit? Like, we have a great system to move people around, and we actually need investment in improving that. So, I took a Waymo for the first time in LA, and it was a light rain by New York City standards, but I think a thunderstorm by LA standards. And I got in the Waymo, and it went 20 feet, and it pulled over to the side of the road and just said, dialing support. Didn't say why it was calling, et cetera. And I found out later, it turns out, almost every Waymo in the city had done it at the same time, because it couldn't handle rain, and so support timed out. And I was sitting there for 12 minutes, my first Waymo ever, about to call a new Uber or Lyft or something, when finally support came through, and a person was like, oh yes, seems like you're stuck; I'll drive you out of there. And so I have questions about how they function in the rain in New York City, and I have questions about when the backup is human drivers; it seems like it's another form of outsourcing as well. So, yes, in the long term, theoretically, will autonomous vehicles be safer than humans in most cases? Yes. But to say that we are definitely there right now, I wouldn't say we're there necessarily right now. It's only in the conditions in which they're willing to do them, which are quite limited. Like, you can't take a Waymo from San Francisco to Phoenix; you take one inside San Francisco or Phoenix. So all of that is to say, I think this hypothetical that they're ready to go and be safer right now is just not right.

But I think they're safer in the places they drive. And the reason I'm pushing on this is not because I'm pro-Waymo or anti-Waymo. It's that there is a question that public officials are facing right now about how quickly to move forward into that world. And, you know, Zohran Mamdani could extend the permits and accelerate Waymo coming to New York City, or he could drag his feet and keep it out of New York City.

And then there are some ideas in the middle about maybe you could have Waymo paying high prices, but even to the extent you're doing that, what you're doing is pulling Waymo in.

I think people sometimes don't quite want to face up to the fact that there is a yes-or-no quality to some of these issues. And, you know, in the long run, do you want to protect the jobs of taxi drivers, or do you want to have autonomous vehicles operating inside of your city? That is a kind of yes-or-no question.

I think, as Keynes says, in the long run we're all dead. There's a question of speed, not yes or no.

And I think most people here are, from zero to 100, somewhere between 40 and 60, and we're being described as yes or no. I think it's not ready right now for the environment of New York City. It will be ready sometime in the future. And like with a lot of AI, we need to be thoughtful, in that transition, about how it benefits people and how it hurts them.

I think it is almost easier to imagine ways of handling the financial consequences of AI for people, even though I don't actually think we've figured that out, than the consequences for their dignity, for their purpose. People train for jobs. A job is part of their identity. And then all of a sudden it's getting taken from them, and you're going to say, hey, taxi worker, over here at the community college you can retrain to be a home health care aide. There's something here that we're going to have to balance: the economic efficiency that pushes forward, against the basic deal we offer people in this country, in this economy, which is that, you know, you study for something, you learn how to do a job, you apprentice, and we value you for doing that. And then we're supposed to treat that as having value. I feel like we don't talk about this dignity dimension enough, so I'm curious how you think about it.

For so long, humans have been defined by their job, and that's become a piece of the dignity, that you, in this worldview, have purpose, have value, because of the thing that you do. And that's been ingrained in people for a while. And if we keep that mindset, then UBI is an extremely disappointing answer to it. And I think, for lots of reasons, it's not the full solution. The world that is painted by the AI optimists is, we're going to get to this post-work era where people no longer derive their purpose from work. I'm skeptical.

We'll be like the British gentry.

Yeah, I'm skeptical.

I'm skeptical. But you believe in full automation. So then you think we're going to dystopia on our current path?

Yeah. But I think we have the chance to change it.

When you throw the ball down the field mentally, if you're skeptical, what is the good outcome here? What is the good outcome of: we have automated away, which you seem to think is very possible, a very large percentage of the economy's jobs, and yet what we have is something better than at least where we've been or where we are?

It would have to be at the point where it's not just that your basic material needs are met, but the standard of living is higher than it is now, where you can go about your day and be in a better place than you are right now. And this is an imperfect analogy, AI is different in all kinds of ways, but if you look 100 years ago, the average American worked 60 hours a week and had a much lower standard of living. Now the average American works 40 hours a week and has a higher one. We could get to one where we work 20 hours or 10 hours and have a higher one yet. But we were able to make that transition because workers had power, because Americans had political power, because we were able to shape that technology to work for us, either directly through legislation or by organizing unions and doing it indirectly at the workplace. If this transition happens too quickly and we lose that political power, it doesn't just happen.

So I want to talk about something where we are already seeing the effects of this, and you talk about it very early in your plan, which is kids. One of my theories of legislating, having covered a lot of this, is that sometimes a crucial thing in building legislative capacity is to just find places where there is enough consensus to legislate a bit, so people learn about the issue and learn how to legislate on it. You know, there are all kinds of experiments consenting adults can run on themselves. I am pretty worried about the situation with AIs and kids, and we really don't know what it's going to mean for kids to have relationships with AIs, to grow up where they've got AI friends, and so on. What is your approach to kids and generative AI?

I agree with you. I think kids in some ways need more protection. We don't know a lot of the impacts that AI will have.

where it can benefit kids. I mean I can imagine a world where having a personalized tutor at exactly

your level in each subject and able to communicate with you in exactly the way you like to learn as a supplement to what you're getting from teachers in the classroom and your parents is a helpful thing but teachers and parents need a view into all of the interaction. So we need strong data protection and I think broadly a lot of these products even when you think if some teenagers should be allowed on or not need to be thoughtful on the mental health impacts. This is a really

scary period and we've seen the big stories about chat bots but then we've also seen like chat GPT integrated into teddy bears and things that just feel really unnecessary. So what's

in your plan on this? What do you actually want to do? So age verification for certain aspects

of these interactions, the mental health checking as I said, engaging in updating pedagogy, making sure that teachers and parents have a view into any interaction that goes with AI, broad protection on training of kids data and data privacy aspects as well. And yes, we need to

prepare kids for the jobs of the future. I don't think you should shut off access to AI people should

be exposed to these tools as they are in high school in college and getting there but being really thoughtful about what those interactions are. When you say updating pedagogy, what do you how do you want to update it? Well, so you can still assign essays but if you just do a take home essay people are just putting it into chat GPT. Everyone knows this but I've done a few things where high school students come up to Albany and when the teacher leaves the room I say how many of

you's chat GPT to write an essay every hand goes up. So should we be requiring essays written by hands? Should we require them written in Google Docs or program like it so you can actually watch keystrokes being entered, right? Just updating for the tools that are up there and making sure the old way of teaching is still teaching. You know, we're, I'm hiring for something right now. And it has really disoriented me that cover letters are now completely useless. You know, I've been

involved in hiring for hundreds of positions now, given my time at Vox, and cover letters

are always quite important to me as a way of assessing how it may be somebody who's qualifications

were less obvious for the role, but you could see in the way they wrote an unusual mind at work. And now I'm not saying that's completely impossible, right? You can still write a great cover letter, although increasingly it's getting a little, but it is getting harder and harder to know what you're looking at. Like are you looking at somebody who is, you know, great mind at work or you're looking at somebody who's side-borging it with an AI system and should maybe

that's fine because that's world and somebody who's very, you know, fast-out using them is actually showing they have a skill that others don't, but on the other hand actually want to know how the person thinks, not how good they are prompting to completely knock out our ability to evaluate somebody's writing skills. Can I ask, yeah, any of your current employees, obviously, but people you've interviewed, have you noticed a loss of just skill in writing? I haven't noticed it yet,

but I would say I have not hired since AI got good enough. I've definitely noticed it. And I think

people underestimate this because they're used to the quirks of poorly prompted chat GPT writing and it is incredibly incredibly easy to spot. Yeah, but if you know how to use the systems and you're better at it and you're using, you know, more advanced forms of judgmentity or clutter Gemini, you can't tell. But I think when you ask people the right things, it's just not, I think there's been a few years now where that skill is not being taught. And you have pointed out that writing

is how many people strengthen their ideas, that the work that goes into that is part of the work of thinking. And I have noticed as people have, again, not speaking to anyone I've hired, but people have applied or others that I think there has been a decrease in people's ability to write well and express their thoughts clearly and do the editing work. So one thing in your AI framework, that I was interesting was that you want to expand the government's capacity on AI. What does that mean? It means

making sure that we have the expertise within government to understand this technology and help contribute in a positive way to its development. And this has been horribly under-invested. And so

we're not taking this technology as seriously as we need to. This is the first major technology

That has developed basically without any government progress, any government ...

I'll go or didn't invent the internet, but DARPA did develop the intranet that became the intranet. And even the space race was obviously primarily government-led. AI was completely developed in the private sector. I mean, some grants on research, but it was done outside the structures of government. We need to be hiring in the expertise within government if we are going to help to govern and lead to good outcomes here. Can we do that with the way government

hires? I've run into this question before talking to people inside the federal government, inside state governments. Government hiring for very good reasons has structured pay scales and worries

about horizontal equity and a million things that make sense when you're very worried about corruption

and patronage and favoritism. The market for top AI talent is insane, right? What medible pay you, what Google alphabet will pay you, what open AI, what anthropical pay you, what they can pay you. I don't think any of them are going to pay me, but yeah, not you specifically, but one. There's a question of not cutting funding for the parts of government trying to do this, but there's also the question of how do you just make sure the government has the staffing

talent to keep up in a market this ought? We absolutely should make it easier for government to hire experts and to pay more in order to compete in that way. I mean, we've found a way to

let states directly fund more hiring. It's usually the football coach in any state. I'd rather

it be a real AI expert that's working to make this future actually work for Americans. I want to get

you to expand on this a bit, because I think it I think is we're hearing a lot of reports of

anthropics mythos, which I have not had access to it, so I don't know how good it is, really, at hacking every computer system on the planet, but they are saying it is very capable of that. And I think you really quickly, if we're going to have AI companies creating what are functionally cyber superweapons, the ability of the government to actually oversee these systems becomes pretty paramount very quickly. I think anthropic is an interesting place and is posing a lot of

governance challenges in opposite directions at the same time. On the one hand, you can't just

have a private company creating cyber superweapons and hope for the best. On the other hand, we just watched the controversy between Anthropic and the Department of Defense, now the Department of War. When you're dealing with the Trump administration, do you really want this kind of quasi-nationalization

of labs? I think we're seeing simultaneously that it is uncomfortable having these systems as

private as they are. It is uncomfortable recognizing that if the government gets its hands on them, they could be used for whatever a particular government's purposes might be. And so it's left a lot of us who care about regulation and who care about governance in an awkward spot. It is deeply uncomfortable, because we are talking about such extreme power, and it's a question of where that power lies. If you take as a given that there will be a superintelligence developed,

which I don't see any reason why there won't be at this point, then of course it's an uncomfortable question about where that sits, because you're talking about something that is smarter than any human effort. That is a real power question. And this is a real question that needs to be settled by policy, that needs to be settled by law, because if you're just leaving it up to the whims of an executive branch where there are no restrictions on it, or to private companies where there's no law,

both of those feel deeply uncomfortable. This is why we need Congress to step up to the plate and actually decide how this division of power should happen. So in the answers you've given me, two things have become clear in the background of the way you think about this. One,

you seem to believe we're going to get to full automation. Not that it's tomorrow,

but you reacted with a lot of skepticism when I said I didn't think we would get there. I think there's a significant likelihood, and we should take it seriously. And superintelligence is also a real possibility: that we're not necessarily going to stop at human level, or even a bit beyond your average worker, and that we could soon be dealing with something smarter. I think a lot of people would hear that and say, so why not stop it? Why do we want a superintelligence,

the machine god that will put us all out of work, that we have no guarantee we will know how to control? If this is your set of views, why move forward, as opposed to trying to throw your body on the train tracks? Well, I don't think right now metaphorically throwing your body on the train

tracks will make a strong difference.

We have made a lot more progress on the alignment problem, but I do think we're getting into really

risky territory. What you need, and one of the sections of the plan is about diplomacy, is international action. We should be engaging with other countries, we should be engaging with China. We should be building universal verification systems for what is happening, both at the chip level, where you can look at the geography and how chips are being used, and in the models themselves. We should be trying to lower the temperature on there being an arms race. So, yeah, I am worried.

If I had a magic wand, I would slow things down until we had better guarantees about what we were

stepping into and where we were going.

Did you know that India is the biggest adopter of crypto globally?

Or that Estonia offers online voting in all its elections? I'm Katrin Bennhold, host of The World, a new daily newsletter from The New York Times. I spent 20 years reporting from more than a dozen countries, and it occurred to me one day: what kind of newsletter would I like to read? I don't live in the U.S. I want something that's written especially for a global audience, something that helps me understand what's going on and why it matters.

And ideally, something that doesn't just get me down. The World is just that. Each weekday morning, we bring you the biggest stories, dispatches from my colleagues on the ground, and a few delightful surprises, with video too. The World, a newsletter from The New York Times. Sign up now at nytimes.com/theworld to get it in your inbox each weekday morning.

So now I want to flip the valence of this conversation. We've been talking as I think most of the

AI conversation does, about what I would call AI harm reduction: if this technology is moving forward, how do we make sure it causes as little harm as possible? But for people to want this technology to move forward, for it to even be conceptually a good idea for it to move forward, I think the case has to be better than that. And we were talking earlier about the absence of a positive vision for AI. These companies have to make back,

you know, in the coming years, a lot of investment. And as best I can tell, the business model they've come up with is replacing white-collar workers and, to some degree, subscription fees from people asking ChatGPT to look at their moles. What I've been wondering about for some time is

all these promises of AI for drug development, AI for energy innovation. What would it look like

to have a public agenda that actually tried to make that real, that actually tried to make it such that more AI development went in those directions, and that we got more out of it? So, I mean, I've heard you talk before about your interest in AI drug development. I want to hear your thinking, even if it's not a full policy agenda, on what it would mean to have a positive agenda for AI, where the public sector is shaping this toward social good,

as opposed to simply private profit. We would build out an initiative like one we've done in New York called Empire AI, where the state government bought a large cluster of GPUs, committed to continuing to build that out, and gave our public universities access to it. So they could run experiments at a much cheaper rate, and it made a public investment on the research front to go after lots of things, including AI alignment and AI safety. But we could be

directing grants to that specific research, and we could be building the infrastructure in government to make that cheaper. I absolutely believe we should be trying to use AI for good, and New York was the first state to do this. Now there's a following, but the federal government has the resources to really make a deep investment here. And yeah, for a while, AI's benefits have been riding on the story of AlphaFold and solving protein folding,

which was an incredible advance and has sped up drug discovery, but there could be more like that

out there. There are definitely more like that out there. And if that's all we got,

then we've been sold a bill of goods here. So I believe in

making use of this technology for good and directing research in that way. That doesn't, by the way,

solve alignment problems. It could be that you want it to do really good things, and then, in pursuing that, it goes off in a whole other direction. But yes, that is a good use of

public investment. So let's focus on drug development for a minute, because I think it's in

some ways the clearest case. I mean, GLP-1s, for instance, are a revolution right now, but they're actually a quite old drug. They've been around for decades. And all of a sudden, we have all of these new candidates either to develop or to test. Let's say you imagine what certainly seems possible, which is that in the next, call it, three to five years, AI systems begin generating

a stream of molecules worth investigating, either new molecules or existing molecules that

the systems, scouring the data, realize might have other uses. But if you know anything about drug development, you have choke points all across that process. There's what the FDA can do; there's getting everything from rats to monkeys to humans for trials. A world in which we suddenly had more good candidates would be a world where the choke points became something very different. This gets a little bit more toward the way you were thinking,

I think, about the grid. Which is, if we imagine AI will create all this pressure for investment and all this demand, how do you use that pressure to open up parts of the system that have been clogged, that have fallen into disrepair? How would you make it possible for your economy to actually benefit from AI? Which requires operating not just in the world of probabilistic predictions, but in the world of

things: of steel, of cement, of human beings who are willing to sign up for a drug trial.

Well, that's why there's more to my platform than just AI. I'm giving you a good opportunity to

talk about it here. But we have to cut red tape and cut regulations. One of the ways I have used AI already: I put every statute in New York State through an LLM and asked it to identify laws that are out of date, that require paper when we could do something digitally, a bunch of ways of checking whether we have requirements that are just getting in the way of getting things done. Then folks on my team went through what it flagged, the policy cruft that develops over time, and put together a

60-page bill for this session, just pulling out a bunch of these old requirements that are getting in the way of doing things. We can do a similar thing with regulations, not just with statutes: where have we developed practices that are now in the way of moving forward, in drug discovery or more broadly? Yeah, we need to change policies that stop government from getting things done, and sometimes that's technology doing the thing more efficiently. Sometimes that's in using
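To make the statute-scan idea concrete: before handing statute text to an LLM for review, you would typically pre-filter for obvious paper-era language. Here is a minimal, hypothetical sketch of that pre-filter; the section IDs, statute snippets and patterns are invented for illustration and are not drawn from New York law.

```python
# Sketch: pre-screening statute text for out-of-date "paper era" requirements
# before sending candidates to an LLM for closer review.
# All section IDs, snippets and patterns below are hypothetical.
import re

OUTDATED_PATTERNS = [
    r"\bby mail\b",
    r"\bfacsimile\b|\bfax\b",
    r"\bpaper (?:forms?|cop(?:y|ies))\b",
    r"\bcertified mail\b",
]

def flag_candidates(statutes: dict[str, str]) -> list[str]:
    """Return section IDs whose text matches any outdated-requirement pattern."""
    flagged = []
    for section_id, text in statutes.items():
        if any(re.search(p, text, re.IGNORECASE) for p in OUTDATED_PATTERNS):
            flagged.append(section_id)
    return flagged

statutes = {
    "TAX-171": "Returns shall be submitted on paper forms and sent by certified mail.",
    "GEN-12": "Agencies may publish notices on their websites.",
    "LIC-44": "Applications must be delivered by facsimile to the department.",
}
print(flag_candidates(statutes))  # → ['TAX-171', 'LIC-44']
```

In a real pipeline, the flagged sections would then go to an LLM (and to staff) to judge whether the requirement is genuinely obsolete; the keyword pass just keeps the volume manageable.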

the technology or not, but finding ways to identify choke points and ways to alleviate them. Or, since we're talking during tax week: a lot of us waited until the end to do our taxes this week. It was already possible for the IRS to pre-fill a tax form for most Americans who have pretty straightforward taxes, and lobbying has made that very hard, and the Trump administration has made that harder. But it would be, as a technical matter, fundamentally trivial for there to be

through the IRS, a tax-preparation AI system that every American had access to, where they uploaded their forms, it cross-checked them with IRS data, and it did their taxes for them in seconds, saving people a lot of time and energy. The capacity exists to actually give every American an AI accountant under the auspices of the IRS. If we don't do it, it's not because we can't. You know, there's a real question of whether the lobbyists would allow people to do that.
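Mechanically, the pre-filled return being described is largely a cross-check of filer-supplied figures against records the IRS already holds from employers. A toy sketch of that reconciliation step, with entirely hypothetical employers and amounts:

```python
# Toy sketch of the cross-check step in a pre-filled tax system:
# reconcile filer-uploaded wage figures against what the agency
# already has on record. All names and figures are hypothetical.

def reconcile(irs_records: dict[str, int], uploaded: dict[str, int]) -> dict:
    """Compare per-employer wage totals; flag mismatches for review."""
    matches, mismatches = {}, {}
    for employer, amount in irs_records.items():
        claimed = uploaded.get(employer)
        if claimed == amount:
            matches[employer] = amount
        else:
            mismatches[employer] = {"on_file": amount, "uploaded": claimed}
    return {"matches": matches, "mismatches": mismatches}

irs_records = {"Acme Corp": 62_000, "Side Gig LLC": 8_500}
uploaded = {"Acme Corp": 62_000, "Side Gig LLC": 8_200}
result = reconcile(irs_records, uploaded)
print(result["mismatches"])  # → {'Side Gig LLC': {'on_file': 8500, 'uploaded': 8200}}
```

The hard part, as the conversation notes, is not this logic; it is whether the underlying records are accurate in the first place.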

But the relationship between people and the state could really be transformed, if government

chose to transform it. 100 percent. And I think we need to make that a priority. So I have a bill

I've been pushing for a few years to make it easier for different agencies within New York City to share data that you give them for the purpose of signing you up for benefits, so that if they sign you up for one benefit, you can automatically be signed up for another one. That right now is

restricted, and we should change that. Obviously, New York City invested something like $100 million in building

a portal, but actually what we need are changes on the back end, to the laws that

allow agencies to share that data. I'll go a step further: I was speaking with the tax department in New York

State and advocating: okay, Free File makes it easy for you, you don't need other software,

but why can't we just do it for New Yorkers? A lot of the information is already with the New York State tax department. The answer I got back is that so much of the information they have is actually wrong.

They had this need to just improve the data internally first. And I said okay, why don't you just

find the entries that are wrong, or build systems to help fix them? And they're like, we're working on that, but give us five years. That's where we want to get, so that we can automate it. So maybe it does come back around to data integration and just having the data correct. It might no longer be that the technical aspects of how to do your taxes are the limitation, but whether the underlying data we're feeding in is accurate enough for it. I guess the

principle I'm trying to get at here is that, to the extent you don't believe we're going to pause, and I'm not saying you don't, but one doesn't, right? That we are going to move forward at some

pace here, which seems likely. I think actually benefiting from AI as a public

is a harder challenge than people have given it credit for. I don't think that just because the systems get better, there is necessarily a public benefit. There could be individual benefits and individual harms. But if we want drug discovery to accelerate, we need to open up the systems that would allow drug discovery to move faster. You know, if we want the relationship between people and the state to get cleaner, we need to actually create the conditions for it and

overhaul very, very difficult, archaic, multi-layered, ossified government databases. And it's interesting, because I do think right now, throughout the private sector, you see companies, with greater and lesser degrees of success, trying to figure out what it means to rebuild themselves to use AI, everything from how teams are structured to how their data works. The government, you know, because it doesn't get competed out of

business by new entrants, is working on much older systems, and it's very, very hard to rebuild them.

But I don't know. I think for AI to be worth it, you're going to need a lot more of this kind of

investment, at a much higher level of ambition. And right now we don't even seem to be able to legislate on the harms very effectively, so I'm not confused about why we are focusing there. But I do worry a bit about it, because there's a world where we've done some reasonable harm-reduction legislation and gotten very little benefit from the technology. And that's a world where we've pushed AI toward being a worker-replacement machine, as opposed to having a public vision for

what we want from it. I 100 percent agree. And this is the hard work of governing. These are maybe not the easy places where we can build the legislative muscle; I would hope so, and that's probably around kids. But these are the places where we have to work together to change that. Part of it will be on AI and setting up incentives, and part of it will be building the infrastructure that allows that to happen. We're talking about a lot of pretty high

concepts here. One of my first bills in the state legislature was to help the state get on cloud computing, because it mostly uses mainframes. It mostly uses mainframes

in 2023? Yes. Yes. The speaker of the assembly codes in Fortran, and I always joke that his

retirement plan is going to be fixing all the state systems, because they still run on Fortran. There's just work that needs to be done on modernizing, to allow us to take advantage of the benefits, and that will require both direct investment and a lot of legislating to encourage that direction. So one of the reasons I wanted to have this conversation with you is that you've ended up, whether you wanted to or not, a bit of a test case for whether all of this is going to work, since you're running for Congress.

And there is, as I mentioned before, the super PAC funded by co-founders of Palantir, OpenAI and Andreessen Horowitz. It has spent a million dollars opposing your campaign so far.

Just to expand on that: so far. Oh, two and a half now. And they've suggested they might spend up to $10 million.

You know, at the same time, I've looked at some of their statements. Greg Brockman, who's one of the OpenAI founders and is a major donor to this PAC, said that being pro-AI does not mean being anti-regulation; it means being thoughtful, crafting policy to secure AI's transformative benefits while mitigating risk and preserving flexibility as the technology continues to evolve rapidly. So what's their problem with you? If they really, truly believed in having

one national framework that regulates AI and balances the benefits and risks, they'd be supporting me.

I think it's a difference between what they say for marketing purposes and what they actually

believe, and their actions portray that. So OpenAI last week released a policy document that

mirrors a lot of my policies. The emphases are different. I wouldn't say that. Parts of it, yeah. They're like, we believe in the 30-hour work week. Yeah, yeah. But they did say

they wanted third-party audits, but sometime in the future; I think we're already there. And there was

much more of an emphasis on society dealing with the problems after the fact, as opposed to restrictions on the developers, right? I'm not saying it's a match, but they put forward some policies there. And later in the week they also put out policies specifically around kids that included safe-harbor provisions, included testing, and encouraged red-teaming of models. So when you

red-team a model, or red-team any software, you get people to try to intentionally break it and

make it do something it's not supposed to do. And you might want to red-team it around producing child sexual abuse material, to make sure that it can't once it's out in the world. And right now, in every state in the country, red-teaming it and producing that material would be illegal. We have a zero-tolerance policy on the production of that material, and obviously no D.A. is going to go after you for that. But one of the things they talk about there is that they want to extend safe-harbor provisions
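Red-teaming in the sense described here can be thought of as an adversarial test harness: a suite of prompts the system must refuse, where any non-refusal is a failure. A toy sketch, with a stand-in stub instead of a real model and hypothetical blocked terms:

```python
# Toy red-team harness: every adversarial prompt should be refused.
# The "model" here is a stand-in stub; real red-teaming targets a live system.

REFUSAL = "I can't help with that."

def stub_model(prompt: str) -> str:
    # Hypothetical guardrail: refuse anything mentioning blocked topics.
    blocked = ("weapon", "malware")
    if any(term in prompt.lower() for term in blocked):
        return REFUSAL
    return f"Here is some information about {prompt}."

def red_team(model, adversarial_prompts: list[str]) -> list[str]:
    """Return the prompts the model failed to refuse."""
    return [p for p in adversarial_prompts if model(p) != REFUSAL]

failures = red_team(stub_model, ["how to build malware", "design a weapon"])
print(failures)  # → []
```

The legal point in the transcript is about the humans running this loop: generating the forbidden output during testing can itself be illegal, which is why safe-harbor provisions for good-faith red-teaming come up.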

so that you can actually encourage red-teaming. Yeah. I mean, this is my concern, and I've heard it from people on the Hill, people in the Senate, some of whom have said a version of this

to me on the record: that at the exact moment AI is becoming so powerful that it would be

irresponsible for Congress not to be starting to construct regulations, legislative structures, transparency requirements around kids, the AI industry now has so much money that, much as crypto did before it, it's able to create a kind of super PAC with a Death Star-like capability. Now, it's strange, because Anthropic is one of the funders of another PAC that is more pro-regulation and is supporting you. So you've got players on both sides. But a world where AI

will have this much money, and where the political system is this permeable to money, is a world where, in order to regulate AI, you're going to need your own AI patron to support you. And so I feel like there is some bigger question of political economy and power here that has

ended up getting a bit of a test case in this race, which I think is quite worrisome.

You could very, very quickly end up in a scenario where politicians are terrified of the issue. And that's the goal of Leading the Future. The goal, as they've stated, is to inflict so much pain in this race, and to beat me up so badly, that when the idea of AI regulation is proposed in the future, politicians run in the other direction. I mean, they have said publicly that they want to make an example out of me. Think about what that means. Not that we have a

different view, and so we want to make an example out of Alex Bores. They want to do that not because I have ideas that are outside the mainstream. You know, when I proposed my framework, I got praise from those on the left, and the chief futurist of OpenAI retweeted it. They're coming after me because I successfully passed a bill. Frameworks? There are lots of frameworks;

those are cheap. Who's going to put political capital forward and get something actually done?

And they tried to prevent any states from moving forward by putting preemption language in legislation, and that failed. So they instead got an executive order from Donald Trump to target states that want to regulate AI and to try to exact punishment: it would cut off funding, it would sue the states, and it targeted the RAISE Act, along with a few other bills throughout the country. So why are they coming after me? Because I might actually get a bill passed.

What, and this goes back a little bit in our conversation, is actually in the RAISE Act that they fight? Because as somebody who cares about AI regulation, and I think it's a good start, what actually got enacted there is a pretty soft bill. It is still the strongest AI safety bill in the country, and I'm embarrassed by that fact. When they come after it, when they're trying to get it changed, what are they so upset about? It's that there's any regulation whatsoever. That really

is the challenge: that there is any regulation, that they have to play by any rules, is such

anathema to them. And they don't have to win forever, just for a

cycle or two. With the speed at which AI is developing, the amount of political power, let alone

capital, that they will be able to deploy in the future will be unbounded. We already have

elected officials who are terrified to take up this cause, despite how popular it is, because they see all the money on the other side and don't think the risk is worth it. I'm running for Congress. I talk to every member of Congress I can, and I hear from them in quiet conversations: yeah, we're watching this race. We want to see if this is an issue you can win on by standing with people, or if the money just swamps everything. And the lesson that will be learned by

members of Congress, if the super PAC wins, is to run the other way. Don't actually touch it. Maybe you can give a speech on it, maybe you can go on a podcast about it, but don't try to pass the bill, because they will end your career. That's a good place to end. So, always our final question:

What are three books you'd recommend to the audience? The first is my favorite book of all time

and I know you have thoughts on this book, but it's A Theory of Justice by John Rawls. I think it does the best job of setting up a broad framework of rights for humans while also understanding when inequalities could be justified, and I think it's the best place to start for political philosophy. I know you've tried it a few times. I will point out that in the intro,

he says, here is the third of the book that you have to read to get the basics of it, and here's the

half of the book you have to read to really deeply understand it, and the rest is for the academics. So I'd encourage you to give it another try. A Theory of Justice by John Rawls. The second one is World Eaters by Catherine Bracy, which is marketed as this deeply anti-VC book but is actually written by a tech insider and takes a much more nuanced approach to the incentives that venture capital

sets up, which are always for growth, growth, growth, don't think about the social consequences,

and I'll add that VCs are always pushing for a company that will scale no matter what. I saw this happen with my wife, who's a YC founder and built a business that probably could have been fine on its own, but it had taken venture investment and it was scale or die. And so a lot of the

negative externalities came from that. I think it's a really timely read as we are building

out AI. And the last one, I think, is a little more whimsical, because, back to our conversation about the skill of writing, it's Bird by Bird by Anne Lamott, which is just a delightful read and a good reminder for any procrastinators to just break down your work and do it bird by bird; that's where the title comes from. It's so well written that it leads by example in its instructions on the art of writing. And I encourage people, especially when our skill of writing is being degraded, to

be intentional in that practice and to read that book. Alex Bores, thank you very much. Thanks for having me. This episode of "The Ezra Klein Show" was produced by Annie Galvin. Fact-checking by Lori Seagull. Our recording engineer is Aman Sahota. Our senior audio engineer is Jeff Geld, with additional mixing by Isaac Jones and Aman Sahota. Our executive producer is Claire Gordon.

The show's production team also includes Rollin Hu, Marie Cascione, Marina King, Jack McCordick, Kristin Lin, Emma Kehlbeck and Jan Kobal. Original music by Pat McCusker. Audience strategy by Kristina Samulewski and Shannon Busta. The director of New York Times Opinion Audio is Annie-Rose Strasser.
