[MUSIC PLAYING]
[MUSIC PLAYING] Articles of incorporation, under which OpenAI was set up as a nonprofit,
make very clear that they have a legally binding fiduciary duty to do certain things, and what we chart in the piece is this kind of years-long process of the company kind of converting itself out from under those restrictions. It's the Lawfare Podcast.
I'm Kate Klonick, senior editor at Lawfare, with Ronan Farrow and Andrew Marantz, writers at The New Yorker and authors of a huge 16,000-word piece about Sam Altman, published last week in The New Yorker.
Can he be trusted?
Sam Altman is one figure.
He's a consequential one, but he's especially consequential as a window through which to view a wider dynamic that critics in the piece describe as a race to the bottom. Today, we're talking with Ronan and Andrew
about the legal and national security implications of a number of things that are revealed in their article, and discussing what some of the policy options might be going forward. So I want to start by taking a moment
for Betteridge's law of headlines, which is that if you ask a question in a headline, as you do, the answer is always no. So in theory, this could be a very quick interview: no, we can't trust Sam Altman. But there is actually so much
more here than just a profile, so to speak, of Sam Altman. You guys spent over a year reporting this, conducting a hundred-plus interviews, and it really brings the receipts on a lot of information while putting it into a full narrative.
And so there are a couple of things that I think have been
under-reported as news and might have legal ramifications. So I wanted to unpack those, because it is Lawfare.
First, the general volume of lies,
or the lack of public disclosure to the corporate board, and the possibility of, if not legal fraud, certainly fraud in the colloquial sense, on the public. Second, I want to talk about alignment and trust and safety issues, if we have some time.
And then third, the national security implications, obviously, that are in the reveals that you bring up. And so let's start with the fraud. Not necessarily in a Theranos style; it's not like you have a product that doesn't exist here, right?
But what you do have is a product that Altman has been putting out there with OpenAI that continuously makes promises about a level of safety and a commitment to safety, as you expose over and over again in the piece, that he doesn't make good on, or backs away from.
And there are comparisons in your piece to SBF or to Elizabeth Holmes. So what is your takeaway, and what were you hearing, about potential public accountability for the way Altman was saying one thing
and doing another, both in terms of product safety and then also in terms of conversations with the regulators? - Yeah, I mean, there are many layers of this. And maybe I'll just sort of take a couple
and then Ronan, you can take whichever ones I don't get to.
But I think I should just say really plainly:
We are not alleging, in the piece or here, that we know that OpenAI or Sam Altman broke this or that law at this or that time, right? And the reason I bring that up is that this is sort of why a lot of this stuff
has stayed in a gray realm. The accusations, as you say, of borderline fraud or potential fraud are many in the piece, and it's an interesting mix, in that what you don't have is the sort of smoking-gun thing
where, you know, the reason Altman was fired in 2023 is that he was caught red-handed embezzling funds; it's not that kind of allegation. What you have is this kind of very slow buildup and accumulation of facts, such that they're very hard
to put together in one place, which is why this took so long. And since you point out the fact-checking, and since you have experience with it: we literally did have the best fact checkers in the world
on this, and we really did get it so bulletproof that it's kind of hard to contest. But to the question of which frauds rise
to the top, or allegations of fraud:
one of them is, we talked to a lot of economic experts
who talk about, you know, what they call circular deals
in the industry. And this is not unique to OpenAI; this is common to a lot of these so-called hyperscalers. You know, there are really just a few of these companies building this incredibly highly leveraged, risky product that requires massive, unprecedented amounts
of capital expenditure in order to build the infrastructure to make the next model. So even if you are extremely bullish on the economic potential of the product, you could still think this is a bubble
and in fact, Sam Altman at times says this is a bubble because even if the product is, as you say, you know, it's not a figment of the imagination, it's not like a tulip craze. It could still be a cause of this frothy investment
and you know, you can get into the details of how when OpenAI and Nvidia strike these deals, for example, they're kind of both investing in future products that don't really exist yet and that depreciate very quickly.
And so there are all these sort of microeconomic allegations of what people call borderline fraud, or circular deals. And then, to your question about the safety stuff: you know, I think as a baseline, it's very easy for people to look at a tech company
and say, oh, are we so naive that we're shocked by a tech company saying they won't do, you know, bad thing X and then they do it, right?
And I think this is a bit of a different case,
where, you know, the articles of incorporation under which OpenAI was set up as a nonprofit make very clear that they have a legally binding fiduciary duty to do certain things and what we chart in the piece is this kind of years-long process
of the company converting itself out from under those restrictions, such that now they're one of the biggest for-profit companies in the world. That is not something you can do without angering a lot of people,
including people who you recruited, and who took pay cuts to work at your company. - I'd also point out, on the sort of fraud question, and I want to be clear, we're not wearing our lawyer hats here,
and I'm not going to talk about fraud as a matter of law. But when you talk, in a colloquial sense, about things that feel materially deceptive and may have real legal ramifications of one kind or another,
it's really important to note the way in which
this company was behaving amidst this backdrop that Andrew just described, right?
You have promises that don't always line up
with what's being delivered; you have that on a scale that critics in this piece say is beyond even the baseline of a Silicon Valley founded on that kind of hype model; you have material assurances to board members
about real, concrete safety-testing requirements that are just turning out not to be true. And then you have a superstructure of handling of this that also raises question marks, according to many of the stakeholders around it
and outside legal experts that we spoke to. So one great example of this that hasn't been previously reported: part of the mechanics of Sam Altman coming back, after board members fired him several years ago for this alleged pattern that we document of apparent
deception and manipulation, is that he got rid of some of the board members that had moved against him. But as a condition of their exit, they demanded an outside review. And he was reluctant; we report that people at the time
recall Altman saying, I don't want any review; any review, just by its existence, regardless of its conclusions, could make me look guilty. An understandable concern.
But ultimately after a lot of tense negotiation,
he acceded to that demand. And the way that it went down was that he got rid of two key board members who had moved to fire him. Then, in close consultation with their major outside financial backer, which was Microsoft,
he basically selected two new board members. Larry Summers, which is a name we could talk about; at the time, he was a pick designed to confer legitimacy, because he had been a cabinet member and he had been the president of Harvard. He's since stepped down because of his role
in the Epstein files. But he was a big cheese who was brought in alongside Bret Taylor, who had been at Facebook and was also viewed as a steady hand. But we uncover how, during a previous round of Bret being
considered for that board, there had actually been people who were concerned that he was too close to Altman to be independent. So you have these two people that Altman selects, pretty closely; he's in all the conversations,
basically proposing these names.
And then you have a process where they go to a very tony, legit law firm, WilmerHale, to do an outside investigation.
The interesting thing is that Wilmer's legitimacy
for these kinds of investigations
comes from a lot of transparency.
In many of the prominent cases where they've done it,
Enron, WorldCom, these are cases where they publicly released voluminous details of investigations. In this case, we report that there were a ton of people around this investigation.
I think we cite six people close to it, who really felt it was designed to limit transparency and, as one put it, to reach a foregone conclusion. That is the perception of these critics, and of course we have the lawyers involved
defending this. They say, no, it was comprehensive, it was independent. But there were a lot of people around this who felt that they were not interested in the core questions
of integrity that were underlying the firing. It seems they did deal with them, because people wouldn't stop raising them; I want to be careful about this, it's not that they totally scoped it away from that.
But ultimately, it really seems like
what outside investors who wanted Sam back were looking for
was: did he sexually assault someone, did he embezzle? Much more bright-line things. And with respect to the actual findings, we actually have someone involved saying, well, you know, on the honesty and integrity question,
the review, and I'm only lightly paraphrasing here, did not find that Altman was a George Washington's cherry tree of integrity. So you have that underlying content of a review. And then the really extraordinary thing, in my view,
which is: yes, private companies sometimes keep these things out of writing, and it's worth noting that there are plenty of legal analysts who say that's a red flag, a way to prevent liability and avoid scrutiny.
When you get to a case like this, where it's a high-profile scandal that engulfed Silicon Valley, and where all of the stakeholders around it, senior executives at this company, forget the public, expected to see findings,
they, on the advice of those two new board members who had been selected in close consultation with Sam, and their personal attorneys (the Wilmer lawyers obviously were also in on this decision, but this particular private advice fed in),
kept it out of writing. It was only oral briefings. And look, the piece is very sober about appraising all the arguments some of the lawyers make that this is fine and appropriate.
We actually have one of those new board members who oversaw it, Taylor, saying on the record there was no need for a written report. So they're acknowledging this now. And there are a lot of people around this who,
in the piece, say that defeats the entire purpose it was supposed to achieve. They released only about 800 words, essentially a press release on the website, saying there had been a breakdown in trust.
Nobody knew what the hell had happened. So I hope this can bridge some of that divide of misunderstanding. And this brings us full circle to your question: under Delaware corporate law, there are, like, Section 220 obligations, for any nerds
that are watching and know the mechanics of this. If this company IPOs,
I think one reason why you can imagine the pushback
and the comment-seeking and fact-checking process on this point was very intense with the parties involved, is that there is real concern. We have board members telling us, well, if there were these deficiencies,
there may need to be another review. You have a law firm that does gigantic business on the basis of the legitimacy of these kinds of reports. And you have a legal system that entitles future shareholders to go back as they often do in these kinds of scandals
where things are insufficiently documented and demand the internal underlying records. And judges have often ruled in favor of those demands. So that is something of a potential powder keg, I think in the view of a lot of people around it.
- I mean, I read this and that was exactly what I was thinking. I was like, this had to go through so many lawyers
to be able to basically put this stuff into writing
because, for exactly the reason that you state, what Andrew talked about is exactly correct. Some of this is Silicon Valley business as usual. Like lying about your ping-pong score; who doesn't do that, right?
Some of this is just, you know, salesmanship; no one's litigating that. The stuff that matters is the actual rubber-meets-road of putting these ideals into a corporate structure, finding board members with fiduciary duties to those ideals,
and then kind of backtracking and reverse-engineering the entire nature of your company and everything that investors had bought in on, while simultaneously actually deceiving your board. I mean, I was kind of baffled and thought that,
I mean, it was, again, this is a Lawfare podcast, so we have the right audience for it.
It was a nerdy kind of point.
But, like, the idea that Wilmer did not release these details,
the idea that they gave these oral briefings; like, how bizarre, why give briefings at all? And we really sort of stumbled into the full extent of this. Like, we had Sam on the record saying, oh, you know, I don't know about it being just an oral briefing
to those two new board members that I helped pick, you know;
I believe very strongly it was given
to all of the subsequent board members who joined in the aftermath of this. I'm paraphrasing, but you can find the exact quote in the piece. And then, you know, we looked into it, and that's simply not true.
And we have other people around this saying: that is a lie.
And really, it was just this extremely limited,
deliberately undocumented briefing process over the course of it. And, you know, Wilmer is also very careful about saying that their findings were not exonerative; their role was just, all they'll say is, as summarized in that press release:
based on whatever they found, those two board members Sam helped pick decided it was appropriate to have him continue. So it's all very cagey, it's all very deliberately undocumented. And, as you say, in the view of many around this, it really defeats
the purpose of bestowing the legitimacy that you would get from a proper investigation, I think it's a great parable about the need for transparency in these kinds of corporate situations, especially with these vast existential and safety stakes for all of us.
Right, this isn't even normal business. This was a 501(c)(3). You know, it actually is a disservice to Sam Altman that the investigation was handled this way.
Because that's how you wind up with a 16,000-word New Yorker
investigative piece years later. And there are a lot of legal authorities that talk about this very question in this very way: if you keep things out of writing for a short-term gain, ostensibly to protect privilege and, you know, to avoid as much liability as possible,
it has these longer-term costs, because people can speculate that there were much worse things that they found. So I, again, hope that we've done something to, A, restore a little bit of the transparency there about what the underlying complaints were, and, B, send out this message loud and clear that there are these costs that companies
and law firms doing this kind of work should reckon with, I think, in a more acute way. Andrew, do you have anything to add? You do have some instances of, like, for example,
Greg Brockman writing in a digital diary, like, I would like to make a billion
dollars, or something. So you do have a few people sort of writing down the thing that they... Yeah, and also Greg Brockman, in a diary, essentially writing, you know, wondering whether it's a lie that they're saying they're a nonprofit when they're turning around and maybe becoming a B-corp,
which they were planning to do very early on. It's truly remarkable. And the fact that all of these guys keep such voluminous journals, this piece is so full of journaling, I know. It's insane. I was like, who is telling these people to keep a diary?
I know. Do they think that someone is going to find their notes someday, or something, and wrap them up, like they're going to be part of this genius historical record?
I mean, as you know, they absolutely do think that.
And that's why, you know, this is not... I mean, obviously, this is a piece
about a person, you know, sort of being used as a lens onto these structural things. But I don't think anyone should walk away from this piece thinking, like, oh, Sam Altman shouldn't be the AGI dictator, like, Elon should or Demis should or Greg Brockman should, right? Like, obviously, I think the structural takeaway here is closer to, like,
who is vesting these dudes with this amount of power? How could that possibly end well? And yeah, as you say, a lot of people, I think, sometimes take the easy way out, sort of, mentally, from this. People who are dismayed by the accelerating power of these technologies sort of say,
either it's a parlor trick, or it's stochastic parrots, or this is all hype and an attempt at regulatory capture. And I think, among many things, what that misses is: you can think that, sort of, sitting at home and watching from the sidelines. But the people involved in this do not think that.
They think they are playing in a civilization-level competition, and their actions back that up. Yeah, there is a huge piece of this, and we'll get to it in a little bit, which is just that it's happening. Like, it doesn't matter whether you think it's real or not.
It is like, there is enough money and enough buy-in, that this is just becoming the reality that we're swimming in. And it doesn't matter if they are, like, stochastic parrots. If there's a trillion dollars invested in it,
Then we are all going to be run by stochastic parrots.
And if the autonomous military murder drone is run by a stochastic parrot, like, that's still a problem. Right. And so it does hit me. And, Andrew, I know that you have spent a lot of time, you and I both have spent a lot of time, on the speech platforms,
kind of looking at social media. And it just feels, to a large extent, exactly like the conversation we were having in the late 2010s and early 2020s, and it feels like AI is just speedrunning these issues.
Like, all of a sudden, we're going to have these incredible, incredible mega-corps.
These transnational companies that transcend state power, right?
Running these things, and there's no one to curb that. I think that we should just kind of skip to the country plan, which is the idea that, I guess, Page Hedley, who came in, a former nonprofit public-interest lawyer, and she was going to be OpenAI's policy and ethics adviser, and she made a pitch that OpenAI
could basically avert a catastrophic arms race by building a coalition of AI labs that would eventually coordinate with an international body, akin to NATO, to ensure that the technology was deployed safely. And you have, in your piece, Greg Brockman, who just could not grok this.
He could not understand why someone would engage in this type of new governance, in this type of multi-state collaboration, in this type of
do-the-right-thing ethos. He was, like: where is the money?
Like, how do we make money in that? And then begins this thing that you call the country plan, which is essentially: why not pitch selling this technology to other countries, as a counter to China and Russia? Why not go abroad? Yeah. What could possibly go wrong?
And then, obviously, the rest of the piece is this incredibly detailed conversation about how Sam's already in the process of doing that, of selling the technology to the Gulf States. And so, I don't know, I guess we'll skip to the big question, which is: is there any way to stop this if you have this kind of coalition of the willing
between these giant, moneyed countries, and the rule of law falling away around our feet?
And, like, at the very least, you kind of
are hoping that the IPO gets canceled. But I have to say, I finished your piece and I was like: in another world, in another time, this would have canceled the IPO. And I'm like, is this even going to matter? This is, for me, the bigger point. You know, Sam Altman is one figure. He's a consequential one, but he's especially consequential
as a window through which to view a wider dynamic that critics in the piece describe as a race to the bottom. And that's a kind of confluence, right, of what you described: legislative and regulatory guardrails on private industry, and particularly big tech, falling away more widely; an anti-regulation bent in American politics under the current regime; and a backdrop of money in politics that increasingly allows Silicon Valley to really have
its hands on all of the levers of policy making and power. If you're running for office right now in the United States, you are contending with a flood of PAC money that is specifically focused on
quashing AI regulation. Greg Brockman, Altman's second in command, has contributed to one of
these major Trump-friendly anti-regulation PACs. So, there is just a massive amount of machinery that is pushing away from safeguarding us from any recklessness, any lack of trustworthiness of the type that we write about in the piece, and the tragedy of it is these companies and individuals are the ones best positioned to clock what the risks are, and they keep telling us the risks are grave, right? You know, autonomous weapons in war zones, you know, the potential for mass surveillance,
the devastating misinformation potential of this technology in electoral contexts, all of the kind of more individual mental-health stakes, where you have all of these wrongful-death suits against OpenAI, alleging that ChatGPT contributed to suicides and, potentially, a murder; the list goes on and on. And yet there is just very little framework to deal with any of it. And even the structures these companies themselves set up, right? OpenAI being
this safety-focused nonprofit, having a board that could fire its CEO if they thought he was seriously lying about material things: that all fell away when Sam Altman came back. Sam Altman
is not the only one participating in these broader dynamics, but very often open AI has been a first
mover, and I think we both viewed the story as one that just tells the bigger story as well,
You know, we can talk about some of the kind of shortfalls in policy-making, in terms
of regulation and legislation that get at a whole range of the issues that we've already raised.
Some of them are being kind of trialed elsewhere in the world. Some of them have come up in proposed state regulation, though that too faces an uphill battle. We talk about that a little bit in the piece: how there's also a lot of machinery to get rid of any state efforts.
But I think I was eager to have this conversation because that is the part of it that I feel is
most gravely urgent: that we all need to marshal whatever resources we have as Americans to get at these broader shortfalls of accountability for companies that really do, as one
person said of the Sam question in this piece, have people with their finger on the button
in a very consequential way. And, you know, forget the more acute example of Sam and his alleged issues with honesty. You look at the, in their own eyes anyway, good guys at Anthropic, right? They've got breaches right, left, and center. They're also weakening their safety commitments. So I think the point is: we are placing all our trust in unaccountable private-sector actors. And I fear that we're not going to wake up to the consequences until something really
devastating happens. Yeah, there are also major public-sector actors; they just tend to be, like,
Sheikh Tahnoun on his yacht in the UAE. So I should say, like, two things on this, just to piggyback
on what you were saying. You know, it's often sort of painted as naive, kind of: why are these sci-fi nerds even talking about all of these risks? And for one thing, these mega-dystopian sci-fi risks are being discussed because people like Sam Altman told us to be afraid of them, right? So this is not something that
Ronan and I invented one morning to scare readers with, and we're very, I think, sober about it
in the piece. The people who founded these companies told us to be afraid of this. And moreover, again, there are people who think that's just a move to attempt regulatory capture, or to gain attention for the next round of investment, and there may well be some of that. But, you know, you bring up people like Page Hedley; these are not all competitors who work for Anthropic, or who work for Google, or who are trying to scope out the next round of
investment funding. I don't know, maybe Page will have an IPO before the end of the year, and I'll be proven not cynical enough. But, like, there really were many, many people, among the hundred-something people we spoke to, and we were very careful to not just be laundering competitor gossip through our piece. Like, there are a lot of people who really are concerned whistleblowers now. Their fears may be proved right or wrong. And,
unfortunately, we're going to live to see whether they're right or wrong. But it's not just all competitor hype and, you know, the normal stuff of capitalism.
Yeah, so I want to say that that's another thing that I absolutely value about this piece, which is that it is super nuanced. You talk to people on the factory floor of these companies. I have a list, actually, of all of the different ranges of perspectives that you
got in this piece. One of the things that I think would have been a disservice, and made this piece
less valuable, is if you had just focused on the Game of Thrones aspects of these kinds of companies, the C-suite, the people who are trying to be the narcissists and, like, run the world,
who are writing their diaries so that one day people will study them and talk about them.
But one of my really big takeaways for my time in Silicon Valley and my time inside companies
is that the majority of people, outside the top hundred people in a company, are there because they think that their job matters and that they're trying to do the right thing. You talk about how, in late 2022, four computer scientists published a paper motivated in part by concerns about quote-unquote deceptive alignment, in which sufficiently advanced models might pretend to behave well during testing and then, once deployed, pursue their own goals. And this moment actually compels
Altman to create, like, a superalignment project, where he purports to care about safety and is going to set up this huge team and recruit for it, and then, of course, it just falls away. And the announcement actually says, specifically, we need to create this to prevent the extinction of humanity. Right? Yeah. So again, to Andrew's point, it's not us introducing
those stakes. I think that's a great example; there are many others in the piece. I was struck by the
thing you just said, about the gap between the kind of C-suite leadership, who are sort of writing down their own history for the books, and then, you know, the working-level people, where I share your impression. Again and again, Andrew and I were encountering people who really came to this company, this nonprofit at the time that they joined, for the early ones, because they believed that there was a need for the thing that Sam Altman was pitching, which was something
that was not just governed by profit and growth, and that would stand really defiantly against that; that had a corporate structure uniquely designed to prioritize safety. And very often those
people at a working level are much less heard. And, you know, that's one of the most important
equities in this reporting: to carry the concerns of these researchers, which continue to this day. And some of the lower-profile, safety-minded people who had concerns actually raised them, or attempted to raise them, within the corporate structures in formal ways. We have a lot of emails of complaints from safety leads saying, you know, the company is going off the rails on its mission; essentially, sort of, whistleblower material
that we surface. And we even talk about a case where, in terms of the corporate transformation into a more for-profit mode, there's a moment where there's a board vote on that conversion, and there's a board member who appears to vote against it, according to the materials and sources that we have, and who argued that the way this was being done was not appropriate, that the nonprofit was being severely undervalued, and said: I can't do this in good faith.
And then it's memory-holed. An attorney says, point-blank, if we record this dissent, it might be a flag to investigate further the legitimacy of this new structure. And the vote then gets recorded as an abstention, apparently without this board member's consent. Now, there's a disputed factual record here; we carry OpenAI's pushback; they say several employees recall it being a deliberate abstention. But I found that to be a fascinating case. You know, we don't,
as Andrew rightly pointed out, come to legal conclusions here. You know, I'm a lawyer, but I'm not
your lawyer, OpenAI, and I'm not going to come to conclusions on it. But I think it's worth raising,
because it has meaningful legal ramifications, whatever the conclusion might be. It is certainly something that raises questions about potential falsification-of-business-records charges under state law; you know, that would be a misdemeanor in most jurisdictions. But more practically, the lawyer for the board made exactly the right point, which is that the vote was consequential: it authorized a corporate restructuring worth hundreds of billions of dollars, and a tainted vote
could ultimately create grounds to challenge the legitimacy of that conversion, which is really
relevant to pending nonprofit litigation, the Musk suit, and to IPO due diligence. So in all of these cases, to use your frame, very often the little guy, the less-profiled people, sound alarms in one way or another, and then, to use your term again, sometimes it does seem like there's some memory-holing. I just don't think the math on it is smart, even from a
pragmatic standpoint, for the people who are trying to make the complaints go away. I think it
creates more problems. And I really do want to talk, before we part ways, about the failures of the structures around this company, and the regulatory and legislative failures that just don't create any recourse. To use some of these examples, there should be more robust federal AI
whistleblower-protection legislation. You know, it's crazy at this point, when you have people at these companies
themselves saying this is the most dangerous technology ever, to not have the kind of regulatory
regime that we have for, you know, pharmaceuticals, or for food even, and the whistleblower protections associated with that. There should be something modeled on Sarbanes-Oxley for AI whistleblowers. People were very fearful to talk to us for this story, and people who are sounding alarms about this technology are doing a public service, and they deserve to be protected.
That's, I think, just one of the many regulatory or legislative steps, and you know, it could come out in multiple ways, that could really make a difference.
Yeah, I mean, just backing onto that: Kate, you were talking about the ways in which this is continuous or discontinuous with the trust-and-safety debates around social media, and I think it's interesting. There's this one moment in the piece where, to some people, it may have seemed like we were doing a gotcha thing, where we said, okay, let's talk to the existential-safety
researchers at the company, and a representative from the company is like, what's that? That's not a thing.
And first of all, just to be clear, we absolutely did not do that as a gotcha thing.
We went in saying, please let us talk to the research teams that you have working on the original, ostensible raison d'être of this organization. And the reason we talked about existential safety versus, you know, trust and safety, is that the word does mean different things, and there are legitimately different people who work on different aspects of safety. Some of it is anti-doxxing, some of it is privacy stuff, some of it is making sure the models don't
turn into MechaHitler and start spewing racist stuff at you, and some of it is the stuff that is in the more hypothetical, sci-fi realm of making sure they're aligned and don't kill us all. And to your point, Kate, on the deceptive-alignment research: we kind of just very quickly say in the piece, this all sounds crazy, except that under certain experimental conditions it's already happening. And, again, best fact-checkers in the business; no one disputes that as a factual claim.
These things are starting to happen under certain experimental conditions. Now, we don't know what that means; we don't know how fast this stuff is going to accelerate. But, as Ronan is saying, the fact that the people closest to this technology are trying to make everybody wake up and start to panic a little bit, and we have a political system that seems incapable of that, is a deep structural problem, which we just document in the piece. Like,
a California bill comes up, and, as we report in the piece, an investor, Ron Conway, who's close to OpenAI, calls up Nancy Pelosi, calls up Gavin Newsom, and the bill gets killed, right? So this is the current system. It's not shocking to anyone that that's the system we live under, but it is unprecedented,
the scale of it.
No, and I think that we should switch to talking about this, because I do think that we should have a conversation about what's next. Does the law have any type of, you know, recourse? Can we push back on these things? I mean, after reading your piece, I am incredibly bullish, and I was before, on the potential of regulation, of legislation, of doing something to curb some of this and to make these companies accountable to anyone besides their kind of megalomaniacal CEOs. And I just don't understand how that's possible under this current
government; it seems like a perfect storm of the closeness and the coziness between this current administration and these companies. Well, and in fact, there was almost a moratorium on all state regulation for ten years. This isn't even in the piece, because we had so much to get to, but it seems like Steve Bannon and Mike Davis and a lot of MAGA-ish people were standing
in the way of that, so it's a very strange-bedfellows kind of situation.
Yeah, I mean, I think that that's completely right. And then, you know, I watched in Europe, just from afar, as people tried to create the AI Act in Brussels and everything, and create real kinds of strictures. I mean, the other thing is that the terminology being used is so vague and so untethered to meaning, technical meaning, or, you know, even non-technical meaning, to be totally frank. Guardrails. If I had a nickel for every time some asshole's like, guardrails, to me; like, we should just have guardrails.
Like, okay, well, that's really nice. Like, on what? For what? Yes, exactly. Well, let's talk about what guardrails could mean, by the way, because the UK had an AI safety institute long before
we tried anything of this type. The US tried to do this under Biden, to basically have, like,
some form of mandatory pre-deployment safety testing that's actually meaningful, and Sam was
all for it at the time. But that, right, is part of the pattern with Sam, as Andrew is pointing out. We document how he was the "regulate us" guy: he proposed a federal agency to regulate AI, and he talked about wanting these kinds of testing requirements, and, you know, then simultaneously was very often working against these very kinds of provisions in regulation. To your point about what is actually possible, or even thinkable, in the current environment, I mean, the flood of
money from AI into politics is enabled by this post-Citizens United landscape, where you can have
Greg Brockman putting $50 million into the Leading the Future PAC, and he and his wife putting
$25 million into MAGA Inc., and that's just free speech now. And I think we are learning that the
world we're living in, where this is possible and where the flood of money into politics really makes it unthinkable to meaningfully push back, has real deleterious consequences for all of us; it is endangering all of us. So that, to me, is why it's worth us putting a year and a half of our lives into pieces like this. When Andrew and I started this, a year and a half ago, I had just come off a body of reporting about Elon Musk, which published at a time when people were
not raising the red flags that they now openly raise about him. And the reason there was also not that I was, you know, so fascinated by this one man; it was that Elon Musk is a really leading example, sadly, of the ways in which these tycoons in Silicon Valley have developed super-governmental authority. And that story was about someone who had, you know, real geopolitical ramifications,
where he was the sole vendor in a lot of key areas to the defense sector, and where, basically,
senior officials had to scurry around doing his bidding, and he could, at a whim, in that case, turn off communication access through Starlink on the battlefield in Ukraine, on the front lines, and people's lives were in the balance. And so that's the bigger story, and it's what we see in AI, too.
I mean, one way to connect these threads, right, exactly like you're saying about this kind of super-governmental power, is to go back to these early days, of people sitting in a
conference room with a whiteboard saying, what if we were to auction off AGI to China and Russia, or, you know, what if we were to grasp the Ring of Sauron and become an AGI dictator, right, or, what if this thing wakes up one day and kills us all. In some sense, the best-case scenario is that it was all hype and bullshit, and it was just trying to get investors' attention. Anything other than that is kind of the worst-case scenario, right? What if there's some,
even slim, chance that something like that is true? And so I think, to Ronan's point, it's like,
yeah, it would be one thing if it was just someone like Elon or Sam Altman having these sort of self-aggrandizing conversations about, one day I'm going to own all these satellites that will be useful in war zones, or, one day I will build a technology that the Pentagon can use to maybe conduct mass surveillance, or, maybe one day I'll build a technology that can be interwoven into markets and transportation infrastructure and hospitals. But the scarier thing is,
what if some portion of that comes to pass? And obviously that's the timeline we're currently living in.
Yeah, so I'm going to propose one thing, as long as we're on the topic of what we can possibly do and who will save us from this, which is, I mean, it's a little idealistic, but the courts. And, you know, I would point to exactly how they are more viable, or at least were,
than, you know, the other branches of government and these other types of structures. I think
one of the things that really struck me, and, you know, you just said before that this branch has its problems as well; it's not as if it's perfect. But you
do end the piece with, and it must have been one of the provocations to finally
get this out the door, and it's kind of a perfect coda to end on: the supply-chain-risk designation that the administration came down with on Anthropic. I have to say, reading everything that you wrote, I am just like, how on earth is Anthropic the problem? Like, how could they possibly pivot to OpenAI as a substitute? You go through everything, from Altman failing to get clearance, right, to all of his dealings with the UAE, with Saudi Arabia, you know, these
back-and-forths and promises of safety, and then not making good on them. And then you have these kinds of conversations about, you know, geopolitical maneuverings, launching unchecked, safety-wise, OpenAI products in India, just various types of things. Are you following closely the Anthropic case in the Northern District of California? Are you following closely the case
that we just got, where Anthropic lost, at least initially, in the D.C. Circuit? What are you
thinking? Do you think that there is hope with the courts on any of this?
Well, you know, it was striking to see, after the Pentagon standoff that we narrate in the piece, the initial ruling from a
California judge, Judge Rita Lin, who wrote an actually really powerful decision, and talked
about the designation being classic illegal First Amendment retaliation, and called it Orwellian, and Anthropic was restored to the federal procurement schedules. The latest ruling, you know, I would not overstate its significance just yet. What it does is create kind of a mess; it's a procedural ruling. I mean, basically, the argument the circuit came up with was that Anthropic's injury is primarily financial, and that the equitable balance is in favor of the government's interests
on this. So the D.C. Circuit didn't say the blacklisting was legal; they just said Anthropic didn't meet the very high bar for emergency relief while the case plays out. So the merits are still
live; there's an evidentiary hearing on the 19th. It'll be interesting to see. I think you're exactly
right on the big-picture question: the courts can play a pivotal role at this time when, for all the reasons we just discussed, the legislative branch is so disempowered. And, you know, the courts have also, in some ways, been denuded of a lot of the legal thinkers who would be into holding private-sector actors accountable. There are real problems with the judiciary right now as a source of accountability for a Silicon Valley run amok, but I do think you're right to identify
that as one of the core areas of hope, because you do see rulings like the one I just described, where someone said, you know, no, this is irregular and wrong and violates the First Amendment.
Yeah, and just to add to that: in all the conversations we're having about guardrails and regulation, right, I think none of us would want to give short shrift to how complex all of these discussions are, and how there are good-faith arguments. I mean, you heard a lot of arguments,
when this supply-chain-risk designation was happening, like, okay, for all you pro-regulation people, is this the government you want regulating this technology in this way? So there are good-faith arguments that need to be worked out by people who are approaching these things from first principles. The problem is, when it's all drowned out by industry-fueled, muddled, sort of propagandistic talking points, you can't have that good-faith conversation.
Yeah, well, and I guess we can end
on the muddledness. This is an incredible piece of journalism; it was a tremendous
amount of work, and it's just full of, you know, information that the public needs, frankly, before you have a public company that is valued at a trillion dollars, which is more than the GDP of many countries, in Europe, for example. And it's just amazing that all of this is available, everything from the Wilma report, to knowledge about how some of these alignment goals have been abandoned and the safety goals have
been abandoned, to how this is getting released in other countries, to the kind of
national-security and geopolitical maneuverings. I think that this is going to be a piece that
has incredible effect for a very long time.
And I just want to say also, in terms of the impact, I really appreciate the policy discussion, which very few people take the time and space for. And I hope anyone listening to this, I mean, this is a specialty audience of people who are smart enough to care, but they all have friends in the wider public to whom they can explain why this matters. So I hope people start a meaningful conversation about this, because the stakes really are that vast. And I hope it extends to how we engage with our representatives. You know, you talked about the hope
that the judiciary will play a role in creating, to use the term you love so much, guardrails, but I also haven't given up hope about the legislative branch. You know, you see politicians already, across different parts of the government, clocking that it can be advantageous to play a role in creating accountability. Florida today, their AG's office, just announced a new
investigation into OpenAI around some of this stuff we've talked about today. So I think that
people are looking at the numbers in Washington as well, and there was polling last month, from I think NBC, showing that a majority of the public now views the costs and risks of AI as outweighing the benefits. So I do think we're all in this moment where we're kind of mesmerized by our screens and in the thrall of these companies, but if Americans join arms and really tell their representatives, we want you to do your job and make sure that we're kept safe from this
technology that could eliminate our jobs and threaten our safety, and all the things we've talked
about, I do think the simple math of Washington can still lead to meaningful oversight and results,
if we really, really press for that.
We have the midterms coming up. I mean, like, we've talked
about how this is going to affect the IPOs, but I absolutely think this piece could play a role,
and AI existential risk, and, like, basic consumer and citizen fear of some of these technologies,
could absolutely play a role in how people vote. Before you vote, find out whether your representative is getting AI money, and, you know, do scrutinize their policy positions related to that. Yeah, well, guys, thank you both for coming on. Andrew, Ronan, it was wonderful to have you. Thank you, Kate, it was awesome. Thanks, Kate, great to be here.
The Lawfare Podcast is produced by the Lawfare Institute. If you want to support the show
and listen ad-free, you can become a Lawfare material supporter at lawfaremedia.org/support. Supporters also get access to special events and other bonus content we don't share anywhere else. If you enjoyed the podcast, please rate and review us wherever you listen; it really does help.
And be sure to check out our other shows: Scaling Laws, Rational Security, Allies, The Aftermath, and
Escalation, our latest Lawfare Presents podcast series about the war in Ukraine. You can also find all of our written work at lawfaremedia.org. The podcast is edited by Jen Patja, with audio engineering by Cara Shillenn of Goat Rodeo. Our theme song is from Alibi Music.
And, as always, thanks for listening.

