Hard Fork

The Future of Addictive Design + Going Deep at DeepMind + HatGPT

1d ago · 1:09:26 · 13,198 words

Last week, two separate juries held social media companies liable for harming young users. We unpack what these landmark decisions mean — not only for the future of social platforms like Meta and YouT...

Transcript


Framer is a website builder. Whether you want to launch a new site, test a few landing pages, or migrate your full .com, Framer has programs for startups, scaleups, and large enterprises to make going from idea to live site as easy and fast as possible. Learn how you can get more out of your .com from a Framer specialist, or get started building for free today, at framer.com/hardfork for 30% off a Framer Pro annual plan. Rules and restrictions may apply. Now, here was a really interesting situation, Kevin. Did you see this robot taxi outage that left passengers

stranded on highways in China? No. So this happened in Wuhan recently. I've heard of that place before. Did they do anything else? It's not clear to me. I'm not really familiar with their game. But apparently there was some sort of technical glitch that caused a number of robot taxis owned by the Chinese tech giant Baidu to freeze, trapping some passengers in their vehicles for more than an hour. And I just thought, my gosh, what a nightmare. Just

imagine you're in your robot taxi on the way to a wet market in Wuhan. You have an appointment with a pangolin who's going to tap on you to see if they can transmit anything to you. And then your robot taxi gets vulnerable. It's a nightmare. It's an absolute nightmare. I think that

robot taxi outage is definitely the worst thing that's ever come out of Wuhan. Yeah. When it comes

to these Baidu robot taxis, my advice? Bai-don't. Oh boy. No, that was the worst thing to come out of Wuhan. I'm Kevin Roose, a tech columnist at the New York Times. I'm Casey Newton from Platformer. And this is Hard Fork. This week: social media companies keep losing in court. How will that reshape the internet? Then, The Infinity Machine author Sebastian Mallaby joins us to discuss his new book on Google DeepMind and Demis Hassabis's quest to build superintelligence. Finally,

it's been a while. Let's catch up with some HatGPT. I missed you. Well, Kevin, while we were away, I was riveted by what was going on in the courtrooms in Los Angeles and New Mexico related to social media. Yeah, it has been a big week for these social media product liability trials that have been going on for some months. And we actually got some verdicts. We did. And in both cases, social media lost. In L.A., a jury found that Meta and YouTube

had been negligent in the way that they designed features that they said were harmful to this

plaintiff. They have to pay six million dollars combined to this plaintiff. And then in New Mexico,

the jury said, we believe that Meta has violated the state's Unfair Practices Act, has misled consumers about the safety of its products, and has endangered children. In that case, they are ordering Meta to pay three hundred and seventy-five million dollars. Yeah, so we've talked a little bit about this series of cases against the social media companies. You know, social media companies,

they get sued all the time for all manner of different things. I think what caught our eye, and

specifically your eye, was the sort of legal theory underlying these cases. So talk a little bit about that and what makes this case different from other cases that have been brought against the social media companies. Yeah, so I would say there are kind of two big reasons why these cases are super important. One is that these are what are called bellwether cases. Kevin, have you ever heard of a bellwether case? These are the cases that set precedent for other cases. Yeah. Exactly. These

are the cases that if successful are going to open the floodgates for lots of other people to sue

under the same theory. The second big reason that these cases are really important is that they appear

to have opened up a crack in Section 230 of the Communications Decency Act, which for 30 years has been essentially the foundation that the entire internet rests on. It's also a dentist's favorite statute. Yes, that's Section 230, if the joke wasn't landing for you. So yes, this is super important. Super good. You got that. Yeah. No, the really sad part is I was planning my own Section 230 joke. Oh, because I just went to the dentist yesterday and I didn't have any cavities. So

tooth and not hoodie. Moving on. So Section 230, Kevin, you may remember, is the law that says that in most cases these platforms cannot be held liable for what their users post. Yes.

So if I went on Facebook and I defamed you, which is something I think about doing every day,

you could sue me, but you couldn't sue Facebook. This is what's been blocking my lawsuits against Facebook over your posts for years. That's right. And back in the day, like 30 years ago, this was actually really important, because there were these small internet forums that were starting up. Some of them got to be a bigger size. You know, CompuServe, AOL. And inevitably, somebody would be mean to another user and they would say, I'm not just suing you. I'm suing

CompuServe. I'm suing AOL. I'm putting the whole system on trial. And a couple of lawmakers got

together and they said, this is going to destroy the entire internet. We need to make it possible for there to

be forums and not have these platforms being held liable for all these things. But fast forward to

today, and Kevin, would you agree that maybe there are some harms that are taking place on the

internet that do not consist of people defaming one another on CompuServe? Yes. Yeah. And so this is essentially the question that gets asked in this case, right? People say, hey, it seems like we're a pretty long way away from 1996. I'm opening up TikTok. I'm opening up Snapchat. And I'm seeing infinite scrolling feeds. I'm seeing autoplaying videos. I'm a teenager, but I'm getting barraged by push notifications in the middle of the night. And that's to say nothing of the

recommendation algorithms that might be driving me toward content related to eating disorders or other things that are going to make me sad and upset. And so some of these people got together with their attorneys. And they say, this actually feels different from the thing that Section 230 was designed to protect, right? This is not about, oh, I got harmed by this particular piece of content. This is about the design of the whole platform; the design is defective. And the really

crazy thing about these cases, Kevin, is that juries agreed with these plaintiffs for the first

time. And they said, we like this theory. We think these products are defective. Right. So this is kind of a side door that these lawyers have found around litigating on Section 230, and they have now shown that, at least in these cases, they can convince a jury that it is not about the content that's on the social network; it's about the actual sort of mechanics and plumbing of the social network that are harmful to people? That's right. And we should say that we do

expect some appeals here, and until those are sort of fully exhausted, I can't tell you for certain this is the moment that the internet changed forever. But there's been a lot of commentary over the last week about what it would mean if these cases were upheld, because it seems like juries are just going to be really, really sympathetic to these claims. So before we get into the implications, like, can I just ask a couple more questions about these actual specific cases? Please. So what are the

actual platform mechanics that are being litigated over here? Yes. So in the L.A. case, among the design features that were at issue were the so-called beauty filters that can make you, you know, look quote unquote more beautiful if you use them, infinite scroll, autoplay video, these barrages of push notifications that platforms send, and also, I would argue more problematically, the recommendation algorithms that power the platform. And then in the New Mexico case, that was much more about kind of

child safety. So they were arguing that Instagram in particular had become this playground for

predators. It was very critical of the fact that Meta offers end-to-end encrypted messaging.

And the basic idea was Meta falsely advertised that these platforms were safe when in reality children are being harmed there all the time. So from what I understand, it was like the case was basically taken out of the playbook for going against Big Tobacco or another sort of industry that makes harmful products. You say this is harmful, and not only is it harmful, but the company that was making it knew that it was harmful and either made it more harmful or just released it as planned.

Anyway, I did see some sort of exhibits that had been shown off at the L.A. trial, I believe, where

some employees at Meta were sort of talking on their internal forums about how this stuff is so addictive for kids. That seems bad. And I imagine that was persuasive with the jury. But are there other instances where the platforms are being sort of taken to court over things that they sort of knew were harming people, and that they either dialed up the harm in an attempt to spike engagement or sort of knowingly released these things to the public? Yeah, so some of this research has come

up in other litigation over the years. But I think this has been probably the most damaging

case that we have seen. You know, the first time I remember reading a lot of these internal studies

was in the wake of the Frances Haugen revelations a few years back, right? Like Frances Haugen walks out the door of Meta and takes a bunch of this internal research with her, winds up sharing it with the Wall Street Journal and then eventually a bunch of other reporters, including me. The reason that the research mattered a lot here, though, Kevin, was, again, the plaintiffs are

now building this very specific case, which is you're building a defective product, right?

Before the past couple of years, we weren't really using this language. We weren't really adopting this sort of public health framing as a way to discuss the harms of social media. Before then, it was just kind of this more nebulous like, hmm, like they're studying the effect of Instagram on teen girls and it seems like some of these girls are having really bad outcomes, but we didn't really have the framing. Well, now we have the framing and we're just saying,

like, hey, you looked into it. You found that some subset of your users are having really bad experiences and you did not change the features in a way that mattered. Well, let's talk about the changes. So what, what would you expect a platform like Instagram or Facebook or YouTube to change

in the wake of these jury verdicts, or are they just going to wait till it's all on

appeal? I honestly don't know the answer to that question. And I think it's a really interesting thing to watch. The question that you just asked is really, really controversial, actually,

because much of what these platforms do is just protected under the First Amendment. And then

Section 230 also protects a lot of speech, right? And the big debate that's like raging in the internet policy community right now is can you separate design from content? I want to get your

thoughts about this. Is it like the container or is it the stuff in the container that is dangerous?

Yeah. And there are some people who are saying that no, you cannot make that distinction, and that effectively all design is content, right? Like if I want to send you a push notification, that is my right under the First Amendment. And you cannot tell me that I cannot do that. You cannot tell me that there is a certain limit that I have to place on the depth that you can scroll on Instagram; like, that is protected. But for what it's worth, juries are taking the opposite

view. They're saying that there are at least some things which seem like they are just clear mechanical

design features, and I happen to agree with them. So let's talk about this because I think this is maybe a place where you and I disagree, or at least where I have some misgivings about this theory. So in the case of something like cigarettes, which is a very heavily litigated field that I think a lot of this social media litigation has been modeled after, there's like an addictive ingredient, right, nicotine. Everything that you put nicotine in becomes more addictive as a result of

having nicotine in it. You know, this happens with cigarettes, it happens with vapes, it happens with you know, nicotine pouches. If you started putting nicotine in ice cream, ice cream sales would

go up because nicotine is very addictive. I think the question I have about the mechanical

addictiveness of these sort of features like infinite scroll, like autoplay recommendations is that if it followed the same principle as nicotine, then every product that has those would become way more popular. And one example I've been thinking about on this is Sora. They sort of took the playbook that was working for TikTok and Instagram and they put it onto a new app and the app did not succeed, right. There are other apps that have tried to mimic things like the news feed,

that have tried to mimic things like autoplay video or recommendation algorithms that have not taken off. And so I guess the question in my mind is like if the litigation over social media is modeled after the litigation over Big Tobacco, shouldn't there be like some industry-wide lift as a result of every platform trying to borrow the most addictive features of Facebook and Instagram and YouTube? I mean, I hear what you're saying. I think it's an interesting point, but I think that internet

platforms just work differently than cigarettes, right? Like because you're right, like with nicotine, like nicotine is just addictive. Now, there are people that smoke cigarettes without getting addicted to them, right? But probably the majority of people do. Social media platforms are an imperfect analog to those cigarettes. I believe that platforms need to be of a certain scale in order for them to be truly addictive in the way that these plaintiffs are now suing about, right?

There's something about the fact that there's hundreds of millions of people on Instagram and on TikTok creating content that creates that kind of infinite supply of things that you might potentially want to watch that is actually able to keep you hooked. But now you're talking about the stuff in the container, right? Well, I think that there are many ingredients that all work together, right? But you're raising a criticism that people are making of this lawsuit. Like effectively, what I hear you saying is you cannot

distinguish between the design and the content. I'm not sure. I mean, I think I'm open to being persuaded that you can, but to my mind, it's like one lesson that you could take from this is that it is very bad to be a popular platform that engages in these mechanics to keep users coming back. But it's okay to be an obscure platform that does it because that's not going to have as much harm. So what's really sort of at issue here is the fact that these platforms are very,

very good and very, very popular at doing the thing that everyone else is trying to copy. Yes, and this is the approach that Europe has taken to regulating these platforms, right? They have certain categories. And if you are a very large online platform, then you just have more responsibility.

That makes intuitive sense to me. I think that the bigger and richer and more powerful you are, the

more responsibility that you have to society, right? And so in this particular case, you have companies

like Meta, which we know are hiring cognitive scientists who are working very hard to figure out all the different ways that they can hack your brain to get you to look at Instagram for as long as they possibly can. It is in their interest to get you to look at Instagram as long as they possibly can. And right now, there's just no brake on that at all in our society except for this litigation. So I'm so sympathetic to these juries that are looking around, they're seeing this completely

unregulated platform and they're saying something's got to be done. Yeah. So regardless of sort of

what our thoughts on the overall sort of legal theory here are, like, what do you think the practical effects

are on the platforms? If this does get upheld on appeal, if these platforms are found liable for millions or potentially billions of dollars in damages against all of these people who claim that they were harmed by social media, does that mean that they have to, I don't know, go back to like the reverse chronological feed of 2008, does that mean they have to shut off infinite scroll and autoplay and recommendations and all these other things? This is where it gets

really tricky. And this is like maybe the one narrow way in which I'm sympathetic to the platforms, which is, okay, the juries have said your product is defective. What juries have not said is,

here's what an okay product looks like, right? They're saying we don't like this sort of set of

features, but they're not saying with any specificity, like, well, how do we think that these features are interacting, right? Like, what is your actual model of the harm here? And so there is a world where the platforms feel like they have to comply and they maybe start picking off some of these features one by one, like, okay, if you're like under 16, we'll disable infinite scroll,

for example, how much benefit does that really have to like the individual teenager who may be struggling?

I don't know. This, of course, is why it would be great if Congress could pass some sort of law regulating this, but, you know, we're now like, I don't know, a decade into that project and still not getting very far. Yeah, I mean, I think one prediction about how this will change

platforms and their behavior is that if you start talking about gambling or addictiveness on an

internal Meta chat room, you just immediately get fired. There's just like a little button on your seat that just presses and you get ejected out of the building. Yes. It's like, because so much of the incriminating evidence here just comes from people spouting off in work chat rooms, like, oh, it really seems like this thing we're doing is dangerous. And like, I have to imagine that if it hasn't happened already, they're just going to absolutely crack down on

that kind of internal discussion. Absolutely. Well, so I want to hear a little bit more about how you think about this, because you have talked on this show many times about your own struggles to look at your phone less. This is an issue that, you know, at various times you feel has plagued you. So how are you feeling about the addictiveness of these platforms? Like, do you buy

the sort of public health framing for the way that people are talking about them these days?

Or do you think that this is overreach? So I need to do some more thinking about the product

harm arguments here and whether it makes sense to me. I am basically on board with the idea that

there should be age gating for social media. I am sold on the premise that there is a certain age, whether it's 16 or 18 or 14, where sort of the most harmful effects taper off, and I think before that age, it makes total sense to age-gate or at least give parents a lot more control over what their kids are able to do and not do on these platforms. I think the addictiveness question is just hard for me because I feel like my sort of macro theory on all this stuff is that

what is happening to social media over time is that the social part is fading away and the media part is rising in the mix. And so I think that if you start treating the design and mechanical decisions of these media platforms as harmful under the law, it just sort of leads me into a place where I become much less certain. Like before any of this existed, there were cliffhangers on TV shows that were designed to keep you coming back after the commercial break or to the next week's

episode or whatever. Those were arguably addictive features. They would keep people coming back.

Is that illegal? I would say probably it shouldn't be. And it's not. So I think there is a

certain sense in which the closer that social media moves to something like TV or streaming video, the blurrier the lines in my mind get between the content and the mechanics. What are your thoughts on that? Well, I have to disagree. I do think cliffhangers should be illegal because I want to know what happened. I don't want to have to wait till the fall to find out, you know, if that person is still alive. But also, I do think that there are some really important differences between, like, let's say,

YouTube and HBO Max. Right? Like, HBO Max is not like going to modify the content of HBO to your individual preferences. Right? Like, they're going to go pay some money for a bunch of shows and they're going to hope a bunch of people watch them. What the platforms that we're talking about do is very different. Right? They're looking across the entire corpus of like every video that's ever been uploaded to their platform and they're trying to figure out what will keep you personally there

the longest, and they're going to show you that as much as they can. So I just do think that there's a kind of categorical difference here. And while I do think people should have broad freedom to, you know, look at whatever they want, I do think that at a minimum, we should probably place an age gate on it for the same reason that we don't let 14-year-olds walk into bars. Right? Unless they're really cool and have a fake ID. So talk about the encryption piece, because you wrote a lot about this in

your newsletter that I didn't quite understand. But what is the encryption debate that's part of

these lawsuits?

So, look, you can be supportive of these jury verdicts, which I am. But I do want to acknowledge, like, this could lead to some really

bad places. Like, and that's why we need to handle Section 230 with care. In the New Mexico case,

the attorney general argues that a reason that Meta should be considered liable for advertising their platform as being safe for children is that it includes encrypted messaging, right? In fact, Meta in March announced that they would discontinue encrypted messaging on Instagram, in what I believe was an effort to sort of get ahead of this. What they said was, look, if you want to use encrypted messaging, you can use WhatsApp instead. But to me, this would be like just a legitimately

horrible outcome of all of this, if, like, every company that now offers encrypted messaging either voluntarily decided to stop offering it or was pressured by the government to stop offering

it, because in my view encryption is a necessary part of privacy in a world where people are

mostly communicating online. Right. Are you comfortable with all this happening in the courts

through jury verdicts? This is not my preferred way of addressing this. But I think it was inevitable

in part because the tech companies have been so obstinate about making meaningful changes to their platforms, right? Like, societies across the world have been begging these companies for a decade: please do something to make these platforms safer and to make them less addictive and to reduce some of the harms. And instead what we've mostly seen is a series of engagement hacks designed to get people to look at them longer, right? And in the United States, where you cannot regulate

the content of any of these apps for the most part, you're really only left with the design, right? You're really only left with just the raw mechanics of the app. So if the social media platforms are upset about the verdict here, I truly believe they brought this on

themselves. I mean, you asked me about my own experience of screen addiction and I've never been

sort of a total screen addict. But I've struggled, like, I think many, many other people have

with, like, how much I'm using my phone, how much I'm using various apps. I have come up with convoluted ways of trying to reduce my screen time. You once were six hours late to a Hard Fork taping because you wanted to find out what happened to Chimpanzini Bananini on TikTok. I thought we agreed to keep that private. But like, never in all my struggles with screen time have I thought to sue the companies that were making the apps that went on my phone. And I guess it's different

when you're talking about kids, but like, there is some part of me that just feels like, well, it just feels like an easy way out, you know, blame the platforms. And look, I think these platforms absolutely have culpability here. I am not saying that I disagree with these jury verdicts. I think that these platforms, especially Meta, have done the research, have found the harms and then have shielded them from the public. But I just, I guess I'm, I'm thinking about my own

experience of these addictive platforms being one of, like, feeling bad about myself, rather than trying to, you know, find someone else to blame. Yes, but you also had the benefit of beginning to use these platforms when you were already an adult, right? Like, your hippocampus was formed. And I think I was on instant messenger from a very early age. Do you really think that, like, messaging apps are, like, as addictive and harmful in the same way as, like, TikTok or Instagram?

Oh, my God, take me back to 1999, put me on AOL Instant Messenger. I could not tear myself away from that thing. I had to put up a little away message with, you know, Get Up Kids lyrics on it every time I left the computer, because it was such a rare event, and I wanted my friends to know that I was away from keyboard. Yeah. Okay, see, these things were addictive. The, the kid got up. Uh, that's a Get Up Kids joke. Um, yeah, look, I just think that, like, messaging apps are different from, like, these,

these social platforms. And I think, you know, honestly, like, I will be curious, you know, who knows if Instagram and TikTok will be what they still are in, like, 10 years, maybe when your son is ready or wants to use social media. But I just think that it, it, it probably just feels very different when you're a parent. Yeah. Okay, Casey, are there any new social media apps that you're addicted to? Um, uh, it's called Claude. And, um, it's really weird. Do you want to talk about

the AI? Yeah. Yeah. So obviously, every discussion on this show has to come back to AI at some point. So I'm curious, like, what effects you think this might have on some of these AI companies, because they are also trying to create experiences that are engaging, addictive, whatever you want to call it. Yeah. I can imagine some of these, uh, you know, lawsuits that are being brought against the makers of chatbots for harms, like, the, it all feels like it's sort of gonna converge at some

point. So what's your take on that? Yeah. So Pew did a study in 2025 and found that 64% of teens

use AI chatbots, and about three in 10 use them daily. Meanwhile, teen use

of YouTube, TikTok, Instagram, and Snapchat had remained relatively stable. Right. So yes, chatbot usage

is growing. It has not yet come at the expense of the social platforms. Although, of course, I expect that we'll soon see chat bots inside all of those platforms, right? And, like, these things will all just kind of merge together. There's something about these things where they do kind of go hand-in-hand,

and to your point, like, I think that, yes, AI chat bots will be the next frontier of this debate,

because in many ways, they're much more engaging and, and I think, like, will be stickier than even these platforms are. Yeah. I mean, it just seems so obvious to me that the platforms should be, like, absolutely begging Congress to regulate them, because the alternative is, like, they just get

sued into oblivion by a bunch of, you know, law firms. I mean, absolutely. Like, if I were running

one of the big AI labs, I would want to have an understanding from Congress of, like, what do you consider a safe chatbot? Like, give me a checklist that I can, I can follow, because I don't want to have to be dealing with this in, you know, the next few years. Yeah. Okay, Casey, what's an addictive engagement mechanism we could use to get people to come back after the break? Well, we could study their behavior and weaponize it against them? Good idea. When we come back, Sebastian Mallaby,

author of the new book The Infinity Machine, joins us to talk about Demis Hassabis, Google DeepMind, and the quest for superintelligence. I'm Robin, and I am excited to open my Crossplay app. I'm challenging John, my colleague at the New York Times. Robin played the word "grunge," which has a G, which is four points. She got that triple word multiplier. I'm going to take "acts" and make it "facts," which is for 30 points.

I might just take another two-letter word here with "woe," which gets me 23. I think this will put me

back in the lead if my maths are mathing. I like to play it more from a strategic point of view and see where I can block the other player from scoring high. I'm pretty competitive. It's

fun to beat friends and co-workers and also get to learn new words. Crossplay, the first two-player

word game from New York Times Games. Download it for free today. I think he thinks he has this in the bag, but I'm not so sure. Well, Casey, if our listeners read one book about AI this year, it should be mine. But if they read two books, the second one should be Sebastian Mallaby's new book, The Infinity Machine: Demis Hassabis, DeepMind, and the Quest for Superintelligence. Tell us about this book, Kevin. This book came out this week. It is full of a bunch of new anecdotes and

stories about the work of DeepMind and the motivations that drive its CEO, Demis Hassabis. Sebastian is a long-time journalist. He's a fellow at the Council on Foreign Relations, and he's

spent a long time with Demis and the people close to him and brought us this book about what I think

is the AI frontier lab that gets the least coverage relative to its importance. Yeah, and look, I mean, Demis Hassabis is a singular figure. He's been on Hard Fork several times, but Sebastian went really, really deep and I think maybe gave us the most fully featured portrait of the man that we have had to date. And before we bring him in, because we're going to talk about AI, let's make our disclosures. I work for the New York Times, which is suing OpenAI,

Microsoft, and Perplexity. And my fiancé works for Anthropic. Sebastian Mallaby, welcome to Hard Fork. Great to be with you. So people who listen to our show are familiar with Demis Hassabis and DeepMind; he's been on several times. What is something non-obvious about Demis that you learned through talking with him for many hours and interviewing many people who know him? I mean, I think maybe the spiritual

underpinning for his scientific curiosity was interesting. You know, there was one time when we were sitting in this London park and talking for a couple of hours, and he suddenly started saying, "When I'm up, at two in the morning, at my desk, by myself, thinking about science, thinking about computer science, I feel reality is screaming at me, staring me in the face, waiting for me to explain it." And he calls it the God of Spinoza. This is the

17th-century philosopher Spinoza, who said that to understand nature is to get closer to God's creation. And that resonates with Demis. Maybe that's something people don't know. That's interesting. I mean, yeah, this has been something that's come up in my own research too,

is that, you know, he grew up going to church, I believe, with his mother, and that he more than some

other AI leaders has a way of sort of fusing the science of AI with his own spiritual beliefs. And I know some folks have seen his ambition and his many years of competing to build AI and have seen something suspicious in that, right? Elon Musk has this whole theory about how Demis secretly wants to be an evil AI dictator who takes over the world. And I guess I'm curious if in any of your reporting with him, you ever saw something that seemed like what Elon Musk was talking

about. No, I mean to the contrary, I think this idea that Demis is a quote, "evil genius," which is

the phrase that Elon used to use, came from the fact that in his video game production

days, Demis had created a game called Evil Genius. And so maybe it was a joke at first, but really,

I got to know Demis extremely well. I spent more than 30 hours with him. You stress people quite deeply, as you know, Kevin, when you're writing about them, and then you may get pushback and legal threats and all that stuff. And he did make me talk to his lawyer once, and it wasn't totally easy the whole time, but he was reasonable in the end. And why did he make you talk to his lawyer? Yeah. He was very mad at the fact that I unearthed the whole story about DeepMind trying to

spin out of Google between 2016 and 2019. And you know, they retained a whole bunch of advisors, lawyers, bankers, et cetera. They got Reid Hoffman to pledge a billion dollars to finance the spin-out. They went to see Joe Tsai in Hong Kong, the Alibaba co-founder. Anyway, so the lawyer was not amused that I had all these internal documents from inside DeepMind, which had been leaked to me: the board presentation that DeepMind gave to Google, and so forth. And he said, you're not supposed

to be writing about this. And I said, well, you know, people gave me this stuff and tough. So there

were moments of free and frank discussion. I have always believed that when a source gives you secret

documents, it helps you get closer to God's creation. That's what I would have told him. I want to

ask another question about childhood, because Demis told you that he'd really identified with the boy genius protagonist of the novel Ender's Game, relating to this feeling of being socially isolated by his own talent and consumed by a desire to make his mark on the universe. And the reason it struck me is that in this novel, Ender believes that he's doing training exercises, but then what he thinks is like a test, essentially a video game, accidentally wipes out an

alien species. So I wondered if you talked with him about why he relates to that story and, in particular, if there's any relation between that and the idea of maybe trying to build a superintelligence. Well, I was astonished. You know, this was before my first dinner with him. And it was sort of the vetting process. It was the last part of the vetting process, where he agreed to give me the access I needed. And he said, you know, you got to read this novel before you come and see me.

And so I show up. I've read this story. It's about a diminutive boy genius who basically saves humanity from aliens. And I'm thinking, does he really see himself as saving humanity by doing what he's doing with AI? And even if he thinks that, why would he, say, be so crazy as to tell me? I mean, surely that's hubristic beyond belief. Why would you put that out there? And, you know, he made no secret about it. He said, yeah, you know, I feel like I did identify, because this guy put all of his

energy and his life into saving humanity. And I feel like I'm on a mission like that. And he said, I felt so strongly about this, I gave it to my wife to read, thinking that she would understand me better and sympathize with me. And you know what, she sympathized with the kid, Ender, but not with me. That's not fair. Yeah. I mean, one other character trait that comes up over and over again in reporting about Demis, and especially in your book, is how competitive he is.

This is a guy who loves to win. You know, he was a child chess prodigy. And he won this thing called the Pentamind, you know, five times, which is sort of like an all-around gaming competition. Do you

think that is part of his approach to AI? I mean, he's always talking about how he wants to use

this to solve scientific mysteries and cure diseases. But is some part of it just like, this guy loves to win. And this is a really big contest. Totally. I mean, that's exactly right. I remember going to see him, you know, when ChatGPT was just going viral. And he said, you know, Sebastian, this is war. These guys at OpenAI, they've parked the tanks in my front yard. He actually said parked the tanks on my lawn, because he's English. But yeah, you got it. You bring up

the release of ChatGPT, which happened in November 2022. And I'd love to hear a little bit more

about how Demis had reacted to that. Because I think before that happened, Google really thought

they were comfortably in the lead and did not seem to be feeling a lot of pressure to release

anything. Does he regret that

they sort of let Sam Altman beat them to the punch? Yeah. I mean, he has an explanation more than a regret. And the explanation is super interesting. It's basically that he studied neuroscience for his PhD. And you've got to remember, this is back in, you know, 2008, 2009, so nothing worked in AI. So he was starting from scratch. And one of the ideas in neuroscience is called action

in perception. And this is the idea that to really be intelligent, you have to take action in the

world. You don't know what it means for something to be heavy unless you pick it up. You don't know what gravity is unless you actually drop something. And so he had this idea, when the transformer paper came out in 2017, and OpenAI was starting to do the first GPT in 2018, the second one in 2019 and so

forth. You know, that's not going to work. It's not going to take you all the way to powerful

intelligence, because language is just a system of symbols. It's not grounded in the real world. And it's not that he was wrong, in the sense that now we see world models come back in 2026 as a big area of excitement and research. But back in 2018, 2019, he was missing the fact that a huge amount of knowledge about how the real world works is, in fact, in language, if you download all the language on the internet. And he missed how much you could squeeze out of

language as a training set. Yeah, I mean, I want to run a theory by you, Sebastian, for your take. As I've been working on my own book about this sort of period at Google and at OpenAI and at DeepMind, it strikes me that there are sort of like two visions of what intelligence is that these companies disagree on. And in one vision, it's like, intelligence is about winning. It's about optimization. It's about a contest between rival

intelligences, and that's very much like the DeepMind sort of reinforcement learning paradigm, which is like AlphaGo, and you play a board game a bunch of times and you get better at it a little more every time. And then there's this other view, which is sort of the more OpenAI sort of language model scaling paradigm, which is like, no, it's about answering questions, like being very smart is about having the right answer to everything. Does that theory hold water with you, that there's

something like psychological about these two approaches to AI development that actually are rooted in like what we think intelligence actually is? Yeah, I would say that the DeepMind special sauce right from the beginning was to try to put those two things together. It's interesting, for example, that with AlphaGo, the early research on that, Ilya Sutskever contributed to it. And of course, he was, you know, the sort of leading practitioner of deep learning, who went on to be OpenAI's

chief scientist, but at the time he was working for Google because Google had acquired his boutique. And so the reinforcement learning people in London, working for DeepMind,

collaborated with the deep learning people in Mountain View and that's what produced the AlphaGo

breakthrough. So I think, I think you're right, there are these two strands within AI: reinforcement learning, which I would describe as learning through experience, interaction with the real world, through trial and error. And on the other hand, learning through data, and that is the deep learning. And for humans, you could think of it as being, you know, you can go to the library and read all the books and that would be deep learning, you're learning from data, from sort of crystallized

human knowledge, or you can get out there in the real world and learn about stuff by planting your

garden and whatever, you know, actually. Yeah, you can be like Casey, who's never read a book.

I'm going to get around to it. He learns by trial and error. Yeah, so those are sort of the two approaches here. You mentioned earlier this, uh, I don't know if it's fair to call it a plot. It sort of seems like a plot that they had at one point, after they had gotten acquired by Google, to try to

spin themselves out. I believe they called this project Mario. I would love to hear a little bit

more about how that came about and why they didn't go through with it. So what happened was that when they sold, uh, DeepMind to Google in 2014, they had a rival offer from Facebook, and Facebook actually offered them more cash. And one of the reasons they said no was that they wanted safety protections around their technology. And so they had this deal that there was going to be a safety

and ethics board, and Google promised that, and they went ahead and sold to Google. And they had a first

meeting of the safety and ethics board in 2015, after the acquisition. And in order to, like, bind in the other people in the space, they got Elon Musk to host the whole safety and ethics board at SpaceX. They got Reid Hoffman to show up. And you will notice that these are the characters who either founded OpenAI or funded it, those two. So Google wasn't best pleased. As you can

imagine. Yeah,

maybe not the people I would have put on my ethics board, these characters. But it's a dilemma, right? I mean, you know, either you put people on the board who don't know what they're talking about and they're not interested in AI, or they do know about AI, in which case they're going to go and do their own thing, because it's too exciting not to. And the fundamental mistake that Demis made in his early conceptualization of how AI would be developed

was this notion that there would be one single lab producing AI on behalf of all humanity. And therefore it could be safe because there'd be no race dynamic. And you could take your time

in sort of red-teaming the models before you release them. And that's why he brought Musk into the

tent. That's why he brought Reid Hoffman into the tent, precisely because he thought we could all be one team together. And so then what happened after, to answer your question, Casey. So what

happened after was that, having lost that first experiment in setting up a safety and ethics

oversight board, Google didn't want to do another one, and really DeepMind's Project Mario was to try and force them to do more by threatening to walk out if they didn't. Why did they call it Project Mario? Was that about the video game? Good question. I don't know the answer. Sorry. I failed. It's much better than the alternative, Project Wario, they were working on, which was just the evil version of that. So how does Google get them to abandon this plan?

You know, it's attrition. Sundar Pichai, his personality and his management style come out as quite interesting to me in this whole story, because, you know, right at the beginning in 2015

when, you know, the first safety and ethics oversight board fails, the next idea that Demis has

for how to get some independence and control of the technology is to become a bet, as in an Alphabet bet, when they were spinning out Waymo and some of the other side bets they had. Larry Page was cool with this, and he was CEO at the time. But then, right as these discussions were going on, he handed over to Sundar Pichai. And Sundar Pichai kind of pretended to say, "Oh yeah, absolutely, great idea, we should look into it." But really he was just stringing them along and had

no intention whatsoever of letting Demis spin out, because he recognized him as the AI talent that Google was going to need in the future. And so essentially there was this long-drawn-out process, you know, delays here, and "we should just look at some more details," and here's another term sheet. And I was given some of these term sheets, like huge great documents with redlines or whatever they were, you know, where one team of lawyers had come back to the other team of lawyers. And you know,

basically by 2019, everybody was exhausted, it all fizzled out and they just moved on.

There's been a lot of sort of jostling for independence within DeepMind ever since the earliest negotiations about selling to Google. Give us some update on how things are going with them now. Like, you know, when we talked to them, they present things as being, you know, fairly, like, hunky-dory between everyone, but are there still kind of tensions and fault lines between Google and DeepMind? Well, you know, I'll give you sort of what I would categorize as somewhere between

probably true and unconfirmed rumor. Is that all right? Am I allowed to do that? Oh, please. We love to gossip on this show, as you're getting to see. So I'd say that, you know, Sergey Brin is the troublemaker here. At one of the Google I/Os, I guess it was a couple of years ago, the stage was set up for two people to be on it. There was the interviewer and there was Demis, and suddenly Sergey kind of runs onto the stage.

They had to get a third chair, and then he kind of inserts himself into that conversation.

You know, what I hear is that that was the outward symptom of a much deeper tension, where Sergey doesn't really like Demis's leadership on this and wants to push back against it.

And I think it follows from that that the single most important business

relationship in all of capitalism today is the one between Sundar Pichai and Demis Hassabis. Because Sundar manages the board, manages the sort of high politics of Google and Alphabet, so that Demis has the space, the resources, the oxygen to do his science. And without Sundar holding that all together, we might be in a different place. Yeah.

One area where Demis has changed his mind is about the use of AI in the military. This was a big sticking point in the negotiations with Google and Facebook back when they were selling DeepMind. He didn't want their technology to be used for the military. Now, obviously, Google DeepMind has one of these Pentagon contracts. They're working with the military. So, what do you attribute that shift in his thinking to? Is it just kind of the realities

of the market, or needing to compete, or what is it?

Yeah. I mean, Demis described this to me as, you know, you mature, you get to know the real world, all that. You, you, one might say,

how come you weren't mature when you sold the company in the first place? I mean, surely it was predictable.

But I think that the real truth of the matter is he did not predict it. I mean, it comes back to this

singleton idea, which I mentioned before. He really thought there would be one lab. And in a scenario where there's only one lab who's got the technology, then sure, you can say to the military, you can't have our technology, go away. And the problem today is, as we saw with Anthropic just now with the Pentagon, if Anthropic tries to draw a red line, you know, OpenAI is in there like a shot and says,

"Hey, Mr. Pentagon, what do you need? We've got it for you." Do you worry that Demis is competitive streak or his pursuit of science,

whatever it is that drives him, will compromise his ability to develop something like AGI safely?

You know, I asked myself that question all the way through my research and in some ways the question about, "Can you be a strong, consequential actor in the world and still be good?"

is sort of the deep question in the book. And he is somebody who really wants to be good. And I think

one way of framing this question about, "Is he being good? Will he be good? Can he be good?" is to say, "Should he, will he do what Dario did, standing up to the Pentagon about red lines on military usage and surveillance?" And I don't think he is going to do that. And I think the way he would rationalize this would be to say, "Look, you got to pick your moment with this stuff. If you make a stand and actually the Pentagon does what the hell it wants anyway,

you didn't really make the world better. My best shot at making the world better and making AGI

safer is to go through the route which is the only route that can get us to AGI safety and that is government intervention, forcing safety rules on all the labs at once. Because otherwise, some are safe, some are not safe and the ones that are not safe are going to screw it up for everybody. And that's the route that I think Demis wants to push. Problem is you have the Trump administration,

they just want to etc. And so all you can do for now I think is to keep this conversation alive

with other governments. And then maybe when there's a new administration in the U.S. we could see a conversation. You write that Demis used to inform job candidates at DeepMind that if they signed on, they should quote, "prepare for a climactic endgame when they might have to disappear into a bunker." Why would they have to disappear into a bunker? And do they still tell the job candidates that? Yeah. So the idea was when you get very close to AGI and it's super dangerous,

you're going to be, A, subject to potential attack by bad guys who want to steal the technology. And, B, you really don't want to be distracted by quotidian real-world stuff. So you just take it off to the desert. Yeah, you leave your TikTok on your phone at home, like when, I think, Kevin used to lock his phone up in the box, as I recall. That's correct. And so you do a Kevin and you go and you really, really focus and you really get the AGI right in the last stages. That was sort of Demis's vision.

And to test whether he really meant it, I was having dinner with somebody who used to be at DeepMind in that period around 2015, 2016, and had now left. And I said, this wasn't really true, he didn't really mean it? And, oh yeah, yeah, this guy said to me, "If Demis had told me any time I was working at DeepMind that I had to take the next flight to Morocco and hide, I would have said I'd been given fair warning." Wow. So the bunker is in Morocco, just so everyone knows.

Yeah, and I said, "Why Morocco?" And he said, "Well, you know, it's the desert." And you know, the Manhattan Project was in the desert. Oh, it's the Oppenheimer syndrome. These guys and their Manhattan Project analogies, man. I don't know if they read to the end of that story. I just got that. Well, um, Sebastian, you spent many years writing about hedge funds. And I remember encountering your work back when you were writing about hedge funds and hedge fund managers.

You're now spending time with the new masters of the universe. And I'm curious what, if any, observations you have about how those two classes of people, the AI leaders and the hedge fund managers, are similar or different. Well, I would say that the hedge fund guys are playing a game inside a set of steady, well-understood rules. They're not rethinking humanity. They're not rethinking everything about society. They're not changing the way we bring up our kids.

They're not changing the conception of what it means to be human. Speak for yourself. I'm training my kid to do algorithmic arbitrage. He's four. Terrible returns, down 100% this year. Anyway, sorry, carry on. Yeah, look, but I just think that AI is so, so much bigger than, you know, some kind of

event-driven arbitrage or whatever you want to talk about with hedge funds.

Maybe a last question for me. It's a question about the writing of this book and how you decided to frame it. You know, it strikes me, Sebastian, that we don't know how AI is going to go. You know, we don't know whether AI is going to turn out to, you know, cure a bunch of human disease and usher in the utopia, or usher in these, like, far darker scenarios.

I think it's clear that you have a lot of respect for Demis and the work that he's doing,

but there's also this risk that things go really, really badly. So I'm curious as you wrote the book, how you approached that tension and the sort of not knowing of how history is going to judge this person who you've now gotten to know so well. I thought of the book as a book about that tension. In other words, I'm trying to do a portrait of somebody who has his hands on the 21st century version of the nuclear material, who has that tingling sense of playing with something that

could destroy humanity. What's it feel like when you're creating that? Can you sleep? How do you live with it? And I think I've delivered a portrait of somebody who's in that hot seat, and hopefully that remains interesting for some time, and it's not something that depends on how this AI development story ends. Well, Sebastian, thank you so much for coming on. The book is called The Infinity Machine, and it is out now. Thank you, Kevin, and thank you, Casey. Thank you. Thank you, Sebastian.

When we come back, a game of HatGPT. It involves snowmen. Would you like to build one?

Hmm. I don't think so. Not after what happened to Olaf. In theory, I knew that this kind of thing can happen in any family.

Upstanding citizens are always turning out to be secret criminals and I wouldn't even call

my cousin Alan an upstanding citizen. But it's one thing to know and another thing to understand. Alan, murder, me. What the hell was Alan thinking? From Serial Productions and the New York Times, I'm Emma Gesson, and this is The Idiot. Listen wherever you get your podcasts. All right, Casey. Well, we took a little break last week, and there's been a lot of tech news, so we feel like we should do a round-up and play a round of HatGPT.

HatGPT, of course, is the game where we put recent news stories into a hat, draw slips of paper out of the hat, discuss them, and then when one of us gets bored, we say to the other, stop generating. And if you can't see us, we're using the official Hard Fork hat merch. And, Casey, it appears that these are sold out in the New York Times store. Not that specific hat, which was of course a Hard Fork Live exclusive. Yes, this is an exclusive. You can't get this one,

but you also can't get any of the other ones. Here's the important point. You cannot get a Hard Fork hat anymore, so stop trying. Now, someone did suggest to me the other day that we should make hard hats for Hard Fork, like a yellow construction vibe. Well, we could wear them over to the new studio, which is being built for us, right? That's true. Do you think we should make that? Yeah, Hard Fork hard hat. That's a perfect piece of merch. Great.

All right, Casey, you go first. All right, Kevin. This first story comes to us from 404 Media: an AI agent was banned from creating Wikipedia articles, then wrote angry blogs about being banned. I feel like I've heard something like this before. So, Kevin, once again, agents are

writing blog posts. What do we make of this? This would never happen on Grokipedia. No. Look,

I think this is just going to be the year that every system on the internet that is built on human contribution and review is going to break. And it will break not only because of the AI tools, but because people are letting them loose onto websites where they are doing things like editing Wikipedia articles and defaming people who contribute things to GitHub projects. We heard from Scott Shambaugh about that on a previous episode. But I think this is going to be a challenge. I have

started talking about the inbox apocalypse that is going to hit this year, where everything that is normally sort of reviewed and bottlenecked by humans is just going to be overwhelmed and flooded with AI submissions. Absolutely. I mean, I'm already getting emails now every week from something

claiming to be an AI agent that says, you know, it's running a company, you know, but it's always

sort of like, let me know if you want to talk to my human. And I was like, your human better hope,

I don't catch them in a dark alley, because this does not belong in my inbox or, frankly, anywhere. Yeah, I'm getting these too. It's like, it's a total scourge. It's somehow even more annoying than

the, like, faceless PR spam that you and I get. There is no

thing that anyone's agent could do or say to get me to respond to it anyway. So, do with that

information what you, I hope that goes into your training data. Stop generating. All right. Next up.

This one comes to us from Sean Hollister at The Verge, titled "I met Olaf, the Frozen robot who might be the future of Disney parks." Sean reported in mid-March about his interaction with a new animatronic Olaf, the snowman robot from Frozen. It weighs 33 pounds, it was trained with an Nvidia GPU, and it is controlled by an operator using a Steam Deck. But when it made its debut at Disneyland Paris, well, Casey, something happened. Should we take a look? Let's take a look.

All right, Olaf the snowman is talking, waving his stick arms. Oh no! No! We lost him. Olaf! Oh, the carrot nose falls off. Oh, oh. There's something about the way that he very slowly falls onto his back. Oh, no. Yeah. Twenty children just got lasting trauma. They're going to be talking about this in therapy.

Look, what do you expect? Like, of course he was frozen. That's what the whole movie is about.

Do you want to kill a snowman? Okay, I mean, it's just reliably very funny when you create an animatronic thing for a child and then it is, like, revealed to be a machine, and it just sort of feels like a Lovecraftian horror. Yeah. It's like something about that transition from like a cutesy, cutesy thing to like its eyes are, you know, bulging out of its head

and the sparks start flying out of the back. I'll never forget the day at Chuck E. Cheese as a kid when I

learned that the guitar-playing mouse wasn't real. You know Chuck E. Cheese's full government name, right? What is it? You don't know? It's not a joke. It's Charles Entertainment Cheese. Come on. Wow, you learn something new every day. Stop generating. All right. Now it's my turn. Well, this, Kevin, is a story about the Claude Code leak. So Kevin, what do you make of this Claude Code leak? Well, I think it's a big deal in part because the

agentic sort of coding harness that is around Claude Code is really the special sauce, right? The model underlying it is part of what makes Claude Code and other agentic coding

systems good at coding, but it's really all the stuff around it and that's what leaked. It is not the actual

like, weights or the source code of Opus 4.6 or whatever model people are running inside Claude Code. It's the sort of apparatus around it that makes it quite effective. So within hours of this leak, there were people who had cloned it and set up their own versions of it. I imagine it's a very busy week over at the Anthropic legal department trying to get all that stuff taken down. But look, I think this kind of thing was inevitable. Maybe not at Anthropic, but the agentic coding tools were all going to get good. They were all going to sort of reverse engineer Claude Code and figure out what made it better, but I think this probably just accelerated that.
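To make the "harness" idea concrete, here is a minimal, generic sketch of an agentic coding loop. To be clear, this is not Anthropic's code or anything from the leak; callModel and runShellCommand are hypothetical stand-ins for whatever model API and sandboxed executor a real tool would use. The point is that the loop, the tool plumbing, and the prompting around the model are the "harness," and that is the part that leaked.

// Illustrative only: a bare-bones agentic coding loop in TypeScript.
// callModel and runShellCommand are hypothetical parameters, not real APIs;
// real harnesses add sandboxing, permissions, file editing, and much richer prompts.
type ModelReply =
  | { kind: "run"; command: string }   // the model asks the harness to execute a command
  | { kind: "done"; summary: string }; // the model thinks the task is finished

async function agentLoop(
  task: string,
  callModel: (transcript: string[]) => Promise<ModelReply>,
  runShellCommand: (cmd: string) => Promise<string>,
  maxSteps = 20,
): Promise<string> {
  const transcript: string[] = [`TASK: ${task}`];
  for (let step = 0; step < maxSteps; step++) {
    const reply = await callModel(transcript);
    if (reply.kind === "done") return reply.summary;
    // The harness, not the model, is what actually touches the machine.
    const output = await runShellCommand(reply.command);
    transcript.push(`RAN: ${reply.command}`, `OUTPUT: ${output}`);
  }
  return "Stopped: step limit reached.";
}

Swap a different model in behind callModel and the loop still works, which is part of why people could bolt the leaked harness onto other models so quickly.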

When I saw this, my first thought was: right now, Kevin Roose is somewhere vibe coding with Claude Code using the downloaded leaked Claude Code harness. I have not yet downloaded the leaked Claude Code harness, but I have seen other people sort of taking it and then putting it on top of, like, an open-source Chinese model or something, sort of Frankensteining their own version of Claude Code that they can run. And I will say, the closer I get to my rate limits on Claude Code, the more I'm tempted to do something like that. That makes sense. Here's the last thing I'll

say. If Anthropic is looking for a new harness for Claude, they might want to pick one up at Mr. S Leather in San Francisco, down in the Folsom district. Really nice options down there. All right, stop generating. Okay, okay. Next up, out of the hat. Oh, this one is good: the AI fruit drama on TikTok that's too juicy to pass up. For this one, we should watch a clip from NBC News.

All right, everybody. So tonight, we are taking a look at one of the most popular shows circulating on TikTok that's causing, let's just say, some juicy drama. Because the stars of the show are AI-generated fruit. Welcome to Fruit Love Island, where eight single fruits are about to flirt, fight, and clash. Things get messy fast. The guy I want to couple up with is Ben Anita. So this is like sort of a Love Island-style reality show featuring AI-generated fruits.

There's a very ripped banana who is, you know, attracting attention from the lady fruits. And it's all very silly. But this is going mega viral. This is the big new trend. I just watched a banana kiss a pineapple, and that's not in the Bible. Do you think I could win a multimillion-dollar jury verdict for being forced to watch that?

I'm calling my lawyer.

My mental health did not improve watching Fruit Love Island.

Watch what happens with the passion fruit in season three. All right, stop generating.

This company is secretly turning your Zoom meetings into AI podcasts. This one also comes to us from 404 Media. And here's a name for a company: Webinar TV. Wow. Two great tastes that taste better together: webinar and TV. Has there ever been a worse word in the English language than webinar? Not to my knowledge. Apparently this company is secretly scanning the internet for Zoom meeting links, recording the calls, and turning them into AI-generated podcasts for profit, Kevin. Oh my god.

In some cases, people only found out that their Zoom calls were recorded once Webinar TV reached out to them to say their call had been turned into a podcast, in an attempt to promote Webinar TV's services. Wow. What is happening? What is happening? Okay, I want to start by saying: I'm committed to making a podcast with you for the rest of my life. But if we ever get overtaken on the charts by an AI-generated Webinar TV podcast that's been trained on people's boring-ass Zoom meetings, I am leaving this industry.

Here's why this is such great news. I think a lot of podcasters struggle with the idea that maybe their podcast, you know, maybe they didn't have a great episode, maybe they're wondering, like, is this thing good enough to put out on the internet? Congratulations, because every single human-made podcast is better than every single Webinar TV episode that's ever been released. Yeah, I mean, these have to be the most boring podcasts ever created. What are you going to talk about? Is it called Action Items? Is it called

Circle Back? What's the title of this podcast? Touch Base, a limited eight-part series. Actually, I heard there's a great series over on Webinar TV right now. It's called "Oh, I Think You're on Mute." So you'll want to check that one out. All right, stop generating. Next, out of the hat. We have North Korean hackers suspected in Axios software tool breach. This comes to us from Bloomberg, and it's about Axios, not the media company. I actually would

prefer to read a story about this from Axios, if you have one on hand. This is an open-source tool widely used to develop software applications, and this has been a big security breach. Hackers were able to breach one of the few accounts that can release new versions of Axios late on Monday and publish malicious versions. Axios is downloaded about 80 million times every week.

Anyone who has downloaded the malicious version of Axios could then have their own computer and the data on it stolen by hackers. This is being attributed to North Korea. Seems really bad. Yeah, man. There are a lot of cybersecurity incidents we'll talk about where it's like, you know, but no personal data was stolen, or nothing sensitive was at risk. This is one where it's like, no, everything was at risk. This is one of the bad ones. And if you've been messing around with npm over the past week, you probably need to take a look at this.
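For anyone who actually wants to do that check, here is a minimal sketch, with assumptions spelled out: it assumes an npm project with a package-lock.json, and the SUSPECT_VERSIONS list is an empty placeholder, because the episode doesn't name the compromised releases; you would fill it in from the official advisory.

// Hedged sketch: list every axios entry in package-lock.json and flag
// any versions you have marked as suspect. Placeholder data only.
import { readFileSync } from "node:fs";

// Placeholder: populate from the official security advisory.
const SUSPECT_VERSIONS = new Set<string>([]);

type LockPackage = { version?: string };
type LockFile = { packages?: Record<string, LockPackage> };

const lock: LockFile = JSON.parse(readFileSync("package-lock.json", "utf8"));

for (const [pkgPath, pkg] of Object.entries(lock.packages ?? {})) {
  if (pkgPath.endsWith("node_modules/axios") && pkg.version) {
    const flag = SUSPECT_VERSIONS.has(pkg.version) ? "  <-- flagged" : "";
    console.log(`${pkgPath}: axios@${pkg.version}${flag}`);
  }
}

Plain old npm ls axios or npm audit would tell you much the same thing without a script; the point is just to know exactly which versions made it onto your machine.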

Yeah, it's really, I think this is going to be one of the biggest stories of the year. It's just

what is happening in cybersecurity right now. I was watching this YouTube video. If you ever, you know, need something to keep you up at night, watch a talk given by this guy Nicholas Carlini, who is a security researcher at Anthropic, at a cybersecurity conference recently. It is like the most terrifying conference speech ever given. Because what he's basically saying is these

AI tools have gotten better than almost any human hacker, any human security expert, at finding vulnerabilities in tools, even tools that have been around for decades, like the Linux kernel. These language models are now finding bugs in them. And basically every piece of code that exists is going to need to be rewritten and substantially hardened, because we are facing, like, an onslaught of these very sophisticated AI tools that can find every little bug and problem in them.

Well, I am going to watch that talk just as soon as I'm finished watching Fruit Love Island. But, you know, the thing that this brought to mind for me, Kevin, was that last week, while we were away, there was this Anthropic leak where someone found a draft of a blog post that said that Anthropic was delaying the release of its next model so that it could share it with

cyber defenders basically. To my knowledge, we have not seen something like this happen since

GPT-2 in 2019: one of the big labs saying, like, essentially, we're afraid to release this thing because of what it might have wrought. What is the present tense? What it might wreak? Yes, because of what it might wreak. That's wreak, with the W. Yes. Speaking of reeking, take a shower next week. Hey, I was in a hurry. All right. Stop generating.

Okay. Okay. So this is actually a two-parter. The first is about the end of Sora, which was a prediction that I made at our year-end episode. Yes, you called this one. This was my low-confidence prediction for the year, and it's already come true by March. And then a second

story, which, I think, crazily enough, is actually related. OpenAI has apparently shelved its plans to release the erotic chatbot, or sort of, like, the adult mode, that it said it was going to be bringing soon to ChatGPT in an effort to boost engagement. So Kevin, I'd like to know what you made of those two changes. So I think you were smart to predict the end of Sora. I think

the story with Sora never quite made sense to me. Like, it was obviously a very cool piece of

technology. It was devastatingly expensive to run, is my understanding; like, generating all those short videos was computationally quite pricey. And so I think they are making the decision to sort of spread their bets a little less and consolidate around, like, a few projects, one being enterprise AI, one being coding and sort of automating AI research. But I think they maybe made a few too many side bets in the past couple of years that they are now seeing were expensive

and diverted resources away from the core. I have to say, I was personally really glad to see both of these changes. Like, the release of this infinite slop feed app last year, and the company saying that they were going to release this adult mode while they were still having all of these issues with, like, psychological problems that some of their users were experiencing as a result of getting a little too close to their chatbot. I just thought both of those seemed like

really irresponsible moves and just like contrary to what they said their mission was. So I was actually just really happy to see them say you know what we're not doing any of these things anymore.

Like I think that was the right move. Now did they do that out of the goodness of their heart

and some sort of, like, you know, moral awakening that they had? No. They saw Anthropic, which had started to print money because Claude Code was taking off, and they said, we want to get a piece of that. But hey, whatever it took, I'm just glad it's happening. Yeah, stop generating. Last up in the hat: Kalshi announces itself as the safe, regulated prediction market in a new ad campaign. Kalshi has recently been putting up these green ads around D.C., and I've actually seen them

in San Francisco. The first one says: rule number one, Kalshi bans insider trading. The second one says: rule number two, we don't do death markets. Casey, your take? Rule number three,

we'll always shoot you in the front, never in the back. Who are these people? What? Like, these ads are raising a lot of questions already answered by the ads. Truly. Truly. It's just so funny to me. Like, you know, I went to this prediction markets conference, like, several years ago. Yeah, you love to bring this up. Go ahead. And, like, people from Kalshi were there, people from Polymarket were there, people from all these, like, you know, obscure, like, prediction markets. And it was like 50 people who were interested in this stuff. And it wasn't legal at the time. And so they were all using, like, sort of play money and, like, workarounds. And there was just no part of me that was like, in three years, this will be the dominant industry in America, and they will be taking out bus ads to tell people that they don't do death markets. I know, but at the same time, I keep reading all of these, like, stories and blog posts that are like, you know, why is this generation turning to prediction markets? Is this really the only future they see for

themselves? It's like, no, they used to be illegal, and now they're legal. People love to gamble

if you let them. You are now letting them gamble. So that's why they've hooked this younger generation.

Yeah, you don't think it's because of the information-harnessing potential and the wisdom of the crowds? I'm still waiting for the wisdom of the crowds on a Kalshi market to improve my life. Yeah, well, you're not going to find it when it comes to death or insider trading. Kalshi rule number four: gambling is bad. That's the ad I dare them to put up. Let's close the hat, Case. I love the ol' hat. That was HatGPT. A lot, a lot going on. A lot going on. Busy week. Busy week. Never a dull day here in Silicon Valley. No, sir.

Hard Fork is produced by Whitney Jones and Rachel Cohn. We're edited by Viren Povage. We're fact-checked by Caitlin Love. Today's show is engineered by Chris Wood. Our executive producer is Jen Poyant. Original music by Elisheba Ittoop, Marion Lozano, and Dan Powell. Video production by Sawyer Roque, Jake Nicol, and Chris Schott. You can watch this whole episode on YouTube at youtube.com/hardfork. Special thanks to Paula Szuchman, Pui-Wing Tam, and Dalia Haddad. You can email us at hardfork@nytimes.com.
