So much of what's happening today in the AI industry is extremely inhumane.
“But this is what I think I was going to do.”
And logically, it could be the case that the civilization that accelerates the research with AI is going to be the superior civilization. No, it's not. This is a prediction that you're making, right? Altman's making it, Zuckerberg's making it, everyone's making it.
And do you know what the common feature of all of them is? They profit enormously off of this myth. You know, I have all these internal documents showing that they're purposely trying to create that feeling within the public, so that they can extract and exploit and extract and exploit. So what do we do about it?
We need to break up the empires of AI.
“You know, I've been covering the tech industry for over eight years.”
I interviewed over 250 people, including former or current OpenAI employees and executives. And I can tell you that there are many parallels between the empires of AI and the empires of old. Right? They lay claim to the intellectual property of artists, writers, and creators in the pursuit of training these models.
Second, they exploit an extraordinary amount of labor, which breaks the career ladder.
Because someone gets laid off, and then they're hired to train the models on the very job that they were just laid off from, which will then perpetuate more layoffs once that model develops that skill. And when they talk about how there are going to be some new jobs created that we can't even imagine, a lot of the jobs that are created are way worse than the jobs that were there. And then there's the environmental and public health crisis that these companies have created.
And how they're able to also spend hundreds of millions to try and kill every possible piece of legislation that gets in their way, and will censor researchers who are inconvenient to the empire's agenda. But what I'm saying is not that these technologies don't have utility. It's that the production of these technologies right now is exacting a lot of harm on people. But we have research that shows that the very same capabilities could be developed in a different way that doesn't have all of these unintended consequences.
So let's talk about all of that. Guys, I've got a favor to ask before this episode begins. The algorithm, if you follow a show, delivers the best episodes from that show very prominently in your feed. So when we have our best episodes on this show, the most shared episodes, the most rated episodes, I would love for you to see them. And the simple way to make that happen is to hit that follow button.
But also, it's the simple, easy, free thing that you can do to help us make this show better. And I would be hugely grateful if you could take a minute on the app you're listening to this on right now and hit that follow button. Thank you so, so, so, so much. Karen Hao, you've written this book in front of me here called Empire of AI.
Dreams and Nightmares in Sam Altman's OpenAI. I guess my first question is: what is the research and the journey you went on in order to write this book, which we're going to talk about, and the subjects within it, today? I took a strange route into journalism. I studied mechanical engineering at MIT. And so, when I graduated, I moved to San Francisco. I joined a tech startup. I became part of Silicon Valley. And I basically received an education in what Silicon Valley is about, because a few months into joining a very mission-driven startup that was focused on building technologies that would help facilitate the fight against climate change, the board fired the CEO because the company was not profitable. And this was, in hindsight, a very pivotal moment for me, because I thought: if this hub is ultimately geared towards building profitable technologies, and many of the problems in the world that I think need solving are not profitable problems, like climate change, then what are we actually doing here? Like, how did we get to a point where innovation is not necessarily working in the public benefit?
And sometimes even undermining the public benefit in pursuit of profit. In that moment, I had a bit of a crisis where I thought, well, I just spent four years trying to set myself up for this career that I now don't think I am cut out for.
And I thought, well, I might as well just try something totally different. I've always liked writing.
“And that's how after two years, I landed at a role at MIT Technology Review covering AI full-time.”
And that gave me a space to then explore all these questions of: who gets to decide what technologies we build? How do money and ideology also drive the production of these technologies? How do we ultimately make sure that we actually reimagine the innovation ecosystem to work for a broad base of people all around the world?
So that is kind of how I then set off on the journey of ultimately writing a book.
I didn't realize that I was working towards writing a book, but starting in 2018 when I took that job was essentially the moment in which I began researching the story that I document in it.
That was a very timely moment to start working in artificial intelligence. For anyone that doesn't know, this is pre-OpenAI, pre the ChatGPT launch moment that shook the world. But in writing this book, you interviewed a lot of people and went to a lot of places.
“Can you give me a flavor of how many people you've interviewed, where it's taken you around the world, et cetera?”
I interviewed over 250 people, so over 300 interviews; over 90 of those people were former or current OpenAI employees and executives.
So the book covers the inside story of OpenAI's first decade and how it ultimately got to where it is today.
But I didn't want to write a corporate book. I felt very strongly that in order to help people understand the impact of the AI industry, we would also have to travel well beyond Silicon Valley. Companies tell us that AI is going to benefit everyone and that's their mission. But you really start to see that rhetoric break down when you go to the places that look nothing like Silicon Valley, that speak nothing like Silicon Valley. And that have a history and culture that are fundamentally different as well.
And that's where you start to really understand the true reality of how this industry is unfolding around us. I often try and steer conversations. But in this situation, I feel like it's probably my responsibility to follow.
So with that in mind, I'm going to ask you: where does this journey begin, and where should we be starting, if we're talking about the subjects of Empire of AI, and AI, artificial intelligence, generally?
And also, I'd say, one thing I'm really keen to do in this conversation, which I often see left out of conversations, is: let's assume that our viewer knows nothing about AI. Yeah. So they don't know what scaling laws are, or GPUs, or compute, or whatever. And let's try and keep this as simple as we possibly can in terms of language, or explain all the complicated language, so that we can bring as many people with us as we possibly can. Yes. Where should we start? I think we should start with when AI started as a field. So this was back in 1956.
And there were a group of scientists that gathered at Dartmouth College to start a new discipline, a scientific discipline, to try and chase an ambition.
And specifically, an assistant professor at Dartmouth, John McCarthy, decided to name this discipline artificial intelligence.
This was not the first name that he tried. Previously, he had tried the name "automata studies."
And the reason why some of his colleagues were concerned about this name was because it pegged the idea of this discipline to recreating human intelligence. And back then, as is true today, we have no scientific consensus around what human intelligence is. There's no definition from psychology, biology, neurology, and in fact, every attempt in history to quantify and rank human intelligence has been driven by nefarious motives. It's been driven by a desire to prove scientifically that certain groups of people are inferior to other groups of people.
There are no goal posts for this field, and there are no goal posts for the industry when they say that they are ultimately trying to create AI systems that would be as smart as humans. How do we even define what that means? And when are we going to get there, if we don't know how to define the destination? And what that effectively means is that these companies can just use the term artificial general intelligence, which is now the term used to refer to this ambitious goal of recreating human intelligence.
They can use it however they want to, and they can define and redefine it based on what is convenient for them. So in OpenAI's history, it has been defined and redefined many times. When Sam Altman is talking with Congress, AGI is a system that's going to cure cancer, solve climate change, cure poverty. When he's talking with consumers that he's trying to sell his products to, it's the most amazing digital assistant that you're ever going to have. When he was talking with Microsoft, in the deal that OpenAI and Microsoft struck, where Microsoft invested in the company, it was defined as a system that will generate a hundred billion dollars of revenue.
And on OpenAI's own website, they define it as highly autonomous systems that outperform humans at most economically valuable work. This is not a coherent vision of one technology. These are very different definitions that are spoken out loud to the audience that needs to be mobilized: to ward off regulation, or to get more consumer buy-in into the industry's quest, or to get more capital and more resources for continuing on this journey with ambiguous definitions.
I mean, speaking about different definitions through time.
There are other threats that I think are more certain to happen, for example, an engineered virus, but AI is probably the most likely way to destroy everything.
In general, when Altman is writing for the public or speaking for the public, he does not just have the public as the audience in mind. There are other people that he is trying to motivate or mobilize when he says these things. And in that particular moment, Altman was trying to convince Elon Musk to join him on co-founding Open AI. And Musk, in particular, was spending all of his time sounding the alarm on what he saw as a huge existential threat that AI could pose. And so in that blog post, if you look at the language that Altman uses side by side with the language that Musk was using at the time, it mirrors all the things that Musk said.
Ten years ago, Musk was going on podcasts, tweeting, whatever, saying that the greatest existential risk to humanity was AI. Yeah, and, you know, in his earlier writing, Altman had said that there were other things actually more likely to happen, like engineered viruses. Up until then, Altman had been talking about engineered viruses. And so now that he needs to pivot to speak to an audience of one, to Musk, he needs to resolve the contradiction between what he had previously been saying and what he's now elevating as his new central fear, so that it matches Musk's central fear. So that's why he's like, "I think this now, even though before I said that."
Are you saying that Sam Altman manipulated Musk? Because Elon did end up donating a huge amount of money to OpenAI and co-founding it, I believe, with Sam Altman. Elon Musk did end up co-founding it with Altman. And certainly from Musk's perspective, he does feel manipulated, because he feels like Altman was engineering his language in a way that would make Musk trust him as a partner in this endeavor. And of course, Musk then leaves, and through some of the documents that came out during the lawsuit that Musk and Altman are engaged in now, it has become clear that there was a degree to which Musk was actually muscled out a little bit.
And so that's why he's left with this very intense personal vendetta against Altman, saying that somehow Altman tricked him into being part of this.
So in 2015, Sam Altman is writing these blog posts saying this is one of the greatest existential threats. At that same time, in 2015, Musk is giving some very famous speeches, including at MIT, where he said that AI was the biggest existential threat and compared developing AI to summoning the demon. And what you're saying here is that Sam Altman was just mirroring the language that Elon was using, to get Elon involved in OpenAI. And later, it appears, and again, there's a legal case taking place now, that Sam might have muscled Elon out in some capacity.
Yeah, so we know from the lawsuit, and the documents that have come out in the lawsuit, that Ilya Sutskever, who was the chief scientist of OpenAI at the time, and Greg Brockman, chief technology officer at the time, were deciding whether or not to maintain OpenAI as a non-profit, because it was originally founded as a non-profit.
They decided, okay, we need to create a for-profit entity, but the question was: who should be the CEO of this for-profit entity?
Should it be Musk or should it be Altman? Because they were the two co-chairmen of the non-profit.
And in the emails, it became clear that Ilya and Greg first chose Musk to be the CEO.
But through my reporting, I discovered that Altman then appealed personally to Greg Brockman, who was a friend of his; they'd known each other for many years through the Silicon Valley scene. And he said: don't you think that it would be a little bit dangerous to have Musk be the CEO of this company, this new for-profit entity? Because he's a famous guy, he has a lot of pressures in the world; he could be threatened, he could act erratically, he could be unpredictable.
And do we really want a technology that could be super powerful in the future to end up in the hands of this man?
“And that convinced Greg and Greg then convinced Ilya, you know, I think there's a point here, do we really want to give this much power to Musk?”
And that is why Musk then leaves, because then the two switch their allegiances, they say, "Actually, we want Altman to be the CEO."
Then Musk is like, "If I'm not CEO, I'm out."
So it sounds like Sam, again, managed to persuade someone to do something.
“I guess this begs the question. What do you think of Sam Altman?”
I think he's a very controversial figure. You did an interesting pause. It's a pause where someone tries to select their words. Well, this is what's so interesting about those interviews: people are extremely polarized on Altman. No one has in-between feelings about him. Either they think he's the greatest tech leader of this generation, akin to the Steve Jobs of the modern era, or they think that he's really manipulative, an abuser, and a liar. And what I realized, because I interviewed so many people, is that it really comes down to what that person's vision of the future is and what their goals are.
So if you align with Altman's vision of the future, you're going to think he's the greatest asset ever to have on your side.
Because this man is really persuasive. He's incredible at telling stories.
“He's incredible at mobilizing capital, at recruiting talent, getting all the inputs that you need to then make that future happen.”
But if you don't agree with his vision of the future, then you begin to feel like you're being manipulated by him to support his vision, even if you fundamentally don't agree with it. And this is the story, especially, of Dario Amodei, CEO of Anthropic, who was originally an executive at OpenAI. So if people don't know, Dario runs Anthropic, which is the maker of Claude; a lot of people probably are familiar with Claude. Yeah. And it's one of the biggest competitors to OpenAI. And Amodei, at the time when he was an executive at OpenAI,
thought that Altman was on the same page with him. And then, over time, he began to feel that Altman was actually on exactly the opposite page from him, and felt that Altman had used Amodei's intelligence, capabilities, and skills to build things and bring about a vision of the future that he actually fundamentally didn't agree with.
“And so that's why people end up with this bad taste in their mouths.”
And so, you know, I've been covering the tech industry for over eight years, and I've covered many companies: Meta, Google, Microsoft, in addition to OpenAI. And Altman is the only figure with whom I've seen this degree of polarization, where people cannot decide whether he's the greatest or the worst. You mentioned Dario there. And what I found really interesting is to look at how people's quotes evolve over time with their incentives.
So I was looking at all of the things they've said on the record, on podcasts, in their blog posts, to see how it's evolved over time. And Dario, who is the former VP of Research at OpenAI, and has now moved on to Anthropic, who are taking a slightly different approach to developing AI, said back in 2017, while he was still at OpenAI, and this is a quote: "I think the extreme end is the Nick Bostrom style of fear that an AGI could destroy humanity. I can't see any reason in principle why that couldn't happen."
My chance that something goes really quite catastrophically wrong on the scale of human civilization might be somewhere between 10% and 25%. And also you mentioned Ilya, who was a co-founder of OpenAI, and then left.
I guess the first question I'd ask is, why did Ilya leave?
That's a great question. So he was instrumental in trying to get Sam Altman fired. And he's another one of the people who, over time, began to feel like he was being manipulated by Altman toward contributing to something that he didn't believe in. And, you know, because I interviewed a lot of people: Ilya in particular had two pillars that he cared about deeply. One is making sure we get to so-called AGI, and the other is making sure that we get to it safely. And he felt that Altman was actively undermining both things.
He felt that Altman was creating a very chaotic environment within the company, where he was pitting teams against each other, where he was telling different things to different people. Have you ever spoken to him? I have. I interviewed him in 2019 for a profile that I did of OpenAI for MIT Technology Review. Back in 2019, he has a quote where he says: the future's going to be good for AGIs regardless; it would be nice if it was also good for humans as well.
It's not that it's going to actively hate humans or want to harm them, but it's just going to be so powerful, and I think a good analogy would be the way that humans treat animals.
It's not that we hate animals.
But when the time comes to build a highway between two cities, we are not asking the animals for permission. We just do it because it's important to us.
And I think by default, that's the kind of relationship that's going to be between us and AIs which are truly autonomous and operating on their own behalf.
That was in 2019 that you interviewed him. One of the things that I feel like we should take a step back to examine is going back to this idea of what even is artificial intelligence and what do we mean by intelligence. And a huge part of the views of the different people on the quotes that you're reading, derives from a specific belief that they each have in this question of what is intelligence, what constitutes intelligence.
For Ilya, he has throughout his research career felt that ultimately our brains are giant statistical models. This is not something that we actually know, but this is his own hypothesis.
This is also the hypothesis of his mentor, Geoffrey Hinton, who has also been on this podcast. This is why they have such a strong conviction in the idea of building AI systems that are statistical models, and that this particular approach is going to lead to systems as intelligent as we are. It's a hypothesis that they have, not one that has been proven by science, and some people vehemently disagree with them on this particular point. But step into their shoes, take on the hypothesis, and assume that it's true: that our brains are in fact statistical engines, and that these systems that they're building are also statistical engines that they're making bigger and bigger and bigger until they become the size of the human brain.
Within their framework, they say that making this comparison, where the system will become equal to human intelligence and then maybe exceed human intelligence, is reasonable. And Ilya gave a talk at one point at this really prominent AI research conference that happens every year, called Neural Information Processing Systems; it's a mouthful. But he gave this keynote where he shows this chart of the brain sizes of species against their intelligence, and it's roughly linear: the bigger the brain, the more intelligent the species.
And so for him, he thinks he's building a digital brain, because he thinks brains are just statistical engines. So from that logic it's like: okay, if we then build a bigger statistical engine than the human brain, then, based on this chart, it will be more intelligent, and then we will be subjected to the same treatment that we've subjected animals to.
But it's really important to understand that these are scientific hypotheses of specific individuals within the AI research community, and there's a lot of debate about whether this is in fact the case.
And some of the biggest critics say it's very reductive to think of our brains as simply just statistical engines. Why does it matter to know the mechanism? Is it not just important to know the outcome, which is that it's going to be able to make a video for me, or agents are going to be able to do the work that I do? Does it really, really matter for us to know the mechanism behind it? It does. It matters because these companies are driving their future actions based on this hypothesis. They have decided: we think that this hypothesis is true, so we should just continue building larger and larger statistical models in the pursuit of artificial general intelligence.
And that's then having global consequences. In order to continue doing that, they're hoovering up more and more data, they're building more and more data centers, they are, you know, exploiting more and more labor in order to continue on this path.
Here's a question that I think is important to ask: why are we trying to build systems that are duplicative of humans?
In this conversation right now, we've just taken the premise of this industry as a good thing. They say that we should be building AGI, so we say that we should be building AGI.
But I would like to ask, like, why are we doing that? Why is it that we are building a technology that is ultimately designed to replace and automate people away?
That is not the enterprise of technology. We should be building technology whose purpose, as it has been throughout history, is to improve human flourishing, not to replace people. And so a critical part of my critique of these companies, and of the scientists who have just adopted this goal and relentlessly pursued it, with enormous capital and enormous resources, is: is this the right goal?
Like, why are we doing this?
So why are they doing it? I mean, you've interviewed all these people, I think it's 300 interviews in total, around 90 of the people from OpenAI, the maker of ChatGPT. Why do you think they're doing it?
“I think it's because they're driven by an imperial agenda and that is why I call these companies Empires of AI.”
What do you mean by an imperial agenda? What does that mean? Empire is the only metaphor that I've ever found to fully encapsulate all of the dimensions of what these companies do, the scale that they operate at, and what motivates them to do what they do. There are many parallels that you see between what I call the empires of AI and the empires of old. They lay claim to resources that are not their own in the pursuit of training these models: the data of individuals, the intellectual property of artists, writers, and creators. They're land-grabbing in order to build these supercomputer facilities for training the next generation of models.
Second, they exploit an extraordinary amount of labor. They contract hundreds of thousands of workers all around the world, including in the US to ultimately make these technologies.
We can talk about that more. And they also design their tools to be labor-automating, so that when the technologies are deployed, they erode labor rights. And this is a political choice that they make. Third, they monopolize knowledge production: they project this idea that they're the only ones who really understand how the technology works, and so if the public doesn't like it, it's because they don't actually know enough about this technology. They do this to the public.
They do this to policymakers and they've also captured the majority of the scientists that are working on understanding the limitations and capabilities of AI.
“You think they're gaslighting the public in a way? They are. Yes. So if most of the climate scientists in the world were bankrolled by fossil fuel companies, do you think we would get an accurate picture of the climate crisis?”
No. And in the same way, the AI industry employs and bankrolls most of the AI researchers in the world. So they set the agenda on AI research in subtle ways, simply by funneling money to their priorities, so that only certain types of AI research are produced; but they will also censor researchers when they do not like what a researcher has found. And so I talk about the case of Dr. Timnit Gebru in my book, who was the ethical AI team co-lead at Google; she was literally hired to critique the types of AI systems that Google was building.
She then co-wrote a critical research paper that was showing how large language models specifically were leading to certain types of harmful outcomes.
And in an attempt to stop this research from being published, Google ended up firing Gebru, and then fired the team's other co-lead, Margaret Mitchell. And so they control and crush the research that is inconvenient to the empire's agenda.
These were demands for information: emails, text messages. And this was from one of the big AI companies.
As part of what appears to be a campaign of intimidation, but also what appeared to be a campaign of fishing for more information, to map out the network of critics further. But this was a man who runs a small watchdog nonprofit, and they had been doing a lot of work during that time to try and ask questions about OpenAI's attempt to convert from a nonprofit to a for-profit. Ultimately, OpenAI was successful in that conversion, but during the period when it was sort of existential for OpenAI to complete this conversion, there were a lot of civil society groups and watchdog groups, like The Midas Project, who were trying to prevent the process from happening in the dead of night.
They were trying to get more transparency; they were trying to have more public debate about this, because it's unprecedented. And it was then that there was a knock on his door and he was served papers. What did the papers say? The papers asked him to produce every single piece of communication he had ever had that might have involved Musk. So this was the strange paranoia OpenAI had, that Musk was somehow funding these people to block the conversion. None of them were actually funded by Musk, so in this particular case, to their request he simply answered, you know, "I don't have any documents, because this doesn't exist."
That's the point of empire. You're saying that one of the factors of an empire
was labor exploitation. Labor exploitation.
The third one: controlling knowledge production.
And one of the other things that's really important to understand about the AI empires in particular is that empires always have this narrative that they tell the public: we're the good empire.
We have to be an empire in the first place, because there are also bad empires in the world; and if you allow us to take all the resources and use all the labor, then we promise we will bring you progress and modernity for everyone. We will bring you to this utopic state, akin to an AI heaven; but if the evil empire gets there first, we will all descend into a hell. And the evil empire being, in this case? In this case, most often it's China, but actually, in the early days, OpenAI evoked Google as the evil empire. So all of their decisions were about: we need to do it first, because otherwise Google, this evil corporation that's driven by profit, will get there first, not us, a benevolent non-profit.
Like this is a critical contest of who wins.
Do you think the people building these AI companies believe that the outcome is going to be all good? That it's going to serve everyone, it's going to be the age of abundance, everything's going to go well? Which I think they believe. So what's so funny is, such a core part of the mythology that they create around the AI industry includes the belief that it could go very badly. It goes hand in hand.
“Like they need that part of the myth in order to then say, and that's why we need to be in control of the technology, because that's the only way that it's going to go really, really well.”
And Altman has said publicly, you know: worst case, lights out for everyone; but best case, we cure cancer, we solve climate change, and there's abundance.
And Dario Amodei, same kind of rhetoric: worst case, catastrophic or existential harm for humanity; best case, mass human flourishing.
So, this is like two sides of the same coin: they have to use both of these narratives in order to continue justifying an extremely anti-democratic approach to AI development, where there should not be broad participation in developing this technology; they must be the ones controlling it at every step of the way. Sam Altman did a tweet saying, "There are some books coming out about OpenAI and me. We only participated in two of them: one by Keach Hagey, focused on me, and one by Ashlee Vance on OpenAI."
He went on to say, "No book will get everything right, especially when some people are so intent on twisting things, but these two authors are trying to." You quote-tweeted that tweet from Sam Altman, and you said, "The unnamed book Empire of AI is mine."
Do you believe that tweet from Sam Altman was in reference to your book?
100%, because there are only three books coming out about him. And he knew that your book was coming out. He knew my book was coming out, because I had contacted OpenAI from the very beginning of my process and said, "I'm working on a book now; will you participate in it?" And actually, initially, they said yes, even despite my history with OpenAI: I profiled the company for MIT Technology Review. I embedded within the office for three days in 2019. My profile comes out in 2020. The leadership were very unhappy.
And in my book, I actually quote an email that I received, which Sam Altman sent to the company about my profile, saying, "Yeah, this is not great." And from then on, the company's stance toward me was: we are not going to participate in anything that you do; we are not going to respond to any questions that you send. And this was, you know, something that they explicitly articulated; it wasn't me inferring. So I had a colleague at MIT Technology Review who also covered AI, and at one point OpenAI sent him this press release, like, "We'd love for you to cover this story."
And he was like, "I'm really busy. Will you send it to Karen?" And they were like, "Oh, no. We have a history, you understand." And so for three years, they refused to talk to me. But then I ended up at the Wall Street Journal, where they felt a bit compelled, because it was the Journal, to reopen the lines of communication. And so I started having, you know, more of a dialogue with them. Every time I wrote a piece, I would always send them my requests for comment, and I would always ask them, "Will you sit for interviews?"
We did get to a more productive relationship.
And I told them right away, "I'm working on this book. I want to continue this productive conversation where I make sure I reflect Open AI's perspective in the book."
“And so they were like, "We can arrange interviews for you. You can come back to the office. We'll set up some conversations."”
And then as we were going back and forth on this, the board fires Altman. And that's when things started going kind of south, because the company started becoming very sensitive to scrutiny. And so then they started kicking the can down the road. And I kept saying, "Hey, when are we rescheduling this? What's going on?" And then I get an email saying, "We are not going to participate at all. You are not coming to the office. You're not doing interviews." And I had actually already booked my tickets; I was already going to fly to San Francisco to have the interviews.
And so then I told them, I was like, "That's fine. I will still engage in the process. I'll give you extensive requests for comment as I go through my reporting. I'll keep you updated on all the things that I'm finding so that you can choose to still comment." I gave them 40 pages of requests for comment, and I gave them a month to respond to all of that. So when the tweet came out was while we were doing all this back and forth, trying to...
And that's when Altman tweeted this. And they never responded to a single one of the 40 pages.
“Sam Altman does a lot of interviews. Yeah. You know, he's doing a lot of interviews all the time. He's done every podcast. I've seen him on everything from Tucker Carlson to, I think, Theo Von and Joe Rogan.”
To podcasts all over the world. I wonder why he won't do mine. Well, maybe... I don't know why. I think I'm fair with everyone. I just ask questions I genuinely care about. I don't come in with huge preconceptions; I'm ready to meet people for the first time. But I've heard through the grapevine that he doesn't want to do mine. I mean, going back to what you were saying earlier about the way that OpenAI and these companies control research, you asked, "Do they also do this with journalists?"
I mean, yes, the answer is yes. And apparently they also do it with anyone who has, you know, a broad mass communications platform.
It's not just about the conversation that you're going to have with them. It's about who you also choose to platform. And there's this huge problem in technology journalism, where companies know that a really big carrot that they can give to technology journalists is access. And they will withhold that access at the drop of a hat if they catch wind that you're speaking to someone that they didn't want you to speak to. This is so true, and I don't think the average person really, truly understands this.
So this kind of sounds like theory as you say it, but I'm not going to name names here, because I don't think it's important: a particular person in AI whose team has basically dangled the carrot of them coming here for like 18 months. And I'm like, you don't have to dangle the carrot. I'm going to speak to whoever I want to, regardless of the carrot. And when this person comes, if they want to come, I'll give them a fair shot. I'll ask them genuinely curious questions about what they're doing and their incentives. I won't gotcha them.
I don't have a history of ever gotcha-ing anybody. Even if I have a different opinion, I'll ask the question. But they dangle carrots, and if you think about what their strategy is, what I don't think those people understand... their thinking is: if we just dangle it for long enough, then they will
“perform in the way that we want them to, and they'll be pleasant about us. They won't be critical. They won't give a platform to our critics. I think a lot of their game is just to dangle the carrot forever.”
That's like the optimal outcome. If we just dangle it, if we just tell them, "Yeah, you know, we're trying to look at the schedule"... it just doesn't work. I think in the modern world, you just have to go there and give your opinion and allow the clash of ideas in the public forum. Let the viewers decide for themselves what they think. But this is, yeah, this is such a huge part of their machinery: the way that they use these tactics to massage the public image of these companies and make sure that information that they don't want out, and even opinions that they don't want out there, don't get out there.
And so this is, you know, I feel very lucky now that OpenAI shut the door early on me. At the time, I didn't feel lucky. I felt like I'd screwed myself over.
I was like, should I have been nicer to them in the profile so that I could m...
And in that moment, I was relatively junior in my career. I was like, did I misunderstand what journalism is about? Like, should I have actually been playing the access game?
But it was too late. The door was shut to me. And so I had to build my career understanding that the front door was never going to be open.
And that actually really strengthened my own ability to just tell it like it is. Yeah. And just report what I see are the facts being presented to me, irrespective of whether the company likes it or not. And most of the time the company really does not like it. But I can continue to do the work. They don't need to open the front door for me. I was still able to do more than 300 interviews.
“So Sam Altman gets kicked off the OpenAI executive team. Did you find out why that happened?”
Yeah. There's a scene-by-scene recounting.
From who? I can't remember the exact number of sources, so I don't want to misquote myself, but it was around six or seven people who were directly involved or had spoken to people directly involved in the decision-making process.
“Ilya sees these serious concerns about the way that Altman's behavior is leading to bad research outcomes and poor decision-making at the company.”
He then approaches a board member, Helen Toner. He kind of does a bit of a sounding-board thing with Helen, because Ilya is freaking out. He's been sitting on these concerns for a while, and he's like, if I tell this to someone, this could also be really bad for me if Altman finds out.
And so he asks for a meeting with Toner and in that first meeting, he's like.
“Like he barely says a thing. He's just like dancing around trying to figure out, hey, is this someone that I can maybe trust to divulge more information?”
What were Toner's role and responsibilities at OpenAI? She was a board member, and specifically an independent board member. At OpenAI, when it was a nonprofit, the board was split between people who had a financial stake in the company and people who were fully independent, and this was meant to be a structure that would balance the decision-making, so that it would be in the benefit of the public rather than in the benefit of the for-profit entity that OpenAI then created. And so Ilya was approaching Toner as an independent board member to try and see whether or not she was potentially seeing or hearing the same things that he was about the effect that Altman was having on the company.
That then sets off a series of conversations, first between Ilya and Helen, and then between Mira Murati and some of the board members. Mira Murati was at that point the chief technology officer of OpenAI. These two senior leaders, essentially through these conversations and through documentation that they're pulling together, like emails, Slack messages, and so forth, conveyed to the three independent board members: we are very concerned about Altman's leadership. He is creating too much instability at the company, and he is the root of the problem.
What they were trying to say to these independent board members was: the problem will not be fixed unless Altman is removed, because of the way that he's pitting teams against each other and creating this environment where people are unable to trust each other anymore, and they're competing rather than collaborating on what's supposed to be this really, really important technology. That's quite a vague term that could mean lots of things; instability could mean pushing people to work harder. What do you mean by instability, in as specific terms as you can possibly give?
When ChatGPT came out into the world, OpenAI was wholly unprepared. They didn't think that they were launching a gangbusters product. They thought they were releasing a research preview that would help them get the data flywheel going, collect a bunch of data from users that would then inform
what they thought would be the gangbusters product, which was a chatbot using ...
They were also trying to hire faster than any company in history, to get more personnel in, and they were then sometimes hiring people that they were like, actually, we made a mistake, we shouldn't have hired you.
“So they were firing people left and right and people were just disappearing off of slack and that's how their colleagues would learn that they were no longer at the company.”
So it was, yes, like many fast-growing companies, a very chaotic environment, and a particularly chaotic one because it was extra fast; they had to accelerate more than any other startup.
But on top of that, Mira Murati and Ilya Sutskever felt that Altman was making it worse. He was not actually effectively ameliorating the circumstances of the chaos; he was actually sowing more chaos, getting these teams to be more divided. It's important to understand that the executives and the independent board members are all operating under this idea that they're building AGI, and that AGI could either be devastating or utopic for humanity. So it's not "yes, it's like any other company"; no, it's not like any other company. In their view, you cannot have this degree of chaos as the pressure cooker for creating a technology that, in their conception, could make or break the world.
But that is basically what the independent board members also begin to reflect on. They have these conversations amongst themselves where they're like,
well, based on what we're hearing about Altman's behavior, if this was Instacart, would it warrant firing him? And they concluded, maybe not.
“But this is not Instacart, and that's why they were like, well, crap, maybe this actually does rise to the bar where we should consider”
replacing him, because we are ultimately building a technology that we think could have transformative impacts, either in the positive or negative direction. And so that is what happens. These two executives, and then the independent board members, who were hearing other feedback as well from their connections within the company and with other people in the industry.
At one point, Adam D'Angelo, who is one of the independent board members and the CEO of Quora, which is, you know, a tech startup in the Valley,
“he is at a party in San Francisco and he starts to hear some of these rumors that there's something weird about the way that OpenAI has structured”
its OpenAI Startup Fund, which was this fund that the company had created to start investing in other startups. And he realizes they've never really seen documentation about how the startup fund had been set up. Finally, they get the documents from Altman, and it turns out that the OpenAI Startup Fund is not OpenAI's startup fund; it's Altman's startup fund. And this was one of several experiences that independent board members were also having, where they're like: there's something not right about the fact that there continuously are inconsistencies between the way that Altman is portraying
what is being done versus what is actually being done. And so when these two executives approach the independent board members, they're like, okay, this lines up with the experiences that we've also been having. And at that point they have this series of very intense discussions, where they're meeting almost every day, talking about: should we actually, really consider removing Altman? And in the end they conclude, yes, we should, and if we're going to do it, we need to do it quickly.
They were concerned that the moment Altman found out, his persuasive abilities would make it impossible to do. And so they end up firing Altman without telling anyone. You know, they don't talk to any stakeholders to get them on the same page; Microsoft gets a call right before they execute the action saying, we're going to fire Altman. The lead investor in OpenAI at the time? Yes, one of the only investors in OpenAI at the time. And that is what then sets off the whole thing, because every single person affected by this decision is now extremely angry that they were not involved.
That is what then creates this campaign to bring Altman back and then Altman ...
This company that I've just invested in is growing like crazy.
“I want to be the one to tell you about it because I think it's going to create such a huge productivity advantage for you.”
Wispr Flow is an app that you can get on your computer and on your phone, on all your devices, and it allows you to speak to your technology. So instead of writing an email, I click one button on my phone and I can just speak the email into existence, and it uses AI to clean up what I was saying. And when I'm done, I just hit this one button here and the whole email is written for me. And it's saving me so much time in a day, because Wispr Flow learns how I write. So on WhatsApp it knows I am a little bit more casual, on email a little bit more professional.
And also, there's this really interesting thing they've just done: I can create little phrases to automatically do the work for me. I can just say "Jack's LinkedIn" and it copies Jack's LinkedIn profile for me, because it knows who Jack is in my life. This is saving me a huge amount of time. This company is growing like absolutely crazy, and this is why I invested in the business and why they're now a sponsor of this show. And Wispr Flow is frankly becoming the worst-kept secret in business productivity and entrepreneurship.
Check it out now at Wispr Flow: that's W-I-S-P-R-F-L-O-W dot AI. It will be a game changer. Make sure you keep what I'm about to say to yourself. I'm inviting 10,000 of you to come even deeper into the Diary of a CEO world. Welcome to my inner circle. This is a brand-new private community that I'm launching to the world.
We have so many incredible things that happen that you are never shown.
We have the briefs that are on my pad when I'm recording the conversation. We have clips we've never released. We have behind the scenes conversations with the guests and also the episodes that we've never ever released. And so much more. In the circle you'll have direct access to me, you can tell us what you want this show to be.
who you want us to interview, and the types of conversations you would love us to have. But remember, for now we're only inviting the first 10,000 people that join before it closes.
“So if you want to join our private close community head to the link in the description below or go to d-o-a-c-circle.com.”
I will speak to you then. How does the CEO of a major company get fired by the board? Because, board members... there's a quote in your book on page 357 where you say, about Ilya: "I don't think Sam is the guy who should have the finger on the button for AGI." Now, I ask myself this question.
You know, I work with lots of people here. We have 150 people that work in this business. And those people know me best. Yeah. They see me on camera.
They see me off camera.
“But if they said that we don't think Stephen is the right person to host the diary for example,”
it would take a lot for them to say that. They must have seen some shit off camera to then go, "We don't think he's the right person to be on camera," or whatever the reason. In the case of AGI, which is much more consequential than a podcast that is filmed in my kitchen,
it almost sends a chill down one's body to think that the co-founder of a business has gone to the board and said, this isn't the guy to lead this. It wasn't just Ilya; Mira Murati then also said, "I don't think Altman is the right guy." And then they both left later.
So then Altman comes back, and lo and behold, Ilya never comes back.
So his concerns, that Altman finding out would be bad for him, manifested. He ended up not coming back, and Mira Murati then left shortly thereafter. Quite a lot of these people leave OpenAI, don't they? They do. So consider one of the origin stories of OpenAI: this dinner that happened at the Rosewood Hotel, which is a very swanky hotel right in the heart of Silicon Valley,
one of Elon Musk's favorites whenever he was coming up from LA to the Bay Area. And there was this dinner there where Altman was intending to recruit the OG team that would start OpenAI. So he's telling everyone, you might have a chance to meet Musk, because Musk is going to come to this dinner. He cold-emails Ilya and gets Ilya to come, and Ilya specifically wants to come because he wants to meet Musk. And he also emails all these other people, including Greg Brockman and Dario Amodei, and almost all of them,
not every one of them, but almost all of them, end up working at OpenAI. And almost all of them end up leaving, specifically after they clash with Altman. And Ilya, he left and launched a company called Safe Superintelligence. Which is, I mean, an indictment if I've ever heard one. Do you know what I mean?
If someone, like, co-founded this podcast with me and then they left and started a podcast called Safe Podcasting,
I'd take that as a slight.
I'd have people knocking on their door. One of the things that is happening here is: it is not a coincidence that every single tech billionaire has their own AI company. They want to create AI in their own image.
“And that's why they keep not getting along.”
And in fact, it's not just that they don't get along; they end up hating each other after working together, and then splinter off into their own organizations. So after Musk leaves, he starts xAI. After Dario leaves, he starts Anthropic. After Ilya leaves,
he starts Safe Superintelligence. After Mira leaves, she starts Thinking Machines Lab. They want to have control over their own vision of this technology, and the best way that they have derived, from their experiences of trying to put their vision into the arena, is creating a competitor
And then competing with OpenAI and with all the other companies out there.
“Do you think some of these AI CEOs realize that they are quite literally summoning the demon as Elon said 10 years ago?”
But they don't really care, because being the person that summoned the demon makes you consequential and powerful and historical, even if the outcome is potentially horrific.
Even if there's like a 20% chance of it being horrific. I remember, I think it was Dario, he's the one that said there's somewhere between a 10% and 25% chance of things going catastrophically wrong on the scale of human civilization. 25% is a one-in-four chance. If you put a bullet in a four-chamber revolver and said, Stephen,
the upside is you could become a multi-gazillionaire and be remembered forever, and the downside is that... there is no chance that I would take that bet, with a 25% potential chance of things going catastrophically wrong. So I have a very long answer to this, because "do they know they're summoning the demon" really depends on what we define as summoning the demon. And in this particular case, to go back to what we were saying before, there is a mythology that the AI industry uses where summoning the demon is an integral part of convincing everyone that they can therefore be the only ones developing this technology.
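As an aside, the one-in-four arithmetic in this exchange can be sanity-checked with a short simulation (a purely illustrative sketch; the function names and trial count here are invented for this example, not anything from the conversation):

```python
import random

def spin_once(chambers: int = 4, loaded: int = 1) -> bool:
    """Simulate one spin of a revolver with `loaded` bullets among `chambers` chambers.
    Returns True if a loaded chamber comes up (the catastrophic outcome)."""
    return random.randrange(chambers) < loaded

def estimate_risk(trials: int = 100_000) -> float:
    """Estimate the probability of the bad outcome over many independent trials."""
    hits = sum(spin_once() for _ in range(trials))
    return hits / trials

random.seed(0)  # fixed seed so the estimate is reproducible
print(f"Estimated catastrophic-outcome rate: {estimate_risk():.3f}")  # roughly 0.25
```

With one loaded chamber out of four, the empirical rate converges on 0.25, which is the "one in four" Steven describes; the 10% end of the quoted range would correspond to one loaded chamber in ten.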
I got it. So on one hand you've got: if we don't do it, someone else will, and that's terrible. Yeah. But if we let anyone else do it other than me, then we're fucked as well. Exactly.
“So that means that I have to do it and you have to give me money and so.”
Exactly. So when they're saying these things, we should understand it not as a genuine prediction based on what they're seeing, because first of all, we don't predict the future, we make it. We should understand this as an act of speech to persuade other people into believing that they should cede more power and more resources to these individuals. And so, do they know that they're summoning the demon? I mean, they are purposely trying to create this
feeling within the public that they are, because it is a crucial part of their power.
But if we were to define it as: do they realize that the things that they're doing are already having really harmful impacts all around the world, on vulnerable people, vulnerable communities, vulnerable countries? That's where I'm like, maybe yes, maybe no, and they don't really care. Because in their frame of mind... I sometimes use the analogy that the AI world is like Dune. Dune, for anyone that doesn't know, is
a science fiction epic written by Frank Herbert, set in this intergalactic era where there are all these houses fighting each other for spice. So it's a callback to colonialism and empire. And they are all trying to control the spice. But one of the features of this story is that there are these myths that are seeded on the different planets,
basically religious myths about the coming of a Messiah, that are used as ways to control the people.
And Paul Atreides, when he arrives at the planet Arrakis with the intention of fighting against the empire and avenging his father's death, steps into a myth that has been seeded on this planet, one that says that one day a Messiah will come and save the planet.
He steps into the role of the Messiah and leans into this idea in order to be...
He knows that it's a myth in the beginning, but because he lives and breathes and embodies it,
it kind of starts to blur in his mind whether this is really a myth or whether he's really the Messiah.
“And this is what I think happens in the AI world.”
On one hand, there are all these executives that actively engage in myth-making because, you know, I have all these internal documents that I write about in the book, where they're very keenly aware of how to bring the public along with them: by showing them dazzling demonstrations of the technology, by crafting a mission that will sound really good and make people give more leniency to their companies. So they know they're doing the myth-making. But also, I think many of them lose themselves in the myth, because they have to live and breathe and embody it day in and day out.
And so when, you know, Dario says he thinks there's a 10 to 25 percent chance the future could be catastrophic, or whatever the probability is,
he is actively engaging in the myth-making, but also he's losing himself in the myth. I think if you were to ask him, "Do you genuinely believe that?" he would be like, "Yes, I genuinely believe that," because there's been a blurring of when he's saying something just to say something versus when he actually believes what he's required to believe in order to continue doing the things that he's doing. It's the whole psychology of cognitive dissonance, right, where the brain struggles to hold two conflicting worldviews at the same time.
So it's incentivized, or it endeavors, to dismiss one. So if you, you know, wanted to be a healthy person, but you're also a smoker, and I pointed out smoking's bad for you, the first words out of your mouth are going to be "yes, but." Yeah, it helps me with stress. Yes, but I only do it when... I kind of see that at the moment, because these companies have to raise extortionate, like, huge amounts of money to fund their AI research and their building-out of all of these data centers.
So when they're out in the public, they're always fundraising all of these major companies are fundraising all the time at the moment.
So you can't be fundraising and saying: I'm going to destroy your children's future, there's potentially a 25 percent chance that your children aren't going to have a great life. Which might be the truth. I mean, that is actually what they say. This is what, famously, Dario Amodei does. He says that. Although he's not doing that as much anymore. Yes, and it's because, you know, it goes back to how each of them distinguishes themselves a little bit with the brand that they need to project.
Do you think any of them have a stronger moral compass than others? Because I think Dario often gets the credit for having more of a, you know, a backbone, and for being more conscious of implications. He does get a lot of credit for that. He's from Anthropic, the company behind Claude, for anyone that doesn't know. I don't think the answer to that question truly matters, because to me, even if you were to swap all the CEOs for someone that people would say is better at running these companies,
it doesn't fix the problem that I identify in the book, which is that there is a system of power that has been constructed where these companies, and the people running them, get to make decisions that affect billions of people's lives around the world, and those billions of people do not get any say in how it goes. Those people, they can go to the polls, right? So if the public are sufficiently educated, they can go to the polls and pick a leader
that says they're going to legislate or pass laws. Yes, but at the speed and pace at which these companies operate, and at their sheer scale and size, they're able to spend extraordinary amounts of money, hundreds of millions in these upcoming midterms, to try and kill every possible piece of legislation that gets in their way and craft legislation that would codify their advantage.
“And so to me, I think sometimes as a society, we obsess a little bit with.”
are these leaders good or bad people? And to me, the bigger question is: is the governance structure that we've created a sound one that allows broad participation, or an anti-democratic one that has consolidated this decision-making power in the hands of the few? Because no person is perfect. I don't care who is at the top of these companies; they're not going to have the ability to make decisions on behalf of so many people around the world, who live and talk and have cultures and histories that are fundamentally different from theirs, without things going wrong.
So that is why throughout history, we've moved from empires to democracy.
I'm going to try and take on that point of view. So this is me playing devil's advocate. Okay.
But Karen, if the US doesn't continue to accelerate its research with AI, at some point China's model is going to become so smart and intelligent that we're basically going to have to rent it off them, and, you know, they'll get the scientific discoveries, they'll discover the new era of autonomous weapons, and we will be their backyard.
“And like logically, that argument does appear to be pretty true.”
No, it's not. If we scale up... if we just imagine any rate of change with this intelligence, at some point we're going to come to a weapon that could theoretically disable all of the United States' electricity, their weapons systems. It would know exactly how to disable the United States from a cyber perspective, because it would be that smart.
All you've got to imagine is any rate of improvement over any period, or sort of a long period of time. So this is a theory that might be true. And if it's true...
I mean, yeah, any theory might be true. But, you know, again, going to this point of: even if it's a small percentage, it's worth paying attention to. On the other side of the coin:
“This is a theory that people talk about. It could be the case that the most intelligent civilization is going to be the superior civilization.”
Logically, that's a pretty sound thing to say. So there are a lot of fundamentals in this argument that would need to be true in order for this to be a viable argument.
And let's knock them down one by one. So the first one is that these systems are intelligent.
And that just scaling them is going to bring us more intelligence. So far, so true? No, it's actually not, because first of all, again, we don't actually know if these systems are... "intelligence" is almost not the right analogy. It's sort of like: a calculator can do math problems faster than a human; does that make it intelligent? It has a narrow intelligence, because it's solving a narrow problem, like one plus one equals two. And these systems, they actually are also quite narrowly intelligent,
in the sense that even though these companies say they're everything machines that can do anything for anyone, they actually can only do something for some people. This is like the jagged frontier of these AI models: some of the capabilities are quite good, other capabilities are not that good. And the reason that happens is because the companies can only focus on advancing certain types of capabilities; they can't literally focus on advancing all types of capabilities. They have to set their minds to advancing a certain capability by gathering the data that is needed for that capability, by getting a bunch of human contractors to annotate and train the model to do that exact thing.
And so, scaling these models is actually a perpendicular question to: are we actually getting more cyber capability specifically, and more military capabilities specifically? I would argue that most of the top people in AI believe that the intelligence is going to continue to scale for some time. A lot of them do, like Geoffrey Hinton does. And again, it goes back to his hypothesis about how human intelligence works and what the appropriate model of the brain is. His hypothesis throughout his career has been that the brain is a statistical engine.
But that's his hypothesis, and it is not universally agreed upon, especially among people who are not in the AI world. When you talk with neuroscientists and psychologists, people who actually study human intelligence and the human brain, that is where you start to get a lot of debate and disagreement about this particular view that Hinton has. So this is one of the things: AI is already being used in the military, and has been for a long time, but specifically accelerating large language models isn't the only path to military capabilities.
“Because we have to choose to specifically pick military capabilities to accelerate, not just scale a general intelligence. You know what I'm saying?”
They create this myth that they are actually pushing the frontier of all of the capabilities of the model, but that's not what's actually happening internally. And I had hundreds of pages of documents on how they were specifically training models, what capabilities they wanted to advance. And you know how they pick them? It's based on which industries would be able to pay them the most money for their services.
They pick finance, law, medicine, healthcare, commerce.
I think I have jagged intelligence. I wasn't going to say it. But I think I know a little bit about a lot... no, I know a lot about a little bit.
“Yeah, but you also have the capability to learn and apply your knowledge by yourself and you also have the ability to choose what you're going to learn and acquire by yourself.”
It's not easy, and it takes a lot more time than these models, but it seems to take less compute. And you can learn how to drive in one place and then immediately know how to drive in another place. These models cannot do that. Every time a self-driving car is shifted to another location,
it has to completely retrain on that location. Like all the self-driving cars. I mean, we're sitting in Austin right now and there's all these self-driving cars driving through Austin.
When one of them learns, they all learn. Which is, well, because it's an operating system that has an AI model as part of it, and you're training the AI model and then you deploy the AI model across all the self-driving cars. Which is a big advantage, because if one Optimus robot learns one thing in one factory, they all learn it. And imagine for a minute if we all learned what all the other humans learned. That would give us such an unbelievable competitive advantage. I mean, one of the ways we did that is through communication.
Or we could not, because they could all be learning the wrong thing, which has also happened again and again with these technologies: they all learn the wrong thing and they all have the same failure mode. I think sometimes we hold AI models to a higher standard than we hold humans to, in a weird way. Because I would hear, on stage, when we're in Austin at the moment, I'd hear people go, "Ah, but you know, AI models, they hallucinate sometimes." I'm like, "Have you met a human? I hallucinate all the time. I can barely spell or do math."
Yes, but it's once again like using this analogy that was specifically picked in the early days of the field as a way to market these technologies.
Like, we're repeatedly using the intelligence analogy and relating these machines to human intelligence as a way to try and gauge whether or not it is good or worthy or capable in society. The output is the thing that really matters, the most consequential thing, which is like: okay, it might have a different brain and a different system, but does it arrive at the same capability?
“Like, is it able to do surgery on someone's brain? Is it able to drive a car? Like, my car drives itself in Los Angeles.”
It can drive for many, many hours. And here in Austin, I just saw ones the other day where they've removed the steering wheel and the pedals; they're the Cybercabs. So it doesn't really matter if it's using a different system. If it's navigating through the world as a car and it has a better safety record than human beings, then, as far as I'm concerned, intelligence or not... Yes, but that was not the original argument that you made, which was that these systems are just generally going to become more intelligent across different things. That's a prediction that you're making, right?
Like, that, and this is a prediction that all the AI... Ilya's making, Dario's making, Altman's making, Zuckerberg's making, Elon's making, Demis is making. And you know what the common feature of all of them is? They profit enormously off of this myth. Elon has recently spearheaded the construction of Colossus, a massive supercomputer in Memphis housing 100,000 GPUs, specifically to scale up their Grok AI models faster than their competitors. It appears that they've all converged around this idea that you can brute-force your way to greater, more generalized intelligence.
They've converged around the idea that you can brute-force your way into models that they can sell to people for automating certain tasks that are financially lucrative. And I heard Elon say that if you're a surgeon now, there's just no point. He was like, don't train to be a surgeon. He says, in a couple of years' time, Optimus and AI generally are going to be better than any surgeon that's ever lived. You don't think that's true? Well, you know, I'm pretty sure it was Hinton that famously, slash infamously, said there would be no need for radiologists anymore.
There would be no need for radiologists anymore. And he set a deadline that we've already passed; I don't remember how many years. Radiology is doing great as a profession. Do you think it will be in five years?
“Okay, so this once again goes back to this question of like, why do we build technology and why should we specifically be building AI?”
Okay, and for me, the whole project of technology development and advancement is not to advance technology for technology's sake; it's to help people. And there's been lots of research showing that actually the best outcome for people in a healthcare setting is for the radiologist to have the AI model in their hands.
For the human expert to use the AI model as a tool, as an input into their judgment.
And it is that combination that leads to the most accurate and early diagnoses of certain types of cancer that then help improve the prognosis of the patient.
“Do you believe that in the coming years, AI will pretty much be doing the radiology job all by itself?”
No. I don't think so. How come? Because of the way the technology works. Because these are statistical... I mean, currently, the way that AI models are primarily developed, they're statistical engines. You have what's called a neural network, which is a piece of software that has a bunch of densely connected nodes. What's a parameter? Is this what they call parameters? Yeah, pretty much. And you're just pumping a bunch of data into it, and then it's analyzing the data, finding all these correlations in the data, finding all these patterns.
And then it's through those patterns that the machine is then able to act autonomously, right?
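The "statistical engine" idea described above can be sketched in a few lines of code. This is a toy illustration only, not anything from an actual AI lab: a single artificial neuron whose parameters are nudged by gradient descent until they capture a correlation in the data. No rule in the program states the answer; the behavior emerges from the fitted parameters.

```python
import math
import random

random.seed(0)

# Synthetic data: the label depends only on x1 (x2 is pure noise).
data = [((x1, x2), 1.0 if x1 > 0.5 else 0.0)
        for x1, x2 in ((random.random(), random.random()) for _ in range(400))]

w, b, lr = [0.0, 0.0], 0.0, 0.5   # w holds the "parameters" mentioned above

def predict(x):
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1.0 / (1.0 + math.exp(-z))   # a probability, not a hard rule

for _ in range(200):                     # stochastic gradient descent on log loss
    for x, y in data:
        g = predict(x) - y               # gradient of the loss w.r.t. z
        w[0] -= lr * g * x[0]
        w[1] -= lr * g * x[1]
        b -= lr * g

# The neuron has "learned" the pattern statistically: the weight on the
# informative feature x1 dominates the weight on the noise feature x2.
acc = sum((predict(x) > 0.5) == (y == 1.0) for x, y in data) / len(data)
print(f"w1={w[0]:.2f}  w2={w[1]:.2f}  accuracy={acc:.2f}")
```

The same mechanism explains the "jagged frontier": the model only gets good at whatever pattern its training data happens to contain.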
And so the way that they train these self-driving cars: they're recording all this footage, and then they have tens of thousands or hundreds of thousands of human contractors that literally draw around every single vehicle in the footage, every single pedestrian, every single traffic light, every single lane marking, and label it exactly as such. So that it's then fed into an AI model that can identify all of these different components, and then it's connected to another piece of software that is not AI, that's saying: okay, if the AI model recognizes a pedestrian, we do not run over the pedestrian.
If the AI model recognizes a red traffic light, we stop. And so the thing about statistical engines is that they're based on probabilities, not on deterministic logic. So these systems make errors all the time, and it's technically impossible to get them to stop making errors. But humans make errors way more than these systems, in this case. Like, the safety record, isn't it something like 10 times safer to be driven by a Tesla with autonomous driving than it is for a human to drive a normal car?
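A rough sketch of the two-layer design described above: a statistical perception model (stubbed out here as confidence scores) feeding a deterministic, non-AI rule layer. All names and thresholds are illustrative assumptions, not any real autonomy stack's API; the point is only that the rules can be exact while the perception underneath them stays probabilistic.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str          # e.g. "pedestrian", "red_light": categories learned from annotated footage
    confidence: float   # statistical output: a probability, never a certainty

def control_decision(detections: list[Detection], threshold: float = 0.8) -> str:
    """Deterministic logic layered on top of probabilistic perception."""
    for d in detections:
        if d.confidence >= threshold:
            if d.label == "pedestrian":
                return "brake"   # if the model sees a pedestrian, do not run it over
            if d.label == "red_light":
                return "stop"
    return "proceed"

print(control_decision([Detection("pedestrian", 0.95)]))  # brake
print(control_decision([Detection("red_light", 0.91)]))   # stop
# The failure mode described above: the statistical layer misses the
# pedestrian (low confidence), so the deterministic rule never fires.
print(control_decision([Detection("pedestrian", 0.40)]))  # proceed
```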
It depends on whether the Tesla was trained to specifically navigate the place that you're driving. If it's in Mumbai, or someplace in Vietnam, no, it would not be safer. I would much rather be driven by someone that has been driving in that place their whole life. I'm not arguing against the fact that in certain places, where the car has been explicitly trained to drive, it has a better safety record than the humans driving in that place.
“But you specifically asked if I think that most of the cars in the world, in the US, in the United States, because we're here...”
I don't actually think that it's like, imminently on the horizon. Ten years? No, I don't think so.
I sat with Dara from Uber, and he's pretty convinced that his 9 million drivers will be replaced by autonomous vehicles.
I mean, how long has Tesla been investing in self-driving cars? It's been more than ten years. And what percentage of cars right now are autonomous on US roads? I mean, part of it is that it's actually not just a technical problem. Part of it is also a social problem: do people even trust getting into these vehicles? Part of it's also a legal problem, which is, if the self-driving car kills someone, which has happened, who is responsible? So in the case in LA, it was both Tesla and the driver, because the driver dropped their phone.
They looked down, this was a couple of years ago, I believe, and they went to grab their phone and they hit someone. And so it went to court and they were both held responsible, both the driver and Tesla. In terms of Tesla, pretty much every car now comes with autonomy, for most people, I believe. Partial autonomy? Yeah, it's called Full Self-Driving at the moment, where it's like...
I mean, yes, it is called Full Self-Driving.
“Full Self-Driving, Supervised, where you kind of have to be looking in the right direction, but...”
Yes, so it's partial autonomy. And here in Austin, it's full autonomy, because there's no steering wheel on the new car, so you can't drive it anyway. But, you know, the Model Y is the undisputed best-selling car in the world across all brands. Well, I guess my point here is, these predictions where they say AI is going to completely change transportation and driving, it's going to completely change... lawyers aren't going to have jobs, accountants aren't going to have jobs.
Do you believe that they are true? Do you believe that there's going to be mass job displacement? Okay, so I do think that there is going to be huge impacts on employment.
We are already seeing those impacts.
It is not simply because the AI models are just automating those jobs away.
It is specifically because the models are improving in certain capabilities based on what the companies that are developing them choose to improve them on. And executives at other companies are then deciding to fire or lay off their workers because they think that AI can replace the worker, irrespective of whether that might be true. And there have been cases, like the Klarna CEO, who laid off a bunch of people thinking he would replace everyone with AI, and then it didn't actually work and they had to ask some people to come back.
I actually DMed him about this. If you're hearing this, it's because I've DMed Sebastian and he's fine with me sharing it. I did it because I've heard his name mentioned a lot, and so when we've talked about AI in the past and people mentioned Sebastian and Klarna as the example, I wanted to clarify with him what the truth was.
“He said, "It's great to hear from you. I think sometimes people struggle with two things can be true at the same time.”
I think it might be time to come back on your podcast." To your point, this is the media misinterpreting my tweet. We are doubling down on AI more than ever. Klarna is shrinking with almost 100 employees per month due to AI. We used to be 7,400 at the peak. A year ago, 5,500. Now we're 3,300. And by the end of summer (this was last year), we'll be 3,000 people. AI handles 70% of our customer service conversations at this moment.
This is because we have realized that with AI, the production cost of software comes down to almost zero. Just like manufacturing used to be all handcrafted, and then the machines came, code used to be all handcrafted up until a few years ago, and now it is machine produced.
And ultimately, we pay people more than ever for the unique handcrafted man-made stuff.
Klarna is a bank. People will want to connect to humans, not only machines. They want us to be personable, relatable, even flawed. So we need to make sure, while we are automating and replacing with AI, that in parallel we offer a super-available human experience.
“I'm really glad you read this, because I think it touches on some really important nuances in the impact that AI is going to have on employment.”
So I think there's often these binary narratives. It's like AI is going to come for every job, or people say AI is not actually working, and it's not actually coming for jobs.
And the reality is, it's coming for jobs. There are definitely jobs that are being automated away because of the capabilities of these models,
and there are also jobs that are being lost because executives are deciding to lay off workers even if the models don't match their capabilities, because it's good enough. They would rather have the good-enough model for way cheaper. Or they made a mistake with hiring, they bloated the team, and it's a convenient thing to say. There are many reasons, but clearly we're already seeing impacts on the job market. The US jobs report that came out earlier this year showed that there has been a decline in hiring.
“It's a slowdown in hiring, especially across white-collar professional industries.”
There was an Anthropic report in the news this week. The TLDR is, it matches kind of what you were saying. Anthropic looked at exactly how people were using their models, and at what people are saying. And they said that there's been a 40% reduction in entry-level jobs in particular. And then they made this graph, which has gone viral over the internet. The red shows where we are now in terms of capability, and based on how people are currently using the models, they extrapolated out that the blue part will be the disrupted parts.
That's what they say AI can do right now, but people don't realise it yet. So if you look at it, it's kind of all the stuff you'd expect. It's the physical, real-world human stuff, which robots maybe can do some day, like construction or agriculture, that is untouched. But office and admin, like, as you were saying, finance stuff, math. What is that? These are all the things that I just named, that they purposely pick: finance, math, law. They do focus a lot on assistant-type and managerial work.
But the other thing the Klarna CEO said was: people also want human experiences. So it's not actually just about the capabilities of the models. It's also about what people want. Some things they would turn to AI for, and some things they wouldn't, irrespective of whether or not AI is capable of doing it, because of a preference for human-to-human interaction. And so what we're seeing right now is, yeah, the thing that happens with every wave of automation, which is that a bunch of entry-level work gets automated away.
There are also new jobs created, but the jobs that are created fall into one of two categories.
There are people that get even higher-skilled jobs, like what he was saying: we pay people more for the handcrafted code now. And there are also the people who get way worse jobs.
And so there was this amazing article in New York Magazine that was talking about how a lot of people are getting laid off.
And then they end up working in data annotation, which is the labor that I've been referring to throughout this conversation, that companies need in order to teach their models the next thing that the companies are trying to automate. And so, like, a marketer gets laid off, and then they go and work for a data annotation firm to train the models on the very job they were just laid off from, which will then perpetuate more layoffs if that model develops that skill. And the article was talking about how this has become a huge catch-all for a lot of people that are struggling to find job opportunities right now, including award-winning directors in Hollywood that are actually secretly doing this data annotation work to put food on the table.
“And so when they talk about how there's going to be mass unemployment, and then there's going to be some new jobs created that we can't even imagine, I think a lot of these narratives rarely talk about, first of all, why are some jobs going away?”
It's not just because of the model capabilities; it's also because of executive choices, and because of the rhetoric they can use if they want to just downsize.
But the other thing that is rarely talked about is that a lot of the jobs that are created are way worse than the jobs that were there. And it breaks the career ladder. So the entry-level and mid-tier jobs get gouged out. It's higher-order jobs, and then way more lower-order jobs, that get created. And so how do people continue to progress in their careers? There are no more rungs on the ladder. I actually don't know the answer to this question, and I've been furiously trying to find a good answer to it, because, you know, everything is theory. And for my audience, I would say most of my audience don't run businesses.
A lot of them do, a lot of them aspire to, but most don't run businesses. So they're also in the land of theory. They're hearing lots of different things. Jack Dorsey does this tweet saying he's halved his headcount because of AI. They don't know what's true. They don't know the internal economics of Jack's company, and did he bloat the company during the pandemic and is he just using this as an excuse to make the share price spike seven points?
“Because his investors now think they're an AI company, whatever. It's hard to parse through. So eventually I go, okay, what am I doing?”
I have hundreds and hundreds of team members, probably 70 companies I've invested in, maybe five or six that I'm the lead shareholder in. What am I actually doing on a day-to-day basis right now? I also consider myself the head of recruitment. And in the last month in particular, I have met extremely capable candidates in terms of cultural alignment, hard work, those kinds of things. But I've had to take a great deal of pause, because when I run the experiment of, can I get an AI agent to do that exact same thing?
The answer is increasingly yes, especially in a world of AI agents.
And so what I'm curious about is: now you confront the decision where, in this short-term period, you could just choose the AI agent.
“And in the long-term period, there is no career ladder, so who are you promoting into these senior roles?”
Like, how do you resolve it for your own company? Yeah, it's a good question. There are kind of two ways I'm thinking about it. I think really deep expertise is very, very valuable, because if you're now the orchestrator of, potentially, AI agents, it's really about having a deep understanding of the right question to ask. And that's someone who has deep expertise in something. So I need my CFO, because if she's going to be orchestrating our team of agents that might be doing financial analysis or whatever else,
she needs to understand what to tell them to do in our company. A junior financial analyst can't do that. They need the years of experience that, you know, Claire has. On the other end, I need Cas. Cas is 25, Cas knows everything about AI agents. He's a young Japanese kid who's highly, highly curious; on the weekend he's building AI agents to solve problems in my life. I need those two kinds of thinking: highly proficient, agent-maxing young kids, or they don't necessarily need to be young, but really leaned-in, high-curiosity people.
That's kind of the formula I'm planning the business on: that, and deep expertise. Now, everything else outside of... there is another one I've thought of, another group, which is people with extremely great IRL people skills. Because we do meet people in real life. We greet you when you arrive here; when we go for lunch with big clients that we have, whether it's Apple or LinkedIn or whoever it might be, we, you know, we need to schmooze. And we have teams who are in person in the office, so we do a lot of stuff IRL, and increasingly we're building communities; even for this show, we're doing community events all around the world.
So, people that are good at that as well:
IRL, bringing people together in real life and organizing stuff. Those are the three groups of people that I think are irreplaceable right now.
“And if you were to take all of the roles that could be done by AI agents, and replace them with AI agents, do you think you would still have these three roles?”
Pools of people to hire and promote into the three critical things that you need in the long term?
If things carry on at the current rate of trajectory, one could assert that even those roles would experience pressure. People think of things either statically, linearly, or exponentially. If you imagine an exponential rate of improvement, which is kind of what I've seen, or even a 10% compounding rate of improvement, at some point... At some point, I think what remains is actually the IRL, irreplaceably human stuff, human to human. Our Maslow needs of being in person, like we are now, aren't going to change.
We need connection, humans get very sick when they don't have other human beings in their life and strong relationships. So, that stuff is going to matter a whole lot. I have this contrarian, weird take, but actually maybe this is the first technology that's going to deliver on the promise of making us human and connected. Because we're going to be rendered useless at everything else other than what humans are good at. Because all the other technology said, "Oh, we're going to make you more connected, connecting the world."
And they disconnected the world and made it the loneliest it's ever been. But maybe this is the one; it's so intelligent now that it doesn't need us to fuck around in spreadsheets anymore. Do you see that actually happening in real time right now, that it's making us more able to be in person, connected with one another, having deeper social community engagements? Yes. Yeah.
I'll give you some data points. Okay. Data point number one. The Financial Times did some research on social media usage. And what they saw is that 2022 was the peak, and it's plateaued ever since.
The generation that's plateaued the fastest, and is heading down, is the young generations. Yes. The boomers are still off to the races, right? On Facebook and stuff. And then you look at the way Gen Z are using social media: they're not posting as much.
They call it posting zero. They're scrolling sometimes, but they're in dark social environments like WhatsApp and Snapchat and iMessage. They're not performing to the world. They also value IRL experiences much more than any other generation. They're not getting smashed.
We're seeing every brand has a run club. Yeah. We're seeing run clubs exploding around the world. And we're seeing this real, almost innate realization that technology let us down at some fundamental level. Like, dating apps let us down; social networking kind of has let us down.
“I think maybe a bifurcation of society, where a lot of people are going to go, fuck this.”
I want to go back to what it is to be a human. Yeah. And I would imagine that in such a world, where intelligence is so sophisticated, we no longer need to sit at laptops. And I think screen time is going to continue to fall. I think you'll go into an office.
You're not going to see people sitting at laptops. You're going to see something completely different. And then, you know, we talk about robots, and Optimus robots. Elon says there'll be 10 billion Optimus robots. Elon has been wrong with timing before.
He's almost never been wrong on the big things completely.
It's just that his timing has a bad track record. So I think he's probably right. You know, I've got some people on the way from Boston Dynamics and these other big companies, like Scale AI, and they're actually bringing the robots here to show us, like, folding laundry, doing the dishes.
“I'm not saying that's what I would want in my home.”
But I think factory work is going to completely change. I think a lot of manual labor is going to completely change. And I think we're going to be forced to do what only we can do. Sebastian, who's the CEO of Klarna, has actually just called me. Hello, Sebastian, you alright?
I'm good, how are you? It's been a while. It has been a while since you were on the show. I was just saying we do need to get you back on. I just had a couple of simple questions because, you know, I do a lot of interviews.
Klarna's always mentioned, because I think the media has said that you doubled down on AI, then you reversed because it didn't work out.
So I know I spoke to you a while ago and we exchanged a couple of DMs about it, but that was almost a year ago now. So I just wanted to get an update on Klarna's business, AI agents, and all of that, if possible. First of all, we were early on in releasing AI to support our customer service. We had that initial benefit of more calls being dealt with by AI, which customers liked, because those calls or chat messages were much, much faster and higher quality.
Since then, that has actually expanded slightly. What we did, however, try to communicate as well is that we believed that in a world where AI is cheap and available, the value of human interaction will be regarded as higher. So the future of customer service VIP is a human. We have hence doubled down on providing more of that.
But at the same time, the efficiency gains within the company have continued.
I mean, we used to be about 6,000 people.
And now we are less than 3,000, and it's been two to three years since we stopped recruiting.
“And at the same time our revenue has doubled, right?”
So you can clearly see that AI has allowed us to do more with fewer people. But we have avoided layoffs and instead relied on natural attrition, when people move on to other jobs. I mean, from my perspective, we will continue to, you know, not really recruit much. We recruit a little bit here and there, but we expect that natural attrition of 10-15% per year to continue, and to become fewer.
I think the big breakthrough was really in November, December last year.
Where even the most skeptical engineers, who are very well renowned and appreciated, like the founder of Linux and people like that, basically said that coding has now been solved, and hence, you know, you don't need to code anymore. And that was kind of a common sentiment. So I think in coding, that's definitely engineering work that has seen a tremendous shift
in the last six months. So where do all these people go, Sebastian? I am optimistic. I mean, obviously people will have a lot of opinions about this topic, but I still believe that we are going to move towards a richer society. Now, in the short term, there could be more worry about what happens if people don't get a job, and so forth.
“But I think in the longer term, I am optimistic what it means for society and humanity.”
Thank you so much, Seb. I'll chat to you soon. Thank you for taking the time. I appreciate you, mate. Thanks. Bye-bye. I've spent the last decade building and investing in companies.
And so often the conversation around marketing budgets follows the exact same pattern. The budget gets approved, but then the results don't come back. And most of the time, the creative, the pitch, and the offer are fine.
The problem lies with the audience: ads reach people who will never buy or refer,
nor do they have the power to sign off on anything at all. And this is why so much budget gets wasted. LinkedIn Ads, a sponsor of this podcast, lets you reach them specifically: by job title, seniority, company size, industry,
the skills that they have, and much more. You're no longer hoping your ad reaches the right person; instead, you're defining exactly who sees it. And LinkedIn Ads drives the highest B2B return on ad spend across all major ad networks. Give them a try at LinkedIn.com/diri.
And if you spend $250 on your first campaign, you'll get a $250 credit for your next one, just by going to LinkedIn.com/diri. Keep this to yourself; terms and conditions apply. You know the little traditional SIM card that goes inside our phones?
It hasn't changed at all since it was invented in the 90s. You have this physical piece of plastic, which means you're locked into one carrier, one network, and the second you cross a border, that carrier can start charging you whatever they want. But there are alternatives.
And today's sponsor, Saily, is one of them. It's an eSIM app that gives you a safe and secure data connection in over 200 destinations. All of their eSIMs have built-in security, which is great if you're traveling for work and looking at confidential material.
I've been using Saily whenever I travel because the connection is always reliable.
And it saves me a ton in roaming fees. It also means I don't have to deal with all of the faff of sorting out a SIM everywhere I go.
“If you want to give it a try, download the Saily app from the App Store now”
and scan the QR code on screen. And if you want 15% off your first purchase, use my code D-O-A-C when you get to checkout. That's D-O-A-C for 15% off. Okay, back to you. Any thoughts?
Well, I actually had thoughts on something that you said before he called, which is, you were saying that the Gen Zers, there are these trends where they're actually disconnecting from technology, so they're becoming more in person. And then there's this other class of workers that are actually leaning into the technology,
and becoming more human because they're leaning into the technology. Because they're realizing that they should actually just be spending more time doing person-to-person interactions rather than, you know, typing away. And so they're no longer doing the typing and whatever. I really want to go back to this New York Magazine piece that just came out,
because what you're describing is true for a very specific category of people, which is often like the business owners and the leadership within companies that actually can make these decisions on how they spend their time and what they ultimately do with their time. But what the piece talks about is the working class.
Like, people who are not business owners, who are then having to experience being laid off and then working for the data annotation industry. Which is now one of the top jobs on LinkedIn, by the way. Yeah, so LinkedIn had a report that showed the top 10 jobs with the highest growth in the last year, and data annotation is on that list.
And for anyone who doesn't know what data annotation is. Yeah, so data annotation is the process of teaching these chat bots
or any AI system to do what they ultimately are able to do.
So the fact that ChatGPT can chat is because there were tens of thousands or hundreds of thousands of people literally typing into a large language model and showing it: this is how you're supposed to respond when a user types in a prompt like this. Before they did that work, ChatGPT didn't exist.
Like, it would just... you would prompt the model and the model would generate some text that was not in dialogue with the person. It would kind of generate something that was adjacently related.
“Is this what they call reinforcement learning, where you give it feedback like that?”
It's a part of the process of reinforcement learning.
So you do data annotation, which is literally showing lots of different examples
of things that you want the model to know. And then reinforcement learning is getting the model to train on those examples iteratively, in a way that then gives the model some of those capabilities. And what the New York Magazine piece highlighted is that many, many of the people that are getting laid off now, or are struggling to find work...
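The annotation-then-reinforcement pipeline described above can be caricatured with a toy example. This is not how a real large language model is trained; it is a minimal, made-up analogue in which human-written demonstrations are tallied, and a preference "reward" then shifts which response wins.

```python
from collections import defaultdict

# Step 1: data annotation -- humans write the response a chatbot *should* give.
annotations = [
    ("hello", "Hi! How can I help you today?"),
    ("hello", "Hi! How can I help you today?"),
    ("hello", "greetings human"),
]

# Step 2: supervised-learning analogue -- count which response
# follows each prompt most often in the annotated data.
counts = defaultdict(lambda: defaultdict(int))
for prompt, response in annotations:
    counts[prompt][response] += 1

# Step 3: reinforcement analogue -- an annotator preference acts as a
# reward that boosts one response over the others.
counts["hello"]["Hi! How can I help you today?"] += 5

def respond(prompt):
    options = counts[prompt]
    return max(options, key=options.get) if options else "(no idea)"

print(respond("hello"))    # the annotated-and-reinforced reply wins
print(respond("goodbye"))  # no annotations for this prompt: nothing to draw on
```

The second call is the point of the passage above: before annotators do the work for a given kind of prompt, the "model" simply has no capability there.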
And these are highly educated people. They're college graduates, PhD graduates, law degree graduates, doctors, and, again, award-winning directors that are struggling to find employment because the economy has been very much restructured by AI. They are then finding themselves serving this industry.
And the industry is designed in a way that is extremely inhumane.
“Because the companies that use these data annotation services,”
like these third-party providers that are data annotation firms,
an OpenAI, an xAI, a Google, they will hire these firms to find the workers to perform the data annotation tasks that they need. These third-party firms are incentivized to pit workers against each other, because they want this data annotation to happen at speed and as cheaply as possible, so that they can also compete with one another
in this middle layer to get the contract from the client. And so all of these workers that were interviewed for this New York Magazine story talk about how they actually no longer have an ability to be human. Because they are waiting at their laptop to be pinged on Slack for when a project is going to open up for data annotation, because they've tried job hunting.
They literally can't find anything else. This is the thing that's going to help them put food on the table for their kids. And there was this one woman who said, like, I have so much anxiety about when the project is going to come and when it's going to leave, that when the project came, it was right when my kid was coming home from school.
And I just started tasking furiously, because I don't know how long it's going to last, and I need to earn as much money as possible in this window of opportunity. So then when my kid came home and tried to talk to me, I screamed at my child for distracting me. And then she was like, I've become a monster.
And I am not even allowed to go to the bathroom or take care of my kids, let alone myself, because this industry that is absorbing more and more of the workers that are being laid off is mechanizing my life, atomizing my work, devaluing my expertise, and then harvesting it for the perpetuation of this machine
that all of these AI executives are saying is then going to come for everyone else's jobs. And so what you were saying about this class of workers, the business owners that get to become more human because there are all of these AI models now doing the tasks that they don't have to do anymore. It is at the cost of the vast majority of people, who are not business owners,
that are struggling to find work, getting absorbed into the work of providing these technologies that the business owners can use. And instead of becoming more human, they feel like their humanity has been squeezed and diminished, and they have no ability to have control, agency, and dignity in their lives anymore.
“I think this is a big question, that kind of pertains to this graph here,”
which is, you know, all of these people, if we believe Anthropic's prediction of who will be disrupted, these people in these industries like arts, media, legal, life and social sciences, architecture and engineering, computer and maths, business and finance, and management, and also office and admin.
These people, if we believe this, would have to retrain into something else.
Unlike the Industrial Revolution where you might get 10, 20 years to retrain
because factories take a long time to build, the distribution layer that AI sits on top of is the open internet. This is why ChatGPT was able to pop up and get hundreds of millions of users in no time at all and become the fastest-growing company of all time. One of my fears is that this disruption takes place at a speed where we can't transition.
“And that was, you know, that, I think you said the sentence in the passive voice.”
The transition would happen at a speed, but who is driving that speed? It's the companies and their race with one another. Yeah. And so they are driving the transition to happen at a speed at which it would be really hard to take care of all of the people that would be bulldozed over by it. This is one of the really crazy questions that no one can answer for me.
When I sit with these people that are AI CEOs, I go: so what happens to the people, if you agree that this is going to happen at super speed? You know, I've spoken to the CEO of Uber, Dara, who said very similar things to what you're saying, which is, you know, there'll be data labeling jobs, for example, for the drivers.
But they can't all become data labelers. And there's a question around meaning and purpose and fulfillment, and what comes from losing your meaning in life. I also sit here with so many people who talk about how their father lost their job in Iran or some other country and came to the United States and had to be
a toilet cleaner. In one particular case, it was a doctor in Iran, but he came to the US and was a toilet cleaner, and had to deal with the sense of shame that that person felt, and the lack of dignity that that caused, and how that made that person's self-esteem fail, and the depression and alcoholism that transpired from it.
If this happens at a large scale across society, there are going to be a ton of consequences like that. I mean, these are the core themes of my work.
And the reason why I'm critical of these companies is that they are creating technologies
in a way that creates the haves and have-nots in an extreme form. It's exacerbating the inequality that we already see in the world. Like, the people who have things will have way more riches. They'll have way more free time. They'll be allowed to be more human. But the people who don't have things are being squeezed even more.
And it's not just from a work perspective. I mean, I talk in my book also about the environmental and public health crisis that these companies have created where they are building these colossal supercomputer facilities and in communities all around the world. And they specifically pick some of the most vulnerable communities.
We're sitting in Texas right now. One of OpenAI's largest data center projects is being built in Abilene, Texas, as part of the Stargate initiative, which was an effort announced at the beginning of Trump's second administration
to spend $500 billion on AI computing infrastructure.
This facility, when it's finished, will consume more than a gigawatt of power, which is over 20%. So this is actually a little bit inaccurate now. This was something that circulated online for a while, but there are updated numbers. Just for anyone who can't see because they're listening, let me describe what we're looking at.
It's a picture of the size of this facility. So this is not the Abilene, Texas one. This is a Meta facility. So let's talk about OpenAI's facility in Texas. That one would be the size of Central Park.
And it would run a million computer chips. And it would require the power of more than 20% of New York City. Do you know one of the things which I found confusing, which I'd like to clear up?
“It's that I thought you were saying earlier that you didn't think the job disruption promises were real?”
No, what I was saying is that when we talk about what these executives predict about the future,
we need to understand that they are ultimately trying to influence the public in a way
that allows them to continue maintaining control over the technology. So do you actually think that the job disruption they talk about is real? Well, I want to comment specifically on this chart. But we've already seen in jobs reports that there is a restructuring of the economy happening right now. Yeah.
But going back to the data centers. So this supercomputer facility, a Meta supercomputer facility, is being built in Louisiana. And it would be four times the size of the Abilene, Texas one, and use half of the average power demand of New York City. And it's one fifth the size of Manhattan.
This picture makes it seem like almost all of Manhattan, but it would be one fifth the size of Manhattan.
“When these facilities go into these communities, what happens?”
Power utility prices increase, grid reliability decreases.
The facilities also need fresh water to generate the power for powering them ...
And there have been lots of documented stories of communities that are already really constrained in their fresh water resources.
“They're under a drought when a facility comes in.”
And then there are the people. The community is actually competing with this facility for fresh water. I talk about one of those communities in my book. And also, sometimes these facilities, instead of connecting to the grid, a power plant pops up next to them instead.
So in Memphis, Tennessee where Musk built Colossus,
the supercomputer for training Grok, he used 35 methane gas turbines to power the facility. This is a working-class community, a Black and brown community, a rural community, that was not even told that they would be the hosts of this facility. And they discovered it because they literally smelled what seemed like a gas leak in all of their living rooms. And that's when they discovered that these methane gas turbines were taking away their right to clean air.
And this is a community that has already been facing a history of environmental racism. They had already had lots of struggles to access their right to clean air. And now there's this huge supercomputer that's landed in their midst that is pumping thousands of tons of toxins into their air, exacerbating the asthmatic symptoms of the children, exacerbating the respiratory illnesses of other people. It's one of the communities that has the highest rates of lung cancer.
And then, on top of that, they could soon see supercomputers taking their jobs. So this is what I mean when I say the haves and have-nots are fundamentally being pulled apart even further. Like, if you, in this version of Silicon Valley's future, are in the misfortune category of being a have-not, you're talking about getting a job that is way worse than what you had, because you might be doing data annotation.
And you might be treated as a machine rather than as a human, to extract the value of your labor for perpetuating this labor-automating machine that these people are building. You might be competing with these facilities for fresh water resources. They're also polluting your air. Your bills have increased, so the affordability crisis is getting worse.
“Like how is that making people able to be more human?”
What do we do about it? Yes.
Okay. So one of the analogies that I always use is AI is like the word transportation. Transportation can literally refer to everything from a bicycle to a rocket.
And we have nuanced conversations about transportation, where we always say we need to transition our transportation towards more sustainable options, the transition towards, you know, public transport, electric vehicles. And we never say everyone should get a rocket to serve all their transportation needs, right? Like, we're in Austin. If you used a rocket to fly from Dallas to Austin, that would just make no sense. It's a disproportionate use of resources to get the benefit of getting from point A to point B.
This is how we should think about AI. So all of the models that we've been talking about, I like to think of them as the rockets of AI. They use an extraordinary amount of resources, and they provide some dramatic benefit to some people. But they are also exacting an extraordinary cost on a large swath of people because of the costs of developing this technology. So why don't we build more bicycles of AI? This is things like DeepMind's AlphaFold, which is a system that predicts how proteins will fold based on their amino acid sequences.
“You can say this is really important for accelerating drug discovery and for understanding human disease, and it won the Nobel Prize in Chemistry in 2024.”
And the reason why it's a bicycle of AI is because you're using small, curated datasets. You just have data that pairs amino acid sequences with protein structures. So that means you need significantly less computational resources to develop the system, which means significantly less energy, which means less emissions, and so on and so forth. And you're providing enormous benefit to people. It feels like the horse has left the stable in this regard, because they've already taken people's IP. They've taken media. They train on this podcast. We know they do, because the evidence shows that they do.
I think there's a button, actually, in the back end of the YouTube channel that allows you to just click it, and it says: we will train on your YouTube channel.
The horse has kind of left.
Here's the thing. If the horse truly had left the stables, they wouldn't have to train on anything anymore. Why is it that their appetite for data has actually expanded?
It's because, in order to build the next generations of their technologies, in order to have the technologies continue to be relevant and continue to update with the pace of new knowledge creation in society, they have to train again and again and again and again. And why are they employing more and more data annotation workers over time? It's because they need more and more of that work. I've been reporting on data annotation work for over seven years now, and it's not gone down. It's increased.
“Do you think there's any chance of it going down? Do you think there's any chance of this sort of brute-force scaling approach, where you take data?”
You take computational power, energy, and you have data labelers, and you build out more and more parameters for the models.
Do you think there's any chance it's going to stop or go in a different direction other than the one it's going in now? I would love to reframe the question and say: what should we be doing in this moment where it's not going down? Where we do recognize that these companies, in this moment, need continued resources, inputs, and labor to perpetuate what they are doing. Yeah, because this sounds like "stop," and I just feel like "stop" is hard. I just think, you know, with governments in place that are supporting these companies like crazy, globally, this is happening.
So I'm like, "stop" doesn't feel realistic. I always say we need to break up the empire and we need to develop alternatives, and we are already seeing a flourishing of incredible grassroots movements that are applying an enormous amount of pressure to the way that the empire is trying to unfold its agenda. 80% of Americans in the most recent poll think that the AI industry needs to be regulated.
“When was the last time that 80% of Americans were on the same side of an issue?”
No, yeah, when I have these conversations on the podcast, the comment sections are clear. Yeah. There's no disagreement. There's no one in there going, oh no, I think they should crack on. Yeah, dozens of protests against data centers have broken out all around the U.S., all around the world. So what do we do about it?
So these are people that are doing something about it. They are actually reasserting their agency and exercising democratic contestation against the ways that the empires are going about their business. What goal should we be aiming at? So if I said to my audience, Karen, because this is kind of what I see in the comments: hopelessness. What can I do?
Yeah, well, the goal is not that we completely get rid of this technology. The goal is that these companies need to stop being empires. And the way I define a typical business versus an empire is that the empires are predicated on this idea that they do not have to provide a fair exchange of value with the workers who work for them, or the people who use them, or all of the other people that are involved in the supply chain of producing and deploying these technologies. They can extract and exploit, and extract and exploit, and get more value than what they offer.
Whereas with typical businesses, there is a fair exchange. You buy a service, and you feel like you got the same amount of value as the service they provided. But these data annotation workers, for example, do not feel in any way that they're being paid the same value that they provide to these companies. So for me, the North Star is: we should be pushing back and holding these companies accountable when they operate in an imperial way. And that's what we've seen with all of these people that are now literally protesting in the streets against data centers, and having an enormous effect, by the way, actually stalling data center projects and also completely banning data centers from being developed in their localities.
“We're seeing that with artists and writers that are suing these companies for intellectual property infringement and creating a huge public conversation about how we actually want to protect our intellectual property.”
Like, three weeks ago, I met Megan Garcia, who is the mother of Sewell Setzer III, the 14-year-old who died by suicide after being sexually groomed by a Character.AI chatbot.
And she, when that happened, I mean, obviously she was incredibly devastated by what had happened to her son. But she also just had to do something about it. She sued the companies, and that lawsuit then sparked many other parents and families who were experiencing similar things to sue these companies as well. That has created an enormous public conversation about what these companies are actually doing when they exploit and extract, and what the cost is to the lives of people around the world, including children.
So what do you think my audience should do? If they agree with everything written in your book, Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI, if they agree with everything we've discussed today, if they care about their kids, if they don't want everyone to become data labelers, if they don't think that's a particularly great solution, what can they actually go and do?
When I was writing the book, the only discourse that was happening was this i...
Because of all the actions of these people, like, speaking up when they're not happy with the things that these companies are doing, we now have 80% of Americans that want to regulate this industry.
And so I would say to people: think about all the ways that your life intersects with the resources that the AI industry needs to perpetuate what they do, and also the spaces that they would need to deploy these technologies in to continue having broad-based adoption of their work.
“So you're a data donor to these companies. You could withhold that data, and that's what those artists and writers are doing.”
They're suing these companies to try and create mechanisms by which that data would be withheld. You probably have a data center popping up around you. If you're in a school environment or a company environment, you're probably having a discussion in those environments right now about what the AI adoption policy should be. And these companies, like, I was talking with some OpenAI employees just the other day, and they were telling me that it's understood internally that the revenue targets for the company are extraordinary.
And they need things to go flawlessly for it to all work out. And so they would need every single person to adopt this, every single space to adopt this. They would need to be able to build out their data centers at the speed that they're trying to build them. And so what I would say to every one of your viewers is: let's not make it go flawlessly if we don't agree with what they are doing. Okay, I agree. And then let's build alternatives, because the thing is, what I'm saying is not that these technologies don't have utility.
It's that, specifically, the political economy that has emerged to support the production of these technologies right now is exacting a lot of harm on people. But we have research that shows that the very same capabilities could be developed with much more efficient methods, with much fewer resources. And we have a lot of other AI systems at our disposal that are like the bicycles of AI, that we also know provide extraordinary benefit at very little cost. So let's break up the empire and let's forge new paths of AI development that are broadly beneficial to everyone.
“It's strange. I think I've trained myself to deal with dichotomies in my head. And this for me is such a dichotomy, where I, as a CEO and as a founder, as an entrepreneur and someone that loves technology, I think it's incredible.”
AI is actually incredible. It's just so amazing and incredible, the things it's enabled me to do and create.
Yeah, because it's designed to enable people like you. And my car kind of driving itself in the morning and being safer is incredible. I think, you know, the billion-odd people that use AI tools, ChatGPT or whatever it might be, would probably say that it's added value to their life. And this is the part that people find confusing, and I like investing in companies that are heavily using AI. But is it possible to think that is true, and also think that there are significant unintended consequences, which the history of technology should have taught us to take a moment to pause and talk about?
“Because I think this is absolutely right. Like, you can have both of these things in your head. And what I'm saying is that this tension doesn't have to be a tension.”
Because we could actually preserve the utility and benefits of these technologies, but develop and design them in a different way that doesn't have all of these unintended consequences. Yes, and I think there needs to be a big social conversation, which is why I have so many conversations about AI. There needs to be a big social conversation about being intentional about the social impact, the social and environmental impact. And that conversation is not being had in government, from what I can see. The conversation takes place in the industry.
And actually trying to pull it out of the industry and open people's minds to it is hopefully what we've been doing over the last couple of months on this subject. And it has just been happening everywhere outside of the industry, in local governments and state-level governments. There have been huge conversations about this everywhere. Like, I've been on book tour.
I've been to dozens of cities around the world. People are having these crucial conversations everywhere. I have not gone to a single city.
Yeah, it's everywhere. Even here in South Wales. Yeah, I've not gone to a single city where the room is not packed and people are not wrestling with the exact same questions as every other person in every other room that I've been in. Speaking of packed rooms, I know you've got to go because you've got a talk today. We've got one last question, which is the closing tradition of this podcast. How would your advice to a friend with a terminal diagnosis differ from what you would do yourself? That's a great question.
From what you would do yourself.
Well, I think it's a good thing you're not taking it easy because you're leading a conversation which is incredibly important. And I think that's the thing. I think the conversation is the important thing.
“And so, because of algorithms and echo chambers, it's so rare to have a conversation these days, especially a long-form one like this. So I think the same applies to your book, for anyone that's curious.”
I think a lot of people will have learned a lot of stuff today, because I sit here and interview AI people all the time, and I've learned so much today from reading your book and the extensive, objective perspective that your book takes. You're able to unravel all of these stories that we sometimes see in tweets, and we don't know if they're true or not, because you've gone and met the people and you've done your research, and you're an incredibly intelligent person, an extremely intelligent person, who clearly has humanity's interests as your North Star, and that shows up in everything you do and everything you say. So please continue to fight in the way that you are, because it's an incredibly important fight, and it's people like you that are, I think, galvanizing the world to take the collective action that we're starting to see everywhere.
“Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI by Karen Hao. I'll link it below for anyone that wants to read this book, and I highly recommend you do. It's a New York Times bestseller for good reason.”
Karen, thank you.
Thank you so much, Steven.


