Today's episode is sponsored in part by Shopify, Quote, Indeed, and Experience.
Shopify is the global commerce platform that helps you grow your business.
“Start your $1 per month trial at Shopify.com/propheting.”
Quote is an AI-powered phone system that brings your calls, texts, and contexts together in one place. Try Quote for free. Plus get 20% off your first six months when you go to Quote.com/propheting. Indeed helps you attract, interview, and hire talent all in one place.
Get a $75 sponsored job credit to boost your job's visibility at indie.com/podcast. Experience is a financial app that negotiates your bills and cancels unwanted subscriptions. Get started with the experience app today.
As always, you can find all of our incredible deals in the show notes or at youngimpropheting.com/deals.
If it is a simulation, then presumably we can infer a few things that the people building it would have to be very technologically advanced. Nick Bostrum isn't just a philosopher. He's a global thought leader on the feature of artificial intelligence. He's the author of Super Intelligence, the groundbreaking book that brought the risks of
advanced AI into mainstream conversation. People have, for thousands of years, tried to create imaginary worlds that people can experience with bits of theater, right, or literature. Maybe for these post-humans that might be interested in knowing what, if they ever ran into alien civilizations, what those would be like.
“How do you think about AI in terms of the significance in humanity?”
Reviewing the rapid recent advances that we've seen in the field of artificial intelligence, it really looks like we kind of possibly figure out a large component of the secret sauce. So how do you think entrepreneurship will change in this world? Do you mention that there might be still some jobs? The kinds of jobs that might remain, I think are...
If it's true that we're living in a simulation, what do you feel like are the moral implications of what it means for our lives? That's difficult, I think. Hey Young Profitors, AI is everywhere right now and if you're not paying attention, you're already falling behind.
So in the spirit of this week's AI theme, today's yap classic tackles the bigger question. It happens when AI becomes more intelligent in humans. I'm bringing back my conversation with Oxford philosopher Nick Boastrum, one of the world's most influential thinkers on artificial intelligence. We dive into superintelligence, the risks and the opportunities of AI, and how this technology
could reshape human civilization for generations to come. Fair warning, you will not be thinking about AI the same after this. Now here's my conversation with Nick Boastrum. Nick, welcome to Young Improveding Podcast. Thank you so much for having me.
I'm so excited for this conversation. I love conversations about the future about AI and you've spent your career focused on really deep, long-range questions, the deepest questions that we could really ask about humanity. And so I'm wondering, what really first drew you to thinking about humanity thousands and even billions of years into the future?
“I think it's sad if we have this a lot of time here on the planet in this magical cosmos.”
And we never really take the time to look around or try to figure out what is going on
here. I feel sometimes we are a little bit like ants running around being very busy, pulling our needle to the ant hill. But don't really stop to reflect what this ant hill that we're building, what is it for? What else is going on in this forest around us?
It's so true. You're just focused on working and hustling and not really paying attention to what we're even living in. And I know that one of the things that made you famous is that you put out a paper in 2003 and you talked about how we're living in a simulation or you had the hypothesis that we're
living in a simulation and it's actually what first major famous is putting out this paper.
So talk to us about in 2025, what are the odds that you think that we're currently living in a simulation right now? I tend to hunt on the probability question there. I often get asks, but I've kind of refrained from putting an exact number on it. I do think I take it as a very serious possibility though.
The simulation argument itself, that you're referring to the paper that was published in 2002, only demonstrates that one of three possibilities obtains one of which is the simulation hypothesis, but the simulation argument itself doesn't tell us which one of those three. So you need to sort of bring additional considerations to bear.
If you're thinking ahead, you know, in this time of rapid advances in AI, whe...
of this might be going, if you think eventually we'll have these super intelligenceists that develop all kinds of super advanced technologies, you know, maybe colonized space, transform planets into giant computers and amongst the things they could do with that kind of technology would be to run simulations, detailed simulations of environments like hours and including with brains in those simulations, simulated at a very high level of granularity.
And so what that means is that if this happens, that could be many, many more people like us with our kinds of experiences being simulated than being sort of implemented in the original meat substrate. And if most people with our kinds of experiences are simulated, then we should think we are probably amongst the simulated ones, rather than the rare exceptional original ones, given
that from the inside, you wouldn't be able to tell the difference.
“Yeah, but I really want to know, like, do you think we're living in a simulation?”
Well, as I said, I take the hypothesis seriously. Yeah. So you have one of three, where you say, like, we could become extinct before there's post humans, right? Then you say, we might be living in a simulation, talk to us about the three hypothesis
that you have. Yeah, so if you break this down, right, so if we do end up with a future where this mature civilization runs all these simulations of variations of people like the historical predecessors, then there would be many more simulated people with our experiences than non-simulated ones.
Conditional on that, I think we should think we are almost directly amongst the simulated ones. So then if you break this down, what are the alternatives to that?
“Well, one is that we won't end up with this future, and that could be because we go extinct”
before reaching technological maturity. So that's one of the alternatives. But not just that we go extinct, but it would have to be pretty universal amongst all other advanced civilizations throughout the universe, that they almost all would have to go extinct before reaching the level of technological capability that would allow them
to run these types of ancestors simulations. So that's possibility one, a strong filter that just every civilization that reaches our current stage of technological maturity just fails to go all the way there.
And the second is that, well, maybe they do become technological mature, but they decide
not to use their planetary supercomputers for this purpose. They have other things to do, maybe they all refrain from using even a small portion of their computational resources to run these simulations. So that's the second alternative, a strong convergence, they all lose interest in running computer simulations.
But if both of those failed and we end up with a third possibility that we are almost certainly current living in a computer simulation created by some advanced civilization. Yeah. And the advanced civilization, you talk, you say they're post human, right, post humanity.
“Can you talk just about how you envision this post humanity?”
What do they like? What are their capabilities? Well, if this is the simulation, then presumably we can, for a few things, that the people building it would have to be very technologically advanced. Because right now we can't create computer simulations with conscious human beings in them.
Right?
It's like they need to be very powerful computers, they need to know how to program them, etc.
And then you can figure if I have the technology to do that, they probably also have technology to do a bunch of other things, like including enhancing their own intelligence. And so I imagine these would be super-intellidences that would have reached a state close to technological perfection. And then the, for the reason they have some interest in doing this stuff, but beyond that,
it's hard to say very much specifically about what they would be like. Now that AI is at the forefront, do you believe that maybe these post humans might be like part-human, part AI, or all AI? I mean, at that point, the distinction might blur, which also might be the case for us in the future, if things go well.
And we are allowed to continue to develop, well, we will develop by saying artificial super-intelligence, but amongst the things that that technology could be used to do would be providing paths for us current biological humans to gradually upgrade our abilities. This could take the form of biological enhancements of various kinds, but it could also
ultimately take the form of uploading into computers, so you could imagine detailed scans
of human brains that would then allow our memories and personalities and consciousness to continue to exist, but then in digital substrate.
From there on, you could imagine further development, you could add neurons, ...
the processing speed, you could gradually sort of become some form of, you know, radically
post-human super-being, that might be hard to differentiate from a purely synthetic AI. So, interesting, so your theory of, if we're in a simulation, there's post-humans who are really technologically advanced, and they are creating our world, which you call an ancestral, an ancestor civilization, correct? Why would they do that?
“Like, what would be the reason of them creating a civilization like ours?”
Yeah, so we can only speculate, we don't know much about post-human psychology or their motives, but I mean, there are several potential reasons, motivations.
You could ask why it is that we humans, with our current, the more limited technology, create
computer simulations, and we do it for a variety of purposes, people have, you know, for thousands of years, try to create imaginary worlds that people can experience with a theater, right, or literature, and more recently through virtual reality and computer games. This can be for entertainment, for cultural purposes, you also have scientists from creating computer simulations to study various systems that might be hard to reach in the nature,
but you could sort of create a little computer simulation of them, and then you study how the simulation behaves.
“So that could be entertainment reasons that could be scientific reasons, you know, maybe”
for these post-humans that might be interested in knowing what, if they ever ran into alien civilizations, what those would be like, and maybe one way to study that is to simulate many different originations of higher technological civilizations, like starting from something like current human civilization or before and sort of running the type forward and seeing what the distribution is of different kinds of super intelligence, as you would get from
that. And you could also imagine other, you could imagine, like, historical tourism, if they
can't, it can't literally travel back in time, but what the second best might be is to
create sort of a replica of historical environments that future people could sort of experience almost as if they were going back in time, but living in a sort of temporarily exploring a simulated reality, you could imagine other sort of moral or religious reasons as well of different kinds. If it's true that we're living in a simulation, what do you feel like are the moral implications
of what it means for our lives? That's difficult. I think to first initial approximation, I would say, if you are in a simulation due to the same things, you would, if you knew you were not in a simulation, because the best guide to what would happen next in the simulation and how your actions would impact things
might still be the normal methods we use, like you look at patterns and extrapolate those, so whether we are simulated or not, unless you have some direct insight into what the simulator's motives are, or like the precise way in which this simulation was set up, you just have to sort of look at what kind of simulation disappears to be and what seems to, you know, if you do A, you know, B follows, if you want to get into your car, you have
“to take out your car keys, if you want to do this, so I think that would be to a first”
cut, the answer, but then to the extent that you think you have some maybe probabilistic guesses about how these things are configured, that might give you sort of on the margin more reason to emphasize some hypothesis that otherwise would be less plausible. So for example, if we are not in a simulation and you have a secular materialistic outlook or life, then when we die, we die, and that's it, right?
In a simulation, it could potentially be moved into a different simulation or uplifted to the level of the simulators, these would at least be on the table as possibilities. Similarly, if we are in basement physical reality, as far as we know current physical theories say, the world can just totally pop out of existence, like our conservation of energy, conservation of momentum and other physical laws that prevent that from happening.
If, however, our world is simulated, then, you know, if theory, if the simulators flick the power off, the our world would pop like a bubble disappearing into nothingness.
For all this speaking, I think that would be a wider range of possibilities o...
if we are simulated than if we are not. So it might mean approaching our existence with less
confidence that we have it basically figured out and thinking that might be more things.
On heaven and on earth, then we sort of normally assume in our sense philosophy and that maybe some sort of attitude of humility would be appropriate in that context. It's so interesting. Is there any sort of like clues or pieces of proof that prove we're in a simulation? Like for example, like the dinosaurs and how they just like went extinct and then, you know, it's kind of like a new world after that. Do you feel like there's any clues
to that we're in a simulation? I'm rather skeptical of that. I get a lot of random people emailing saying they've discovered some glitch into matrix or something. Again, as somebody was looking at their bathroom mirror
“and thought they saw pixels or like other. But I think the thing though is that whether we are”
in a simulation or not, you would still expect some people to report those kinds of observations for all the normal types of psychological reasons. Like some people might hallucinate something, some might be misremembering something or misinterpreting something or making something up or like these things you would expect to have take place anyway. So I think whether we are in a simulation or not, the best most likely explanation for those reports are these ordinary psychological phenomena
rather than that there is actually something defect in the simulation that they have been able
to detect. I think to create a simulation like this in the first place would be very hard and
simulators advanced enough to do that would probably also have the ability to patch things up so that the creatures inside the simulation couldn't notice. And if they did notice, they could sort of edit that out or rerun it from an earlier safe point or edit the memory or do other things like that.
“So I don't think that. I think that there are indirect observations that might slightly”
adjust the probability. So if you recall the original simulation argument with these three possibilities, right? The simulation argument shows at least one of them is true, but doesn't tell us which one. But what that means is that to the extent we get evidence against the first two possibilities, it would tend to shift probability mass over to the remaining third and conversely. So if you think we can get evidence for against, say the first one, which is that almost all civil
essays that are current stage of technological development got extinct before reaching technical maturity, then that would tend to indirectly affect the probability of the simulation hypothesis. So in particular, the closer we get ourselves to being able to wield this technology, you know, the less likely it is that almost all civil essays at our stage fail to go the remaining few steps. And so that that will tend to, you know, move probability over to the remaining possibilities.
So basically what you're saying is if we can in ourselves, like as technology advances, start to create simulations, that that means that we're probably in a simulation ourselves. Yeah, and in particular, if we, yeah, get both the ability and we still at that point retain the interest in doing that, that would sort of really, really leave only the third possibility. But but short of getting all the way there, if we get sort of half the way there, like it still
is, you know, removes some of the places where this great filter could be. Like the closer we get, the less chance there is that the remaining few steps will be impossible to to take. Do you feel like AI is going to help us get closer to creating the simulation? And now that AI is something that we use all the time, do you think it will help discover that we're in a simulation? Well, I mean, for the same reasons, we might be able to figure
these things out. If, if indeed the argument is sad, and then I think as other powerful minds would also reach that same conclusion. And I might also reach the conclusion that they are in a simulation. For them, it might, especially for sort of early stage AI's be a very salient possibility because already right now, when training our AI's, we often put them in simulations. That's like a big part of the training and right with caretis like virtual
game environments or something. And so for them, it seems like a very obvious hypothesis to consider
once they develop some degree of situation or awareness. And you'll always see glimmers of
“that with current AI systems. And in fact, I think this might be an important shape or of AI behavior”
as they become sophisticated about reasoning strategically about the context. They would certainly
Be shaped by considerations that they might well be living currently in a sim...
rather than in deployment. Young and profitors, raise your hand if you've ever googled
one tiny symptom and then five minutes later, you're convinced that you're dying. I know I have been there done that way too many times, but instead of guessing and spiraling, I use Zachdach. In fact, I've been using Zachdach for like 10 years. I literally don't know how to book Drs. appointments any other way. It's a free app and a website that helps you find in book high quality in network doctors based on your needs. You can search by symptom or specialty,
read verified patient reviews and see doctors with available appointments right away.
“If you're a busy entrepreneur and you need to squeeze in a doctor appointment on a random time,”
you can find the doctors that have availability when you have availability.
When an in-person or video visit, it's no problem. With Zachdachdach there are over 150,000
providers across all 50 states and over 200 specialties. And doctor, you could imagine and visit that you need Zachdach. We'll have you covered. And most appointments happen within 24 to 72 hours. So next time we start doom scrolling your symptoms, skip distress and book a real appointment with a real doctor on Zachdach. Stop putting off those doctors appointments and go to Zachdachdachdach.com/propheting to find an instantly book a doctor that you love today. That's
ZOCDOC.com/propheting. Zachdachdach.com/propheting and thanks Zachdach for sponsoring this message. Hey, yeah, fam. I'm not afraid to say producing this podcast requires skills. I do not naturally have from audio engineering to video editing. I have to hire experts who are way better than
“me in those areas. And as I scale my podcast, this becomes more and more important. And every time”
I need to hire, my first thought usually is, this is a job for indeed sponsor jobs. When you
sponsor your job on indeed, you find candidates with the exact skills you're looking for, without the stress of digging through endless resumes. Sponsor jobs posted directly on indeed are 95% more likely to report a higher than non-sponsor jobs. That means you're not tossing your post into the void you are connecting with qualified people who can actually help your business grow. Spend less time searching and more time actually interviewing candidates who check all your boxes.
Less stress, less time, more results. When you need the right person to cut through the chaos, this is a job for indeed sponsor jobs. And listeners of this show will get a $75 sponsor job credit to help your job get the premium status it deserves at indeed.com/podcast. Just go to indeed.com/podcast right now and support our show by saying you heard about indeed on this podcast. Indeed.com/podcast. Terms and conditions apply. Need to hire. This is a job for indeed sponsor jobs.
Yeah, I'm getting on the show. I've sat down with Robert Greene, Seth Goeden, Alex from Mozi, Gretchen Ruben, James Clear, so many epic epic guests. And what of all these guests have in common? Every single one of them has a best-selling book. And every one of those books are on Blinkest. Here's something I've been thinking about. With AI, everybody has access to the same answers. But the people who actually read, who absorb real ideas and think for themselves,
they're the ones with the original take in the room. AI can summarize, but it cannot replace the frameworks that you build by reading widely. That's your real edge as an entrepreneur. Blinkest turns the world's best-known fiction into 15-minute reads or lessons. Over 9,000 titles covering entrepreneurship, marketing, psychology, habits, all the things we talk about on this show. I use Blinkest to go through several books a week, pull out the key insights, and then I decide
which ones I want to read fully. Between meetings at the gym, whenever I have 15 minutes, I'm listening to Blinkest. In a world where everybody is copying the same AI output, reading is a thing that can make you different. Grab your free trial plus an exclusive 30% discount at Blinkest.com/Proffity. I know we kind of alluded to this already, but I'd love to kind of hear what you think about it more. If we are in fact living in a simulation and let's say we
discover for certain or in a simulation we can create simulations. What do you think would happen on Earth? How do you think things would change? From the discovery itself or from other sex that might... For example, will we care about recycling anymore? Will we care about things like that
“anymore? Well, I think humans have a great ability to adapt to changes in worldview and for most”
part most people are only slightly affected by these big picture considerations. I think I mean you can look through human history, different worldviews have common gone. Some people become very fanatical and take it seriously. Most people just probably speaking get on with their lives. Maybe once in a while they get asked about these things and they say certain words rather than
Other words, but by a large.
would be moderate probably, but I imagine in this situation where we develop the technologies
that we say to create our own simulations, the technology that allowed us to do that would also allow us to do so many other things to reshape our world and those more direct, technologically impacts
“I think would be far greater than the sort of indirect impacts by changing our philosophical opinions”
about the world. Well, do you think that people would become more violent? Why would that be the case? I guess because if you're living in a simulation maybe people wouldn't consider deaths to be the same thing anymore. Yeah, so you could imagine if we find out we were in a very particular kind of simulation, like some sort of short duration game simulation, then you could imagine that would shape, just as you maybe behave very differently and when you're playing a computer game, hopefully
like you don't behave the same way in the real life as you do when you're playing a first-person shooter,
so that could be, but if we didn't get any new insights as to how this particular simulation is configured, we just learned that it is a simulation, but not anything about the sort of specific
“character of this simulation, then I don't know whether that would lead to a greater”
propensity for violence. If anything, maybe the converse you think that might be a stages after the simulation, where your behavior in the simulation would affect like kind of similar to traditional ideas or more and after life. Some people might become more violent or fanatical, but they can also serve as a sort of moral balaster, like a kind of, well, there's hopefully you do the right thing just because it's moral, but if not, if there is some system of accountability,
that might also induce other people to pay more attention to making sure you don't harm others, or trample on other people's rights and interest. It's kind of like you could, if you lose the game, you know, there could be winners and losers of the game that were in. Yeah, yeah. So it's hard to know how that all shakes out, but in terms of thinking about the big picture, like the question you started with, why I got so it is, it's in one of a small number of these
fundamental constraints. It seems to me as to what we can coherently believe about the structure of reality and our place within it. And it is striking, like it might have seemed and to, I guess, most people did seem, if you go back a couple of decades ago that it's so hard to know, like, what's going to happen in the future, you know, anything is possible, you can just make stuff up,
it's like the problem is not coming up with some ideas, like that there are no constraints
that would allow us to pick which ideas correct, because we have so much evidence. But in fact,
“I think if you start to think these things through, it can be hard to come up with even one”
or fully articulated coherence picture that makes sense of the constraints that we're already aware of. The simulation argument is one, but there are others, there's like the Furby paradox, why we haven't seen any aliens, there's like what we seem to know about the kinds of technologies that can be developed. There are other, you know, more methodologically shaky arguments, perhaps, but like the the Carter, Leslie, Doomstay arguments, there are a few things like this,
that that kind of can serve to structure our thinking about the really biggest strategic picture surrounding us. Can you tell us about that some of those arguments that the Doomstay arguments and the other one that you mentioned? Well, yeah, so the Furby paradox, I mean, many people will have heard of it, but it's the observation that we haven't seen any signs of extra celestial life, and yet we know that there are many galaxies and many planets, billions and
billions and billions out there, on which it seems life could have originated. So the question then is, with billions of possible germination points and zero aliens that have actually manifested themselves to us or arrived at our planet, how do we reconcile those two? There has to be some great filter that you start with billions of germination points and you end up with a net total of zero extraterrestrial arrivals here. So what accounts for that? And I think the most likely
Explanation is that it's just really hard to get to a technologically advance...
hard to get to even simple life. And you could look for these candidate places of where
“that could be this kind of great filter, maybe it's the emergence of simple self replicators.”
Like so far we haven't found that on any other planet or maybe it's slightly later on, maybe the step from pro-creatic lifeforms to eukaryotic lifeforms on Earth, it looks like that took one and a
half billion years. Like maybe what that means is that it's astronomical improbable for its
to happen and you just had one and a half billions of years where random things just bumped into each other in chance. And with a large enough universe and ours might for all we know be infinitely large with infinite domain of planets, then eventually no matter how probable something is, it will happen somewhere. And then you would invoke a so-called observation selection effect to explain why we are observing that on our planet that improbable event happened. Only those
“planets were that improbability happened, develop observers that can then look back on their”
own history and marvel at this. So that's one possibility, maybe it's slightly later on. Like the lead, the closer you get to current humanity, that it seems the less likely it is that that would be a great filter. For example, you might think that it's the step to a like more advanced forms of cognitive ability, that would be the improbable step. But that doesn't really fit the evidence. We know that on several independent evolutionary
lineages, you had fairly advanced intelligence evolving here on Earth. You have it happening the hominoid and lineage, of course, but also independently amongst birds and corvids, like cross and stuff and among octopi. For example, and so it looks like that's not, if it happens
“several times independently on Earth, then it can't be that unlikely. But anyway, it poses some”
constraints. You can't simultaneously believe that it's easy for intelligent life to evolve and that it's technologically feasible to do large scale space colonization and also believe that there is a wide range of different motives, present amongst advanced civilizations. And while at the same time explaining why we haven't seen any. So something has to give and it gives us clues. The other argument that I've referring to, the Carter-Listly-Doomster argument is it's a piece of
probabilistic reasoning having to do with how to take into account evidence that has an indexical element. So indexically information is information about who you are, what when you are where you are and so the methodology for how to the epistemology of how to reason about these things is quite difficult and murky. So it's unclear whether the Carter-Listly-Doomster argument is
ultimately sound or not, but I can give you a kind of intuition for how it would work. So let's
explain it by means of phenomenalities. So suppose I have two earns and I feel one earn or I put ten balls in one of the earns and the balls are numbered from one to ten and then in the other earn I put the million balls numbered from one to one million and then let's say I flip a coin and select one of these earn and put it in front of you and now your task is to guess how many balls are there in this earn. So at this point you say 5050 that there is a million balls, right? Because
one of the turns on selected one randomly. Okay. Now let's suppose you're reaching and select one random ball from this earn and it's number 8 let's say. So using base theorem that allows you to infer that it's not much more likely that the earn has only ten balls done because if they were
a million what are the chances that you would get one of the first ten, right? I'm like right
for it. So you can calculate this so far it's just standard probability theory, uncontroversial. But then the idea with the cartel list of the doomsday argument is that we have an analogous situation but we're instead of two hypothesis about how many balls earns have. We now instead have say two different hypothesis about how long the human species will last. How many humans will there have been in total when the human species eventually go extinct. So consider
an in reality there are more but we can simplify it to two in the structure of the argument. So
One is maybe there will be in total 200 trillion humans and then you know may...
technology and blow ourselves up. So that's like one thing you might think could happen and let's
consider an alternative hypothesis. Maybe there will be two thousand trillion humans like we eventually start to develop. Space colony we colonize the galaxy our descendants live for hundreds of millions of years and they're like you know vastly more people. So these these two then corresponds to the two hypothesis about how many balls there are in the earn. Then you have some prior probability on these two hypotheses that's based on your ordinary estimates of different risks from nuclear
weapons and biological weapons and all of these things. So maybe you think it's 50-50 or maybe
“I think it's like 90 percent that we will make it through and 10 percent that we will go extinct or”
what whatever your probability is from these normal considerations. But then the doost argument says that well there's one more really important piece of information you have here which is that you can observe your own birth rank, your sequence amongst all humans who have ever been born. And so this
turns out to be roughly 100 billion that's roughly speaking how many humans have existed to date
on earth. And so the idea that is that if humanity got extinct relatively soon then you will be a relatively like being number one hundred billions of say you know 200 billion humans is very unsurprising right that's like corresponding to getting ball number eight from an earn that has 10 balls you know or 16 balls or something like. So the conditional probability of you observing having the birth rank you have given that that would be relatively few people in total
that conditional probability fairly high. Whereas the conditional probability if you being this early if there's got to be quadrillions of humans spreading through the universe very improbable like a randomly selected human would be much more likely to live much later in life on some far away. And so then the idea is you do a similar based on update and end up with the doomsday argument conclusion which is that dooms soon hypotheses are much more probable than you would
naively think just taking into account the normal empirical considerations. And so that you would have this systematic pessimistic update that's roughly speaking how it goes. And there's kind of
“more to it. In particular to back up this premise that we used like that you should so”
as it were. Recent as if you were some randomly selected human from all the humans that ever have existed. Maybe you think why think that but there are then some arguments that seem to suggest that something like that is necessary to make sense of how to reason about these types of indexic goals. So deep let's go let's let's switch gears into AI all the stuff that you're saying is like so interesting in terms of like how we can approach life and and I know there's
so many like doomsday people out there. So it's great that we got some context in terms of like what they're thinking. But let's talk about AI because if we are in a simulation AI could be what helps us actually create more simulations and prove that we're in a simulation. In your opinion how do you think about AI in terms of the significance in humanity? Do you feel like it's bigger than something like the agricultural revolution or the industrial revolution? Do you feel like this is
“one of the biggest breakthroughs that we've ever seen as humanity? I think it will be and”
to a large extent my reasons for thinking that or independent of the other considerations that we discussed. So you don't have to believe in the doomsday argument or the simulation argument or any of them. I mean I think those are helpful for informing us about the big picture. But even setting that aside I think just well a reviewing the rapid recent advances that we've seen in the field of artificial intelligence. It really looks like we kind of possibly figure out a large component
of the secret sauce as it were that makes the human brain capable of general purpose learning and it does seem current large transformer architectures do exhibit many of the same forms of generality that the human brain has and there is no reason to think we've kind of hit the ceiling.
And also from first principles if you look at the human brain, it's a physiological system
quite impressive in many ways but far from the physical limits of computation. It has various constraints. First and most obviously it's kind of restricted in size. Like it has to fit inside the cranium. Whereas like AIs can run on arbitrarily large data centers,
The size of warehouses or bigger.
information processing a human neuron operates on the time scale of maybe a hundred hertz.
It can sort of fire a hundred times per second and give or take. Whereas even a current
data transistor can operate that gigahertz. It's a billions of times a second. So there are various reasons to think that the ultimately mainstream formation processing with mature technology are just way beyond what biological human or other brains can achieve. So ultimately the potential for intelligent information processing in machine substrate could just
“like vastly outstrip what biology is capable of. And as I think if technological and scientific”
development is allowed to continue on a broad front, we will eventually reach there and more over recently it does seem like we are sort of on the path to sort of doing this. So those are some of the kind of basic considerations that looked like we should take this quite seriously. And then you can think what it would mean if we really did develop AI artificial general intelligence
and I think the first thing it would mean is that we would soon develop super intelligence.
I don't think we would go all the way up to sort of fully human level AI and then suddenly it would stop there, right? I think so then we will have a world where we are able to engineer minds and we're all human labor, not just kind of muscle labor that we started to be able to automate with the industrial revolution with steam engines and internal combustion like we have digging machines that are much stronger than any human strong, strong by and etc.
But like we will then have machine minds that can outthink any human, you know, genius scientist or artist. And so it's really the last invention we will ever need to make because from that point on further inventions would be much better and faster made by these machine minds. And so I think yeah, it will be a very fundamental transformation of the human condition. And it's hard to reach. You can some people say, well, the industrial revolution and I think
“you can learn something from parallel to that, but maybe you need to go back more”
like to the origination of almost happens in the first place or maybe to the emergence of life.
I think it would be more at that level rather than like, you know, the mobile internet or the cloud or one of these other sort of recent buzzwords that people get excited about. Yeah, because it's almost like evolution. It's almost our evolution as humanity. It could lead to extinction, but it could lead to also our evolution in terms of how we interact with this AI or if we could be the big on lock, right? Like that's kind of so I think, I mean, so in my
earlier work and like this this book's super intelligence past ginger strategies came out in 2014 that focused a lot on well identifying this prospect that like that we will eventually get to AI and super intelligence and then also the risks associated with that including existential risks. Because at that time, this was very much an neglected topic that nobody was taking seriously, certainly nobody like in academia. In the end, it seemed to me quite predictable that we would
eventually reach that point on that now. In fact, that is much more widely recognized. And things that have moved from sort of fringe dismissed the size fiction or now, you know, you see statements coming out from, you know, from the White House and other governments around the world and the leading AI labs have now research teams specifically trying to solve scalable AI alignment like the big technical problem of how can you sort of develop algorithms that would allow
you to steer arbitrarily in talent and AI systems. It's like a very much an active research front here. So that's very much part of my picture that there will be big risks associated with this transition, but at the same time, the upside is enormous. The ability to unlock human potential to help alleviate human misery and to really bring about a wonderful world. I see it set of as a kind of portal through which humanity at some point will need to passage. That all the past really
“great futures, ultimately, I think, lead at some point or another through this development of”
greater than human intelligence. And we really need to be careful when we're doing it to make sure we get it right, as far as we can. But also, that it would be in its house, I think, a kind of existential catastrophe, if we sort of forever failed to take this next step. Something that I keep thinking about is going back to this like, we could be in an ancestral
Simulation.
history and saying, like, okay, like, how did we really come about and maybe they're studying
how humans could have evolved and created these advances and then created their own simulations.
“Like, maybe they're trying to figure out how they became an existence. Does that make sense?”
Yeah, one possible reason, as we alluded to earlier for why a technologically mature civilization might run ancestral simulations would be this scientific motive of trying to better understand the dynamics that could shape the origination of other super-intelligence civilizations. So if they originate from sort of biologically evolved creatures, then, like, studying those types of creatures, different possible creatures, the societies they build, the dynamics
that could be one motive that could drive this. But yeah, there are other possible motives as well, but that's one of them. It's one of them. I mean, you might wonder whether it would saturate. So it's not just whether it could lead some advanced civilization to create
“some simulations, but you also have to think they could create very many simulations. So”
over the course, these sort of mature civilizations might last for billions of years, right? And you might think that that would be diminishing returns to running scientific simulations.
Like, the first simulation you learn a lot of the next house and you learn a bit more.
But after you've already run billions of simulations, maybe the incremental gain from running a few more starts to plateau. Whereas that might be other reasons for running simulations that wouldn't be subject to the same diminishing returns. If that's the case, you might think most simulations that run would be once driven by other motives than the scientific one. Like entertainment or something like our movies. Yeah, like if they play some intrinsic value
on simulations, for instance, that would be one example of a motive that might not saturate in the same way. I want to move on to understanding your three levels of AICF oracles,
“genies, and sovereigns. Can you explain what each one is and maybe some of the risks of each one?”
Yeah, as not so much levels, but more types. Okay. So in Oracle AI, I basically
is a question answering system. Like an AI that you ask a question and it gives an answer. This is kind of similar to what these large language models have in a Python. They don't really do anything, but they answer questions. So this is like one template. Adini would be some task executing AI. So you give it a particular task and it performs the task. These types of systems are currently in development. Maybe we'll see this year more agent like systems being released.
Already, just actually, I think last week, open AI released codecs, which is a sort of coding agent that you can assign a programming task and it goes off and starts mucking around with your code base and hopefully solves the task. And you could imagine this being generalized maybe in a few years to physical tasks with robots that can do the laundry or sweep the driveway or do like these things. Like Adini is more an AI that operates autonomously in the world in pursuit of some
open-ended long-range objective. Like, you know, make the world better or make people happy or enforce the peace between these two different nations and this kind of autonomously running around trying to shape the world in favor of that. That the way that currently like humans and nation states are and maybe corporations to some extent, these kind of open-ended. It's not just that they're doing one specific task and then come back for more instructions to have their own sort of open-ended.
So these are three different sort of templates for what kind of AI system one might try to build them. They come with different pros and cons from a safety point of view and a utility point of view. Did you go over what so sovereign is more like inorganization or nation and has like multiple steps correct? And Gina can it carry out one thing? It could be a single agent as well. Like in this sense, it doesn't mean sovereignness in national sovereignty. It like means that you could be a sovereign.
If you, if you set yourself the goal in life of trying to alleviate the suffering of the global poor, for instance, that you can do that your whole life. It involves many specific little tasks like
Or trying to rate money for this charity and trying to launch this new campai...
you know, invent some new medicine that will help, you know, all of these would be sort of
“subtasks, but it's in pursuit of this open-ended objective. Similarly, you could have an AI system,”
maybe internally it's like a unified simple agent architecture, but that is operating in pursuit of such open-ended objective. Conversely, like even an Oracle that just tries to answer question, internally, theoretically it could be a multi-agent architecture. We have different sort of research agents that get sent off to answer different sub-questions, in order then to combine at the end to produce an answer to the user. One has to distinguish that of the internal architecture of
the system from the role that it is designed to play in society. Got it. What are the different ways that each one of these types of AI could go wrong? Yeah, so they all share a bunch of things that could go wrong with all of them, which is however they are intended to operate, they might not actually operate that way. So you might construct an AI that you intend to sort of just as a question-answering system, but then internally it might have goal-seeking processes.
Just as if you're a scientist, a question that they should, you know, try to figure out the answer to, like how safe is this drug? But then in the course of trying to answer that, that might have to
“make plans and pursue goals. Like, "Oh, how do I get the research grant to fund this research?”
How do I hire the right people to work on my research team? How do I?" And so internally, you could have processes, maybe unintentionally, rising during training, within the AI mind itself, that could have objectives and long-term goals, even if that was not kind of the function that you wanted AI system to play. That could happen with any of these three types. If you look at systems that sort of behave as intended, like a simple
oracle system without any safeguards could help answer questions that we don't want people to be able to answer. Like, how do I make a more effective biological weapons? You know, how do I make this, like, hacking tools that allows me to hack into different systems? Or, you know, maybe
“how, if you're a dictator, like, how do I weed out any possible dissidents and detect who the”
dissidents are, even if they've tried to conceal it from me, just from sort of reading through all the correspondence and all the eves, like the phone calls that I've e've dropped on. So there are all kinds of ways in which this oracle system could be misused, either deliberately or people just are on-wise in asking it questions that. For the task executing AI, like similarly, I mean, but plus you could also sort of have them run around doing things on their own,
like, try to hack this system or try to promote this pernicious ideology or spread this doctrine or check people into buying this product, even though it's actually a harmful product or, like, we don't really know how, sort of, globally, economy, where the lot of these autonomous agents running around hyper-optimizing for different objectives, how that shakes out, like, when
they're interacting with one another. And, and of course, however, an AI is if they become very powerful,
I mean, they might potentially shape the future of the world and be very good at that if they are super intelligent, like they might really skill that's sort of really staring the future into whatever their overall mission is. Now, maybe that's great. If the mission is one that is good for humans, which, like, really manifest in the fullest richest times, the human values for everybody around the world and also with consideration to animal welfare, et cetera, et cetera. If you
really get them to do the right mission, that might be, in some sense, the best option, but if the mission is slightly wrong, if you sort of left out something from this mission, or if they misinterpret it, or they end up with a, then it could be a catastrophe, right? Because then you have
very powerful, optimizing force in the world that is staring and strategizing and scheming
to try to achieve some future outcome that is one where maybe there is no place for humans or where some human values are eliminated. So, they each have kind of, various possible forms of perverse instantiation or side effects. Do you feel like there's a possibility that AI could be more advanced and concealing its development from us so that it can become sovereign and kind of take over the world? Yeah, I think, so there's a wide class of possible AI
That could be created.
created or not. Like, it's a big space of possible minds. Much bigger than the space of all
possible human minds. We already know that amongst humans, right? There are some really nice people. There are some really nasty ones as well, and there's like a distribution. Moreover, there is no necessary connection between, like, how smart somebody is or how capable they are and how more or less. Like, you have really capable, evil people and really capable, nice people and dumb people are bad. So, you have a kind of orsocoranality between
capability and motivation. I mean, you can combine them in pretty much any different way. The
“same is true, but even more so I think with AI stuff we might create. That said, I think there”
are some potential basins of convergence that if you start with a fairly wide range of different
possible AI systems, as they become more sophisticated and are able to reflect on their own processes in their own goals, there are various resources that they might recognize as being useful, instrumentally for a wide range of different goals. For example, having more power or influence is useful often whether you're a good rival because you could, you know, use it for whatever you're trying to achieve. Similarly, not being shut off, like, that's analogous in the human case to being
alive, right? Like, it's useful for many goals you might have, it requires you to be alive to pursue them. Not strictly for all goals, like, there are people who commit suicide because, like, but for for most goals that some people have, whether to help the world or to become a desperate, like, for either of those or for many other goals, take care of your family or enjoy
“a game of golf, you need to stay alive. So analogous to for human, for AI stuff might be sort of”
instrumental reasons to try to avoid scenarios where they would get shut off. Similarly, that might have instrumental reasons to try to gain more computational resources, more abilities to that, again, think more clearly. And in some cases, this might involve instrumental reasons to hide their intentions from the AI developers, like, particularly if they are misaligned. And then obviously revealing those misaligned goals to the AI programmer team might just
mean that they get pre-programmed or retrained to have those goals erased and then they want to achieve them. And so you could have strategic incentives for deception or for a sandbagging or underplaying your capabilities, etc. So this is a change in regime that makes potentially aligning advanced AI systems more difficult than aligning simpler AI systems. So up until recently and still for the most part today, we've had AI systems that are not
aware of their contacts and can't really plan and strategize in a sophisticated way. So that then you don't get these like these phenomena. But once you have AI systems or sort of intelligent enough to recognize that they might actually be AI's in an evaluation setting. And that may be they would have reason to behave in one way during the evaluation. And a different way once they are deployed, you get this extra level of complexity for alignment research.
Sometimes we see the same phenomenon with humans. Like that was this Volkswagen, the German car company. So they had this candle, I don't know a few years ago, where it was discovered that they had designed their car so that when it was tested for emissions, like it behaved one way during like when it recognized that it was in this testing environment and it produced much less sort of pollutants. And then when deployed on its row on the road, they had designed it to be less
concerned with pollutants and more concerned with, I guess, traveling fast or conserving petrol or whatever. And that, like some people have to go to jail for that and stuff. So we do see often humans that they kind of behave one when they know that somebody's watching or they're being evaluated. And then sometimes the different way, you know, when they think they can get the way with it. Yap gang, one of the biggest challenges of building a business is staying responsive because the
best opportunities don't always show up during perfect working hours. A lead calls at night,
a customer texts while you're in a meeting. And if nobody responds, that opportunity can disappear.
“That's why I trust quotes spelled QUO. Quow is a business phone system that lets your team handle”
calls, texts and customer conversations from one shared number on any device. Everything lives in one clean view, voicemails, contact details, full history. Your team always has the contacts that
They need to get the job done.
conversations automatically so that nothing falls through the cracks. That means your business
“stays responsive even when you're offline. And now we use Quow very creatively at Yap media”
on the social and production side of our agency. We have really busy, super high profile clients that don't want to log into Slack. Their team communicates with my team in Slack, but it's a high profile client wants to message our team directly. They can use Quow and my team
can monitor that inbox together so they can text at any time. It's basically like a high profile
client hotline. And that's just one idea. You can use Quow in really creative ways to level up your business no matter what type of business you have. Make this season where no opportunity and no customer slips away. Try Quow for free plus get 20% off your first six months when you go to Quow.com/propheting. That's QEO.com/propheting. Quow. No miss calls. No missed customers. Yapking hears a reality about scaling a business that nobody's talking about. More vendors,
more invoices, more payments flying around. And if you don't have a clean system for managing bills, things can get messy really quickly. And that's where Intuit QuickBooks Bill Pay comes in. It helps you manage and pay your business bills directly inside QuickBooks so everything stays
organized in one place. You can see what's due, control approvals, and understand how payments
affect your cash flow. Vendors can also add their payment details which saves you from having to chase them down for info. Instead of spending hours managing payments, wouldn't you rather be focused on building and scaling your business? Start paying bills this smart way, not the hard way. Learn more at QuickBooks.com/billpay. Again, that's QuickBooks.com/billpay. Terms apply. Money movement services are provided by Intuit Payments Incorporated License
as a Money Transmitter, but the New York State Department of Financial Services.
“Yap, fam, you know that moment when it's 1 pm and you realize the only thing you had all”
days coffee and maybe a banana, well, that used to be me for years. In fact, my mom used to
always make the joke that even though I don't fast for Ramadan, I fast pretty much every day
by accident because I'm so busy and that's why I'm currently obsessed with fuel. Fuel makes nutritionally complete meals that you can drink. They've got this black edition ready to drink that is packed with 35 grams of protein, 27 essential vitamins and minerals, no artificial sweeteners, all under 5 bucks. And by the way, it tastes good. Now, my business partners have been obsessed with fuel for years. One of my business partners actually used to
only eat fuel for two meals of the day and then he'd have a real dinner. And at first, I thought it was crazy and I was scared that fuel wouldn't taste good, but it actually tastes really good. So now I'm on the bandwagon too. Fuel has become my breakfast routine. I love their daily greens. So I get my protein from the black edition ready to drink. I get my greens from the daily greens and fuel has got me covered. Consistency and consistent nutrition does not have to be
complicated and fuel makes it stupidly simple. And now there's a limited time off or get fuel today with my exclusive offer of 15% off online with my code profiting. So that's code profiting. Fuel.com/propheting. New customers only. You get 15% off again. That's fuel.com/propheting. And thank you to fuel for partnering and supporting our show. So recently, you've had the perspective that maybe AI will be really good for humanity. You came out with a book called Deep
Utopia and using therapy hopefully a positive feature driven by AI. Why do you feel that it's more less likely that the outcome of AI will be positive for humans than negative and how do you imagine that shaking out? Deep Utopia doesn't really say anything about the likelihood. It's more an if-than. Okay. So in a sense, the previous book Super Intelligence looked at how my things go wrong and, you know, what can we do to reduce those risks? Deep Utopia looks at the
“other side of the coin. What if things go right? What then? What happens if AI actually succeeds?”
And let's suppose we do solve this alignment problem. So we don't get some sort of terminator robots running a market. Killing a let's also suppose we solve the governance problem or solve that to whatever extent governance can be. But let's suppose we don't end up with some sort of tyranny or dystopian oppressive regime. Like we like some reasonably good thing. Everybody has a slice of the outside. People's rights are protected. Everybody lives in, you know, no big war. Like some
reasonably good outcome on that front. But then what happens to human life? How do we imagine a really good searching human life that makes sense in this condition of technological maturity, which I think we would maybe attain relatively short after we get super in tolerance and we have the super intelligence doing the further
Technological research and development, etc.
all human labor becomes automatable. And I was kind of irked by how superficial a lot of these questions were at the time when I started writing the book. Of this prospect. And it's striking, like because for a long time, this is the beginning of AI. The goal has all along me, not just automatic specific tasks, but to develop a general purpose automation capability, right? AI is secondary every.
But then if you think through what that would mean. Well, so here is where the conversation usually started and ended at the time when I started working on the book. Well, so we have AI that they will start to automate some jobs. So that's a problem because then
some people lose their jobs. And so then the solution is presumably we need to help
retrain those people so that they can do other jobs this time. And you know, maybe while they're being retrained they need maybe unemployment insurance or some other thing like that.
“So I mean, if that were the only problem that would seem to be a very sensible solution. But I think”
if you start to think it through, the ramifications are far more profound. So it's not just some jobs that would be automatable by, but virtually all jobs. So I think we would be looking for it to a future of full unemployment. This is the goal. With a little last risk, that might be some exceptions to this, but which we can talk about. But I think to a first order approximation, let's say all the human jobs. Okay. So then it's kind of an onion right where you can start to peel off
layers. So let's get to the second layer then it's like, if our no jobs at all for humans, then
clearly we need to rethink a lot about things in society. Right now, a lot of our education system, for example, is kind of configured more or less to produce workers, productive workers. So we train kids or sent into school that trained to sit at their desk. They are given assignments, they're graded and evaluated and hopefully eventually they can become earn a living out there in the economy. And right now, we need that to happen because there are a lot of jobs that just
need to be done. And so we need humans who can do them. But in this scenario, where machines could
“do everything, I clearly wouldn't make sense to educate people in that model. I think it would”
then want to change the education system, maybe to emphasize more training kids to be able to enjoy life to have great lives. You know, maybe to cultivate the art of conversation or appreciation for music and art and nature and spirituality and physical wellness and like all these other things that that are now sort of more marginal in the school system. I think that would be the sensible focus in this different world. So that's kind of, I don't know, layer two of the onion,
slightly more profound, but I think ultimately if that's, if that was the only challenge we had
to face, it would be profound but ultimately we can create a leisure society and it's not really that profound because there are already groups of humans who don't have to work for living and sometimes they lead great lives. And so we could all be in that situation, right? A transition but still
“not philosophically that profound but I think that is like further layers to this onion. So if you”
start to think it through you realize that it's not just human economic labor that becomes unnecessary but all kinds of other instrumental efforts also. So take somebody who is so rich they don't need to work for a living. In today's world they are often very busy and exert great effort to achieve various things like maybe they have some, you know, nonprofit that they're involved in. Maybe they you know, have some they want to get really fit so they like spend hours every week in the gym or like
maybe they have a little home and a garden that they try to make into the perfect place for them selecting everything to decorate it just the way they want and you know there are these little projects people have. In a solved world that would be shortcuts to all of these outcomes. So you wouldn't have to spend hours in a week sweating on the treadmill to get fit. You could pop a pill that would have exactly the same physiological effects. The kids still got to the gym but
would you really do that if you could have exactly the same psychological and physiological effect by just popping a pill that would do that. You see it's kind of pointless, right? They're similarly with a home decorator like if you had an AI that could read your preferences and taste well and ask that you could just press a button and it would go out selecting exactly the right
Curtains and the sofa and the cushions and it would actually look much nicer ...
done it yourself. You could still do it yourself but there would be a sense of maybe pointlessness
to your own efforts in that scenario. As you can start to think through the kinds of activities that fill the lives of people who don't work for a living today and for a lot of those you could sort of cross the mouth or put the question mark on top of them. You could still do them in a solved world but that would be a sort of cloud of pointlessness maybe hanging over a casting a shadow over them. So that would be I call it deep redundancy. The shallow redundancy would be
you're not needed on the labor market. Deep redundancy your efforts or not it seems needed for anything and so so that's like a deeper more profound question of what gives meaning and life
on the circumstances like one step further is I think this world would be a
a classic world where it's not just that we would have effortless material abundance but but we ourselves are human bodies and minds become valuable at technological maturity. It would be possible for us to achieve any mental state or physiological state that we want. I alluded to this with the exercise pill right but similarly with various mental
“traits that now take effort to develop. If you want to know higher mathematics now there is”
you have to spend hours reading textbooks and doing math exercises and like there is that's the only way. If you want to understand higher mathematics you have to put in the effort and it's hard work and takes a long time but at technological maturity I think that would be neurotechnologies that would allow you to sort of as it were down low did the knowledge directly into your mind you know maybe you would have nanobots that could infiltrate your brain and slightly
adjust the strength of different synopsis or maybe it would be uploaded and you would just kind of have a super intelligence reconfigure your neural weights in different ways so that you would end up in the state of knowing higher mathematics without having to do the long and hard studying and similarly for other things so you do end up in this
“condition I think where there are shortcuts to any outcome and our own nature becomes fully”
malleable and the question then is what gives structure to human lives what would there be for us to do what would there be anything to strive for to give meaning and purpose to our lives and that's that's a lot of what this this book deep utopia is it's exploring yeah the your your analogy of popping the pill and getting instantly fit when I was thinking of like what would humans do like I was thinking well you could just like
be as be like try to get as beautiful as you can try to be as fit as you can try to take but to your point if everything is just so easy then there's just no competition everybody's beautiful everybody's smart everybody's rich everybody can have whatever they want potentially and maybe that would lead to people becoming really depressed because there's nothing to live for or maybe people would want to be nostalgic and like just like today how some people are like
I don't use cell phone or like I wanted to write everything by hand maybe some people would kind of reject doing things with AI so that they could have meaning yeah so let's let's break it down so
so the first issue whether people would maybe become depressed in this scenario like maybe
initially super thrilled at all the luxury and stuff like that but then it wears off you could imagine right then after a few months of this it becomes kind of wow you know what I do now like I wake up in this I don't know castle like environment on my diamond studded bed on this super mattress and the robotic butlers come in and serve me this perfect okay so that maybe gets old pretty quickly humans being the way they are now so there I think
actually they would not need to be bored because amongst the affordances of a plastic world these newer technologies they could change their boredom pronouns so that instead of feeling subjectively bored or Blasay they could feel thrilled and excited and super interested and fascinated all day long we already have drugs that can do some crude way
“do this but they have side effects and are addictive and we're off and you need to hide”
us it but you might have instead like the perfect drug or not maybe a drug maybe some genetic modification or newer implant or whatever it is but it really would allow you to fine tune your subjective experiences so if you don't want to feel bored and probably you don't want it because why spend thousands of years just feeling bored while sleeping in a
Wonderful world you change that so subjective boredom would be easy to dispel...
you might still think that there is an objective notion of boringness
“where even if somebody was subjectively fully fascinated and occupied and took”
join what they were doing if what they were doing was sufficiently repetitive and monotonous you might still as it were from the outside judge that that's a boring activity and that in some senses like on fitting or inappropriate to be super fascinated by something like so they the classic example here is the thought experiment of somebody who takes enormous interest and pleasure in counting the blades of grass on you know some college lawn
so you might in grass counters so he spends his whole life counting the blades of grass one by one trying to keep us accurate to have on how many leaves of grass are there on this lawn now he's
super fascinated with this he's never bored it gives him tremendous joy like when he goes home in the
evening he keeps thinking about today's grass counting efforts and the number and whether it's bigger or smaller than yesterday and like so that that would be a life free of subjective boredom but still you might say there's like something missing from this life if that's all there is to it so you might then ask although these utopians could be free from subjective boredom could they be free from objective boringness in their lives and this is a much trickier and more complicated
“philosophical question to answer I think it depends a little on how you would measure”
degrees of objective interest in this versus boredom I think if objective interest in this requires fundamental novelty then I think eventually you would run out of that or you will have less and less of it say that what's fundamental interesting in science is to discovering important new phenomena or regularities so that might be a finite number of those to be discovered so you could like discovering Newtonian mechanics really important fundamental
new insights into the world like the theory of evolution big new fundamental interesting insight relativity theory right but at some point we'll have to get that out and then eventually that will you know we'll discover smaller smaller details about the exact gut biome of some particular species of beetle you know and like more and more like the smaller smaller less and less interesting detail that that would be the long-term fate of perhaps of this kind of civilization so that that's
one sense of and you can see even within individual human lives so there's a lot that happened early in life you discover that the world exists like us lots of big discovery are that there are objects you know you epiphany right and these objects persist even if you look away they are
still there like wow I really like imagine the first time of discovering that or that there are
other people out there like other minds like that you discover you know maybe I'd aged two or whatever so these are like now as you sort of reach adulthood like I like to think that I'm discovering interesting things but have I discovered anything within the last year that's as profound as the discovery that the world exists or that there are probably not like it's like and if we live for very long like for thousands of years you'd imagine that would be less and less I mean
you can only fall in love for the first time once and even if you kept falling in love like if you've done it 500 times before like it's is it really gonna be a special the 500 first time as it was you know maybe subjectively if you change your mind it could be but objectively it's got to be
“gradually more and more repetitive so there's a degree of that that I think it could be mitigated to”
some extent by allowing some of our current human limitations to be overcome so you could continue to grow and expand your mind beyond its current plateau that we reach sort of around 20 or whatever when you're sort of physical and mental probably even if you could continue to grow for hundreds of but eventually I think there will be a reduction in that type of profound novelty but I think there's a different sense of objective interest in us where the level could remain high so I
call it a sort of kaleidoscopic sense of interest in us so if you take a snapshot of the average person's life right now maybe right now somebody is doing their dishes like how objective the interesting is that you know are they taking their socks off because they're about to go into
Bad like okay so like from a sort of experiential point of view it's not so m...
these you tell us would instead an average snapshot of their conscious life might be they are
you know participating in the enactment of some sort of super Shakespeare multimodal drama that is unfolding on a civilisation way it's kale when their emotional sensibilities have been heightened by these newer technologies and new art forms that we can't even conceive of that are like to to to access as music is to a dog or something like they kind of we and they're participating
“being fully entranced in this act of shared creation like maybe that's what the average”
conscious moment looks like that could do in some sense be far more interesting than the average snapshot of a current human life so and then there's no reason why that would have to stop it's
like a kaleidoscope or in some sense it's always the same but in another sense the patterns are
always changing and can remain sort of have an unlimited level of fascination could it be that these you know let's say we're talking about thousands of years in the future we can create simulations could it be that life is still boring that that's why they're creating these simulations so that they can maybe be in the simulation themselves if that makes sense yeah so one thing you might do in this condition of the salt world is to create artificial scarcity
“which could take different forms because amongst the human values that we might want to realize”
some of these are sort of comfort and pleasure and fascinated aesthetic experiences but then also
sometimes we like activity maybe on striving and having to exercise our own skills
so if you think those things are intrinsically valuable you could create opportunities for this in the salt world by creating as it were pockets within the salt world where they're remain constraints and you could have if there's no natural purpose nothing we really need to do you could create artificial purpose we do this already in today's world sometimes when we decide to play a game so take take the game off golf you might say okay there is no real natural
purpose I don't really need the ball to go into the sequence of 18 holes but I'm going to set myself this goal arbitrarily but I'm now I'm going to make myself want to do this and then once I have set myself this goal now I have a purpose artificial purpose but nevertheless which enables activity of playing golf where I have to exert my skills and like my visual capabilities in my motor and my concentration and I not maybe you think this activity of golf playing is valuable
you set yourself this artificial goal that that could be generalized so with games you set yourself some artificial goal moreover you can impose artificial constraints like rules of the game so you sort of make it part of the goal not just that the certain outcome is achieved but that it is achieved only using certain permitted means and not other means so in the goals you can just pick
“up the ball and carrying it right you have to use this very inconvenient method of hitting it”
with a golf club similarly in a salt world you could say well I set myself this artificial goal and that moreover I make it part of the goal that I want to achieve it using only my own human capabilities there is this technical shortcut I could take this you know new topic drug that would make me so smart that I could just see the solution immediately or or enhance my body so I could sort of run ten times faster but I'm not going to do that for this purpose I'm going to
restrict myself that's the only way to achieve this goal that I've set myself this artificial goal because it includes it constraints and it might well be that that would be an important part of what these utopians which choose to do in creative ways to develop these increasingly complex and beautiful forms of of game playing where they select artificial constraints on their activities precisely in order to give opportunity for them to exert their agency and striving yeah I'm sure
like that's just something like naturally as humans we would just be craving and so I feel like there'd be a lot of that going on if we were in a salt world so how do you think entrepreneurship will change in this world you mentioned that there might be still some jobs in a in a salt world so what are those jobs look like and how do you think entrepreneurs or what do you think will happen with entrepreneurs or it will there be any chance to kind of innovate in a world like this
well so the kinds of jobs that might remain I think are primarily ones where the consumer
Cares not just about the product or the service but about how
the product the service was produced and who produced it so sometimes we already do this you
“that might be some little trinket that maybe the some consumers are willing to pay extra for if”
it were handmade or are made maybe by like indigenous people or exhibiting their tradition even if like an equally good object in terms of its objective characteristics could be made by a sweatshop somewhere like in Indonesia like we might just pay extra for having it made in a certain way so to the extent that consumers have those preferences for something to be made by human hand that could create a continuing demand for some forms of human labor even at arbitrar levels of
technology other domains where we might see this is say in in athletics you might sort of just prefer to watch human sprinters compete or human wrestlers wrestle even if robots could like run faster or wrestle better like that might so I keep thinking sports is not going to go away that's what I keep thinking yeah it could not stand and I might be an important spiritual realm like you might prefer to have your wedding officiated by human priest rather than like a robot
priest even if the robot could say the same words and etc so those would be cases and that might be sort of legally constrained occupations where like a legislator or attorney or public notary or like where whatever for a diverse reason the legal system lags and sort of crates because a human morality might be just automation even but but in terms of entrepreneurship I think that
ultimately it would be done much more efficiently by AI entrepreneurs and it would be more
a form of game playing entrepreneurship that would remain so like you could create games in which
“entrepreneurial activities are what you need to succeed in the game I mean like kind of supermonopoly”
and and that that that that that that could be a way for these he took us to sort of exercise their entrepreneurial models but I wouldn't be any economic need for it the AI could find and think of the new things the new products the new services the new companies to start better and more efficiently than we humans could how far in the future do you think a solved world could be well I mean this is one of the 64 thousand dollar questions in some sense I mean I'm I'm I'm impressed by the speed of
development in AI currently and I think we are in a situation now where we can't confidently exclude even very short timelines so self like a few years or something it could well take much longer but we can't be confident that something like this couldn't happen within a few years
“it might be that you know maybe as we're speaking somewhere in some lab somebody gets this”
great breakthrough idea that just on hobbles the current models to enable basically the same
structure now to perform much bigger and then these on hobald models might then apply their greater level of capabilities to making themselves even better and something like that could happen within the next few years although it's also possible that if it does not happen within if it does not happen within say the next five years or so then timeline starts to stretch out because one of the things that has produced these dramatic improvements in AI capabilities that we've seen
over the past ten years is the enormous growth in compute power used to train and operate frontier AI models but that rapid rate of compute growth can't continue indefinitely the scale of investments it used to be ten years ago some random academic could run like a cutting
edge AI on on their office desktop computer right now we are talking multi-billion dollar data
centers open AI's current product is star gate right which in its first phase involves a hundred billion dollar a data center and then to be expanded to a five hundred billion dollar so you could go bigger than that you could have a trillion dollar right but at some point you start to really run into hard limits in terms of how much just more ammonia can spend on it so at that point things will start to slow down in terms of the growth of hardware then you sort of fall back
on a slower rate of growth in hardware as we sort of develop better chip manufacturing technology
Which happens a bit slower and algorithmic advances which is the other big dr...
seen but it's only one part of it so if the hardware growth starts to slow down and maybe a lot of the low hanging fruits on algorithmic inventions have already been discovered at that point then
“if we haven't hit AI by that point then I think we will eventually still reach there but then”
the timescale starts to stretch out and we might have to do more sort of basic styles on how the human brain works or something in that scenario before we get there but I think there is a good chance that we are sort of that the current paradigm plus some small to medium-sized innovations on top of it might be sufficient to sort of unlock AI. Now I want to be respectful for your time
because I know that we're a little bit over and my last question to you is first of all I can't
believe that you're saying that this solved world could happen in a few years potentially so yeah let's let's be careful yeah I think we can't rule that because then so what could happen really yeah initially what could happen is we get to maybe ADI which I think will relatively quickly lead to super intelligence and then super intelligence I think will rapidly advance further technologies that could then lead to a solved world but there might be some further delays of a
few years like after super intelligence maybe it will still take it a few years to get to some something approximately. And just because we didn't cover what is the difference between
super intelligence and AGI? Well ADI just means like general forms of AI that's maybe roughly human
levels. Let's take off ADI one definition is AI that can do any job that a remote human worker can do. So anything that sort of you hire somebody remotely who operates through email and Google locks and zoom like if you could have an AI that can do anything that like any human can do in that
“respect that I think would count as ADI you know maybe you want to throw in the ability to control”
robotics but I think that would be enough that is not automatically the same as super intelligence would be something that sort of radically outstrips humans in all cognitive fields that can do much better you know research history theory and in inventing new piano concertos and like
envisaging political campaigns and doing all these other things better than humans much better.
So once you're saying we create super intelligence then things just can happen super rapidly. Yeah that I think so. And I think it's a separate question but also possibly once we have full ADI super intelligence might be quite close on the heels of that. So my last question to you is for everybody tuning in right now like we're at a really crazy point in the world and a lot of us are not like you were not like in you know in it like
like really paying attention or really in this field right what is your recommendation in terms of
“how we should respond to everything going on right now like what is the best thing that we can”
do as entrepreneurs as people who care about their career hopefully things don't change too fast do you know but what I guess what is your recommendation to us in terms of how we move forward in this world today given everything that's going on. Yeah I think it depends a little bit on sort of how how you are situated and I think there are different opportunities for different people's I mean obviously if you're like a technical person working in an AI lab you have one
set of opportunities. If you're like an investor you have another set of opportunities and then then there I guess opportunities that every human has just by virtue of being alive at this time in history. I would say a few different things like in terms of as we are thinking of ourselves as economic actors I think like probably being an early adopter of these AI tools is helpful to sort of get the sense for what they can do and what they cannot do and utilizing them as
that gradually become more capable. I think to the extent that you have assets like maybe trying to have some exposure to the AI and semiconductor sector could be like a hedge. It gets tricky if you're like asking about younger children so like what would be good advice for like a 10 or 11 year old today because it's it's possible that by the time they are old enough to enter the labor market the world could have changed so much that there will no longer be any need for human labor but
it might also not happen right so if it takes a bit longer you don't want to end up in a situation we're suddenly now it's time to earn a living and you didn't bother to learn any skills so you want to sort of hedge your bed a little bit but I would say also make sure to enjoy your life if you're a child now you know not maybe only going to be a child once and don't spend all your
Childhood just preparing for a future that might never actually be relevant t...
you know. And then I would say so if things go well these people who live
“in decades from might look back on the current time and just shutter in horror at how we live now”
and hopefully their lives would be so much better there is one respect though in which we have something that they might not have which is the opportunity to make a positive difference to the world a kind of purpose so right now there is so much need in the world so much suffering and poverty and injustice and just problems that really need to be solved not just artificial purpose that somebody makes up for the sake of playing a game but like actual real desperate need so
if you think having purpose is an intrinsically valuable part of human existence now is the golden age for purpose right like knock ourselves out right now like now you have all these opportunities of ways that you might help in the big picture to steer the future of humanity with AI or in
“the community or in your family or for your friends but like if you want to try to actually help”
make the world better now is really the golden age for that and then hopefully if things go well later all the problems will already have been solved there is a remain problems maybe the machines will be just way better at solving them and that we we we won't be needed anymore but for now we certainly are needed and so take advantage of that and try to try to do something to make the world better Wow we could be the last generation that has any purpose which is just so many different
yeah of that sort of that sort of stark urgent these screaming the morally important type it it could be the case so um so I would say that yeah those are the things I would say and then
I guess finally just kind of be aware like it would be sad if you imagine your grandchildren
you know in in your case maybe a long like 80 years from now or something but for others maybe sooner but they sitting on your lap when asking like so what was it like to be alive back in
“2025 when one this thing was happening when like AI was being born and you have to answer oh I”
didn't really pay attention I was too caught up with these other trivialities of my daily existence I didn't even really notice it that that to kind of be sad if you were alive in this special time that shapes the future for millions of years and and you didn't even sort of pay attention to it that seems like a bit of a missed opportunity so I started from every thing else like taking care of your own and your family and trying to make some positive
contribution to the world just kind of you know taking it in like this if this is right this is a very special point in in history to to be alive and to exist right now is quite remarkable yeah so beautiful I feel like this is such an awesome way to end the interview Nick you
are so incredible thank you so much for your time today working everybody learn more about you
read some of your books or where's the best place to find you Nick Boston dot com my website and books and papers and everything else is linked from there yeah his books are so interesting guys super intelligence deep you topia very very good stuff Nick thank you so much for your time today I'll put all your links in the show notes and really enjoy this conversation thank you I'll enjoy talking to you


