Welcome to the Making Sense Podcast, this is Sam Harris.
Just a note to say that if you're hearing this, you're not currently on our subscriber feed, and will only be hearing the first part of this conversation.
In order to access full episodes of the Making Sense Podcast, you'll need to subscribe at SamHarris.org. We don't run ads on the podcast, and therefore it's made possible entirely through the support of our subscribers. So if you enjoy what we're doing here, please consider becoming one.
I am here with Nicholas Christakis, Nicholas, thanks for joining me again. Sam, it's so good to see you again. Yeah, great to see you.
Yeah, we don't see each other in person enough or even on the internet enough, but I always
love talking to you. So let's just jump right into it. I'll remind people you are the director of the Human Nature Lab at Yale. You are both an MD and a sociologist and have studied many interesting topics related to, I guess, how human beings and now technology affect one another, and we have too much to talk about.
I think I want to start with the question of, I guess I just want your post-mortem on the present. This last decade, what has technology, specifically information technology, done to us?
Yeah, so I think we are going to see the other side of our present dilemma. I think it is going to take half a generation to really be on the other side of it, because I think we've dug ourselves into quite a hole.
I share the opinion, I suspect, with you, and certainly with people like Jonathan Haidt and others, that the kind of technology that we've invented, or the turns that our technology, our communication technology, has taken in the last 10 years have so far been quite harmful to us. Whatever other benefits they've had, I think they've contributed to this polarization, they've contributed to anomie, they've contributed to some of the mental health crises we've had. I think they've also led to a surveillance state, not just abroad, but shockingly in our own country, where these technologies are being used in ways that I would regard as, you know, quasi-totalitarian, or at least pose the threat of that.
I had a friend long ago, I still have him, he's still a friend of mine. And years ago he told me he didn't use credit cards and, you know, he refused to get a cell phone, and, you know, he was trying to be off the grid because he didn't want to be surveilled.
And I thought he was a bit of a Luddite, and yet now, you know, I worry that, like, my every move is being tracked by someone.
So to the extent that you are arguing, and I think you are, that some of what ails us at present is due to some of these communication technologies and the ways they've been grafted onto very fundamental human desires and exploit those desires, to the extent that we grow as a society to cope with those threats, I think we will look back at this period as just that: one in which we, you know, yielded to, and were adversely affected by, and ultimately, let's say, overcame some of these threats.
Not too dissimilar, you and I remember when you couldn't swim in the Boston Harbor, you know, the Charles was polluted, the air was polluted, and we sort of cleaned everything up in some sense. We will clean everything up in that way, but it'll take some time.
So what is your personal engagement with social media these days?
How do you use it, if you use it?
Well, I got very disgusted with Twitter, and I didn't abandon my account because I didn't want anyone to squat on it. The reason I went to Twitter was that I used it as a source of information, like it was access to experts in a way that was, you know, really, really helpful to me. I curated a list of people with diverse expertise and beliefs, and followed them, and I really enjoyed it. And then I felt like it wasn't just a program for me to take from the commons, I had to give to the commons. So I tried to generate content that would, you know, reflect my expertise or my ideas and be useful to others. But in the last few years, I found it to be just incredibly toxic, and my feed, even when I just tried to follow only my own people, became full of garbage, a lot of trolling, a lot of mostly far-right conspiracy theories, also some left craziness, of course, too. I just couldn't use it anymore.
So I basically stopped using Twitter, and I moved to Bluesky a couple of years ago, where, I mean, the politics are another issue, but in terms of the science, you know, I follow about 600 accounts, mostly scientists, and I get good scientific content, and I have, you know, reasonable interactions. I have a tenth of the followers I used to have, and that's fine. Facebook, I don't really use; LinkedIn, I don't really use. I just started a YouTube channel, trying to advance the public understanding of science, called For the Love of Science, but I don't really know how to use YouTube, so we're just doing videos once a week.
I'm really just basically on Bluesky for science, that's all I'm doing nowadays.
Well, I want to get back to the reputation of science and to your efforts on YouTube in a moment, but, um, just to stay with social media and what it's doing to us, and the toxicity and conspiracism and trolling that you are familiar with, and that everyone listening to this will be familiar with: do you have any sense of what the remedy is? I mean, you know, my personal remedy was to just delete my Twitter account and to now, you know, only, um, in extremis look at a Twitter feed, just because there's some breaking news that is best captured, you know, there.
But even that, Sam, do you remember, do you remember that guy who was an expert on military tires?
Do you remember that whole thing? No. You know, I'm sorry, I can't remember if it was when the Ukraine war started, I think, and there was some guy who was an expert in the maintenance of military vehicles, and he sent a long thread out about how the trucks hadn't been moved around properly, the tires hadn't been rotated, how all the tires were exploding. I had no idea there was such a person, and I read his whole thread and I was like, oh, that's so interesting. All of that content, that expertise, as far as I can tell, is gone from Twitter.
Has it just been inundated by AI slop, or what is it, how's it gone?
Well, first of all, whatever the algorithm is, I don't get that content.
The AI slop is a serious problem. My family teases me, I'm known to be particularly gullible, and actually my rendering of this is that I'm not stupid and I'm not even gullible. I'm trusting. Yeah.
Yeah. You're a good person, in other words. Exactly. Exactly. That's my story and I'm sticking with it.
But the thing is, somehow these algorithms figured out that I like to look at, like, baby elephants. Initially, I got real, I think, you know, like nature photos of baby elephants. And then I think the algorithm sort of started feeding me slop, like, you know, a hippopotamus, or a crocodile attacks a baby elephant.
Yeah, yeah, yeah. And it gets saved by a rhinoceros. Yeah. Yeah. Exactly.
The mommy elephant comes and stomps on the, uh, the crocodile. But it's totally, it's all fiction, and initially I was really taken in by this stuff. So there's a ton of AI slop; that's a problem.
There's, I mean, it's just, it's useless, honestly, to me at least.
So, I mean, I have nothing particularly good to say about the environment on Twitter right now. And it's a multiplicity, you know, a profusion of problems from my perspective. Mm-hmm. Plus, I wasn't so happy when I came to understand that all of our personal typing and stuff on Twitter basically belongs to X and could be used to train AI algorithms and so on. So none of that is appealing to me.
Mm-hmm. Well, I think as we're speaking, there's a, um, a lawsuit, I think the first of its kind, against social media companies in California.
You mentioned Jonathan Haidt; you know, he's been obviously instrumental in bringing awareness to this issue, especially the harm done to teenagers by social media. What is the path forward? You think it's a successful series of lawsuits, a revocation of Section 230, just a, a virtuous cycle of social contagion where we all begin to change our minds at once and influence the norms around using social media? Or is it just that AI slop itself will provide some cure, because for every video you see, your first question here until the end of the world is, you know, is this even real? And we'll begin to no longer care what's being presented in these non-gatekept channels.
So I have a few things to say about that.
First of all, it's known, as everyone listening knows, that anonymity contributes to a lot of the problems. And, you know, this is why torturers used to wear masks. And people would be disinhibited when they went to masked balls, for example, you know, these fancy masked balls you imagine from hundreds of years ago, you know, that the aristocracy had. It's disinhibiting to hide your face. And this is also why people in mobs behave awfully: they have a kind of practical anonymity in the crowd, and it just sort of fuels the process. So I think that humans, of course, behave worse when they're anonymous or pseudonymous. And now I have a hard time arguing the next point.
My problem is that I think that in any entity where you can't be anonymous, behavior is going to be better. And on the other hand, I don't necessarily want to abolish anonymity either, because I think that's a tool for totalitarianism. So I think there will be social media companies which afford people the opportunity to be non-anonymous, and which privilege non-anonymous accounts, which I think will help. So tools to afford people the option, and also to exploit non-anonymity, will help.
Like the old blue checkmark on Twitter was.
Yeah, yeah, yeah. Another thing you mentioned, Section 230. I struggle with this as well, because on the one hand, I do think that 230 was crucial, actually, for the emergence of the internet. I do think that there is an argument to be made that these social media companies are just carriers and shouldn't be responsible for their content. On the other hand, I also think, you know, washing their hands of the content entirely doesn't make much sense either, and allows them to sort of wink-wink and just ignore horrible uses taking place on their platforms.
So I actually don't have an answer to that struggle either. But what I do think is going to happen, just as you said,
is I think people, and maybe this will be accelerated by AI and AI slop, I think people will learn, and I think, ironically, we may have a kind of return to a privileging of reputable sources. Right, you know, we've migrated so far away from, you know, the evening-news-with-Dan-Rather kind of thing to everyone is an expert, and, you know, there's all this kind of good stuff, but also crap, online. I think, ironically, people may be willing to pay a bit more for reliability. You may not believe it unless you read it in The Economist, you know, then you'll believe it. You'll not believe whatever you see otherwise online.
So it may re-privilege, you know, sort of credible, real voices.
You know, I know you've done some research of late on AI and how it changes not just human behavior with respect to technology or information sources, but behavior toward one another, right? It alters the mechanics of human cooperation on some level.
Well, you know, we can take that strand if you want, but, I mean, just generally speaking, what are your thoughts about AI and where all of this is headed for us?
So I want to tell a brief toy story, or toy model, or toy example, on the question you just put, but before I tell that, I want to go on a slight digression, because I struggle a lot, as I suspect you do, with, you know, what is happening with these incredibly powerful tools that are being so rapidly developed in our society.
There's this scene in the movie Fiddler on the Roof, where the protagonist, who's a milkman in the town of Anatevka, you know, around the time of the Russian Revolution, just before, actually, a very poor man, goes to the town center. And there's a big argument that's going on there. And someone makes some point, and Reb Tevye, he's the character, says, you're right. And someone makes the opposite point. And he says, you're right, too. And then someone says, Reb Tevye, they can't both be right.
And he says, you're also right. This is how I feel when I listen to debates by experts on AI. I listen to some computer scientists and some tech billionaires who talk about the amazing promise of AI, and how there will be some bumps, but mostly it's going to be this extraordinary future.
And that to oppose it is to be a Luddite. And I think, you're right. And then I turn to listen to other incredibly expert computer scientists who actually say the exact opposite. You know, I think I was at an event with Sam Altman a couple of years ago, actually, and he said that he thought there was like a 2% human extinction risk from AI.
Yeah, I think actually, I think it's higher, and coming from him... I think his estimate was higher, but maybe he's recalibrated it in the interest of public relations. I think he was more like 20% at one point. Yeah, but I mean, that's crazy.
It's just nutty. Yeah. No, 2% is terrifying. But 20% is psychotic.
So you listen to those guys, and you're like, well, they're also right. Well, they can't both be right. And, you know, that's also true. So I have sort of stopped trying to form an opinion of my own, because I'm not so expert in this area. But I am expert in another area, which is related to this, which is this issue of how AI is going to change human behavior.
And here, just to preface one set of ideas, the kind of toy model that I like to throw out there to sort of help people fix ideas is: imagine the manufacturer of an Alexa digital assistant. The manufacturer of a digital assistant is very concerned with the human-machine interaction.
You would never buy an Alexa if every time you had to speak to it, you had to say, excuse me, Alexa, I'm very sorry to interrupt you, if you don't mind, would you please tell me the weather tomorrow? Right? That would be an absurd level of politeness. You'd never buy a machine like that.
You expect to be able to say, Alexa, weather, and it obediently responds. And that's fine until you bring the machine into your home, and your children, in speaking to that machine, learn to be rude. And then they go to the playground and they are rude to other children.
So what we've been studying in my lab is human-human interactions in the presence of machines.
And specifically what we've been focusing on is little perturbations in the AI systems in the machine systems that modify how the humans interact with each other.
In fact, what we're working on is not so much super-smart AI to replace human interaction, but dumb AI to supplement human interaction. And because the humans are smart, you can think of the AI as a kind of catalyst, like platinum in an organic chemistry reaction, that just facilitates the interactions of humans and helps optimize them. And we've done a broad set of experiments that have shown this is possible: that you can improve human collective and individual performance through the thoughtful, you know, injection of AI agents into social systems.
Have you done any research, or is there any research, on the first point you made, though, that kind of, you know, coarse and instrumental use of AI bleeding through into human relations? So kids are actually less socially appropriate if they've been barking orders at their bots all day.
We haven't looked at that specifically; that's just an example. I think that work has been done. And I think that work comports with, sort of, the example I gave.
Well, what would you imagine in the case of humanoid robots? I mean, this is something that honestly I haven't spent that much time visualizing, but whenever I have spoken about it, I think we can stipulate that we will eventually get out of the uncanny valley and have robots that look, you know, if not perfectly human, you know, in some sense better than human, right? The way we perfect human faces, in some sense. You know, when we want our AI shaped like that, we'll make it shaped like that. I spoke to Paul Bloom about this some years ago in response to the series Westworld.
We looked at that and we thought one piece of philosophy that was accomplished by that series is that it revealed that a place like Westworld probably couldn't exist, because you'd really have to be a psychopath to go on vacation and rape, you know, perfect facsimiles of, you know, human women and girls, and then come home and tell your friends what a good time you had, you know, raping and killing robots that were indistinguishable from humans. And so maybe you could set up a theme park that would act like a bug light for psychopaths in that way, but normal people would not want to have a perfectly, seemingly veridical experience of being a moral monster.
And you'd imagine some real contamination, both of how they felt about themselves and how other people saw them, if we did that. So just imagine we get to the place where we're talking to humanoid robots and making demands upon them. I would imagine that our social graces will come creeping back in.
I mean, honestly, even just in typing instructions into an LLM, I find myself being inappropriately polite, right? I'll use the word "please," and I think that probably costs Sam Altman some number of dollars every time I do it. How is that going to change us?
Well, believe it or not, I'm not 100% sure I know the answer, but I can speculate along with you. Believe it or not, this also is an old topic. And it actually came up prior to Westworld, well, certainly prior to the modern instantiation of Westworld, after the old movie. There's a book, I know it's over 20 years old now, called something like Love and Sex with Robots. People were speculating about what it would mean in some futuristic world in which we have the capacity to have intimate relations with machines. And there were two schools of thought on this.
If you'd like to continue listening to this conversation, you'll need to subscribe at SamHarris.org.
You'll get access to all full-length episodes of the Making Sense Podcast. The Making Sense Podcast is ad-free and relies entirely on listener support. And you can subscribe now at SamHarris.org.