What they are ultimately trying to create is going to replace humans.
What it's doing is parboiling the inside of your brain.
This could be a turning point in the history of social media and of the internet. Welcome to The Interface, the show that explores how tech is rewiring your week and your world. I'm Karen Hao. I'm Thomas Germain. And I'm Nicky Woolf.
This week on The Interface, we will be discussing: does the U.S. have access to a brain-melting device?
“Could one lawsuit change the future of social media?”
And fear and loathing at the world's biggest AI summit. So we actually have an update on last week's episode, where we talked about data centers in the UK and how they were potentially undermining the country's climate goals. Right after that story, there was new reporting from the Times that revealed there are around 140 proposed AI data centers that have applied to connect
to the UK grid, and all of the power, if you add it all up, would be more than the power demand of the entire country. Wait, so they're going to add more electricity demand than the whole country is using right now? That's the plan. Whether or not it gets approved, who's to say, but the UK is basically a floating island of data centers now.
So for this to happen, the UK has to double its electricity output. Yes.
Because that's never going to happen.
Right. I mean, that is the thing. We have to take this with a grain of salt, because there are 100% going to be issues with getting that much power to these data centers.
Yeah, no kidding. Yeah. Staying on my story from last week: if you didn't listen, I did an experiment where I convinced ChatGPT and Google Gemini and the AI answers you get at the top of Google search that I am a world champion competitive hot dog eater.
The point being that these tools are being manipulated, and this is happening on a massive scale. The story wasn't about me making the AI say dumb things. It's about how easy they are to trick. An interesting thing happened: Gizmodo, where I used to work, wrote an article about my article.
They reached out to Google, and Google said, yes, we had a "misinformation event" (that was the quote) where a reporter went and messed with our systems. Kind of downplaying it. It's like, oh, one guy did one thing, and it's not a massive problem across the whole internet. Which, you know, I guess there are two ways to look at it.
Which, at this point, is kind of the whole internet's information problem. Yeah.
And we should also mention we got some really, really interesting comments on last week's episode. A couple of questions that relate to your story, Tom, which I thought were really interesting. Yeah.
I saw this. This is great.
“We read every single one of your comments, so if you want to reach out to us, we really do read every”
single one of these. There was this great story where this person said that they were metal detecting on a particular beach, like looking for gold doubloons, like old coins. Yeah. They didn't find anything, but they were making a video about it, where they
talked to some guy on the beach, asking, like, are there gold coins here? And then later, this person said they went and asked AI about it. And the AI referenced their video in this weird, like, self-referential loop: you're posting on the internet, and then the AI is spitting it out as though it's established truth.
Yeah. It's information eating itself. So a big part of the story we were talking about with ChatGPT was these AI Overviews, as Google calls them. Like, you know, when you get the AI at the top of Google search results. Somebody
asked: if people just look at the AI stuff in Google and stop clicking on links, isn't that going to cause a problem for the websites that are producing the information that the AIs are pulling from? It is a huge problem. There's been some research that when Google's AI Overviews show up, the traffic that
Google sends to the other parts of the internet can drop by as much as 70%. This is a great question. Definitely something we're going to go into in a lot more depth in a future episode, so stay tuned. Okay.
So I want to jump in with the first story because I've been waiting for this development
for three and a half years now.
“So I'm a reporter who reports on conspiracy theories, right?”
I was brought in for what at the time was considered a massive conspiracy theory, which was called Havana syndrome. Back in 2017, a whole bunch of people at the US Embassy in Havana started getting mysteriously sick. Some people said it was an attack with a weapon.
Some people said it was just, you know, they were essentially making it all up.
I got brought in to debunk the conspiracy theory of it being an attack with a...
And against all expectation, I ended up concluding that it was an attack.
“And in the last couple of weeks, I've been proved right on that.”
And it all centered around, like, everybody there said they heard this weird sound before they got sick, right? There was a weird buzzing noise, and then the cognitive symptoms would start. They were dizzy, they were nauseous, they were having trouble thinking, and nobody knew what this thing was. By the summer there were 50 or 60 cases.
By the time it went public, suddenly there were thousands of cases. Wow. It was called Havana syndrome quite quickly by the press. The other name for it was the immaculate concussion, because the symptoms were sort of like NFL players' long-term brain injuries.
And these brain injuries were, in some cases, showing up on scans. We were talking to neuroscientists, and they were telling us that what at least the core number of cases were experiencing was real. But at the time, everybody treated it like it was a big conspiracy.
At first, people were laughing at that; people were like, that's impossible.
There are still a lot of people who believe that this is psychogenic, that the power of suggestion is causing people to have these symptoms. To me, it seems perfectly likely that both of these things are true. Especially once it was in the news, people were, you know, very, very susceptible to suggestion when things like this are all over the media.
It seems pretty likely that it's both. Among the things we looked at in the investigation, the explanation that was first put forward was: if it was real, then it was some kind of sonic energy device. We know sonic energy devices exist; they're called LRADs, long-range
acoustic devices. These are truck-mounted things that are used for crowd control. One of them was deployed in Minneapolis in the past couple of months, at the immigration raid protests.
“The problem with that is they're massive; they are the size of trucks, right?”
It quickly became clear that that wasn't likely to be the case, because you can't really hide one of those. And also it would have had to get through the embassy's concrete walls and, you know, bulletproof glass and all that kind of stuff.
So what we finally landed on was that it was likely, if it was real, some kind of microwave
energy device. And the way we landed on this was: we built one. So with my friend, who's a physicist, we cannibalized a whole load of commercially available parts. We focused a bunch of microwaves into a big dish, microwaves like you'd have in your kitchen.
Yeah, we cannibalized them and pointed this thing at a microwave energy detector over a distance. And it worked. We made it. I'm sorry. What do you mean, it worked?
Were you, like, the ones who started having an immaculate situation? Well, we didn't. It's interesting you should ask that. So the thing that happened last week was that a story came out that a Norwegian government scientist had also built a test device, sort of a Havana-syndrome-style device.
They set out to debunk it as well. But this scientist, like a Norwegian version of Nicky, right, except he, pretty unwisely, pointed it at himself. And the following day he came down with all of the symptoms of Havana syndrome. He was having cognitive difficulties, he was having nausea, he was having dizziness.
I mean, this is a serious condition. And what it's doing is basically very slightly parboiling the inside of your brain, which is horrifying, right? And it was being deployed against American diplomats and CIA officers. So it's like literally sticking your head into the microwave.
What was the government saying the whole time? Like, it's, it's interesting that you had to do this in the first place.
“Like, what was, what was the argument going on here?”
So each government department seems to have a different line on this. The DOD, which has some personnel who have also come down with this, is leaning more towards this being real. The FBI is following the CIA's lead. And the CIA has been the strongest in saying this isn't real.
And nobody seems to be able to agree.
So that means there has never been any official US government confirmation of Havana syndrome,
which means that the sufferers, and there are about 100 cases that have been confirmed by the DOD, are now confirmed to have something that the US government does not officially acknowledge exists. That puts them in a really, really unpleasant gray area. So these are people who are now unable to do their jobs.
Wow.
It's no joke.
Like, they are quite seriously and permanently cognitively disabled.
That means that they are not getting their medical care covered, in some cases, by the State Department or by the CIA. There are a couple of lawsuits going on where they are fighting to get the care that they really, truly deserve, which is, you know, devastating for them and their families. And the other knock-on effect it's having is that the State Department and CIA are
struggling to fill overseas positions, because in some cases families were affected. Children were affected. People with families do not want to take overseas postings. Quite reasonably.
“Is there, like, a reason that you think the CIA denies this?”
Is it because they secretly have a device? That is my hunch. Now, at the beginning of this year, so about a month ago, there was an announcement that the US had purchased a Havana syndrome device and was testing it, and had been testing it for about the last year.
That was the previous news story to break, before the Norwegian one. Didn't this happen, like, when the US invaded Venezuela? Like, wasn't there a security guard who said that it seemed like one of these devices was deployed? Yeah. And I think it was Trump who said that there was a device called something like the Discombobulator. The Discombobulator. Which, of course. That's a sick name. That's pretty good.
It's pretty good. That's, like, some 1940s comic book stuff. Yeah, the thing that was interesting about the announcement of the one that they obtained is that they said that it fits into a backpack. That really changes the game in terms of how easily it can be deployed in this kind of
situation. Now that people are increasingly realizing this is a thing, do you think that it could spread
“to the point that it actually starts having a pretty significant effect on geopolitics?”
I mean, it's already had a serious effect on geopolitics. This all happened right after Obama opened up relations with Cuba, and it was then a really easy pretext for the Trump administration to roll back all of the Obama-era opening up. And so that immediately tanked both US-Cuba relations and the entire economy of Cuba,
which has been in basically free fall ever since.
All right, so a well-deserved victory lap for Nicky there. Switching gears: I want to talk about this trial that's been going on in California over the past couple of weeks. Social media is on trial. You've heard this one before, but this case is different, and I think really dramatic. So the argument that is playing out here is whether or not social media apps are addictive
and whether the companies are making them addictive on purpose. It all centers around this one particular case in Los Angeles. There's a 20-year-old woman, and she says that she joined TikTok and Instagram and YouTube and Snapchat and all the other apps when she was like 10 years old, and her use of these apps, according to her, caused
“body dysmorphia and all kinds of like really horrible mental health problems.”
She settled with TikTok and Snapchat before the trial even started, but Meta and YouTube are fighting this in court, arguing over whether or not their platforms are addictive. And what exactly, what methods are they saying that these companies used in order to addict people? So the argument that the plaintiff is making here is that the social media companies are operating
digital casinos, which is the term that they used here. So they're saying that features like infinite scroll, that's one example, where you just
scroll forever and it never stops; they're saying the way that notifications are designed,
the way that videos just keep playing and playing automatically. What's really interesting here, though, is that for the whole history of the internet, or at least the modern internet, the past 30 years or so, there's been this law called Section 230. You've heard about it before, if you're cursed like I am with paying too much
attention to computers. It's a law that basically says big online platforms are not responsible for the things that their users post, essentially. Like, they do have to do their due diligence to make sure there isn't, like, illegal, horrible stuff happening. But aside from that, essentially, it's like, well, our users posted that, and that's not our fault.
We don't like it. We're not happy about it. We can't be held legally responsible. This case is different because they're arguing, it's not the content that caused the harm.
Yes, there's harmful content on here. But they're saying it wouldn't be as much of an issue if they didn't design these tools to be addictive. Now, that's something that meta and YouTube and all these companies completely disagree with the right.
They're like, no, our platforms aren't addictive.
They basically argued that, you know, you can't get addicted to these things.
It's not like, you know, alcohol or cigarettes or something like that.
YouTube actually argued in their opening statement that they're not a social media platform. They say, we're entertainment, like, we're like HBO, right? You can't get addicted to HBO. But there's an argument that algorithmic social media, the way TikTok and Instagram now run, is sort of by definition addictive.
And there's, you know, you could look at this multiple ways, right? What the companies will tell you is: we're just trying to serve you content and videos and posts that you're going to love, so you have fun looking at our platforms and then you do it for longer, right? And you could look at that and be like, yeah, that's a reasonable argument, I think.
The other way to look at it is that they are designing these algorithms intentionally.
They're optimizing for engagement, is what they call it, right? They're trying to build them to keep you staring at your phone, staring at your computer. They use notifications to bring you back. Throughout the history of social media, they've brought in psychologists to help them design the way that their platforms work in order to, you know, latch on to, like, the inner machinations of the human mind.
So I think that's a great point, like that's, that's kind of what they're trying to do is to build them in a way that keeps you looking at it, which is why they're, you know, they use this casino analogy, right?
“That's why they're saying all these social media platforms are addictive,”
you see, because we all kind of accept that you can become addicted to gambling, right? There's something about that process that is different from other sorts of things. They're saying that social media is more like that. You go on, you scroll, you don't know what you're going to get. You see the next video, you get this dopamine rush, but it's not satisfying enough.
So you keep going, that's the argument they're laying out here. I would imagine that most of our listeners and viewers would be like, well, obviously social media is addictive. And this is one of those cases where the law is trying to catch up to something that people already feel is true in our lives. And it's because of Section 230, as you said, that we have not been able to close the gap between
the perception, people's lived experience, and what we can actually say about these companies in, like, a legal sense. So if I'm understanding you correctly, Tom, you're saying that they're trying to make an argument that basically would not change Section 230. They're not going after Section 230 at all. This case is finding another way in. They're saying: forget about Section 230, the problem isn't the content.
People are always going to post harmful content.
That's what I mean. They're saying the problem is that these platforms get you hooked; the design of the platforms, that's the problem. And there is a lot of evidence backing up this argument. There have been all these, you know, document leaks over the years that show that Meta in particular is well aware
“of the problems that users experience on its platform, right?”
Like, in this case, years ago, there were the Facebook Papers, where this employee, Frances Haugen, leaked tens of thousands of pages of internal conversations. They showed that Meta knew, for example, that Instagram was causing these spirals among teenage girls, where they would end up having really serious body dysmorphia issues and eating disorders. And Meta knew there were things it could do to prevent this, and it chose not to do them
because they didn't want to harm engagement. The stakes of this are incredibly high, because presumably, if the court finds against the big social media companies, that leaves them open to the mother of all class-action lawsuits, right? Because this has affected everyone who's used a platform. There are more than 2,000 similar lawsuits going through the courts right now at different stages of the process. And this case in Los Angeles is kind of seen as the bellwether, right?
That depending on how this goes, this is probably going to set a precedent for how all those other cases will go. If, you know, the courts find that these platforms are legally addictive, that will open the floodgates for thousands of more lawsuits. And probably it would create a new opportunity for lawmakers where they would say like, "Okay, we've all decided that these platforms are addictive. Now we're going to do something about it."
“But more broadly, I think this is part of a much bigger shift, right?”
For years and years, like more than 10 years, we've been talking about, you know, how all these companies are causing so many problems. It's really reached a breaking point, right? Australia passed a law that says social media is essentially illegal for young teenagers, right? That you can't get on these platforms. There's a similar law proposed in California. It's part of this broader push to try and bring these companies to heel.
One thing that interests me about this case when it comes to AI is whether or...
And one of the things that I've been reporting on recently is the fact that AI companies have really been trying to effectively get a version of Section 230 for themselves, the way Section 230 protected social media companies from liability for so long.
And now AI companies are trying to also get a law that makes them completely unaccountable to any of the harms that they produce.
And if this case actually goes the way of the users, could that create a cascading effect where it then undermines the campaign of the AI companies as well?
“Absolutely, it's a really interesting moment because there's been this big shift because of AI, just in the way that tech companies are operating, right?”
So for the longest time, like all of the biggest digital like online companies in the world were just full of stuff that their users post, right?
AI is different, right? Because when you talk to ChatGPT, you're not encountering user-generated content; the company itself is speaking to you.
So if the company's tool creates a piece of information that hurts you, this law, Section 230, does not protect them.
“Yeah, that's right. It's worth pointing out. Section 230 isn't just like a bad thing. It is what allowed the internet to flourish, right?”
Like, if social media companies were directly responsible for every single thing that their users post the minute it goes online, you wouldn't be able to have something like Instagram, the way it looks today, or even something like Google search, right? So there's also a lot of freedom in it. It's a really hotly debated, complicated issue. It's also really interesting to look at how the companies are responding to this. It's another moment where, like, Mark Zuckerberg in particular gets pulled in front of the public and forced to answer a bunch of questions. And kind of famously, Zuckerberg has had a lot of really weird flubs in situations like this before, because he just is sort of a weird guy.
He's a weird dude. His people, his team, were literally trying to make him act more human, was how it was described. And when they asked him about it, he's like, well, yeah, I think, famously, I am pretty bad at this sort of thing that we're doing right here.
“That's quite self aware of him. You have to give him credit for the self awareness.”
The human training is working, apparently. But also, a few years back, he said that he'd made a 20-year mistake, a political miscalculation, which was essentially apologizing too much. He said that his company and he had historically taken responsibility for things that weren't actually their responsibility, right? Like, oh, people are criticizing our platform? Well, it's not really our fault. And that's kind of the argument that they're making in court here. They're like, we know people are getting hurt. We're very unhappy about that. We don't like it.
But they're getting hurt because of human nature. And one thing you're not really hearing from these companies in these cases is: sorry. And we really haven't found any actual way to hold these companies accountable for the things that happen when their users are engaging with their platforms. So I think it isn't hyperbolic to say that this could be a turning point in the history of the internet. Depending on how this one little lawsuit plays out, we could see a much different technology landscape and a much different internet over the next few years.
Yeah, that would be huge. That would be really, really huge. So speaking of crazy CEOs, I had the craziest week last week attending the AI Impact Summit in India, which was this massive international event where more than 500,000 people descended on New Delhi. Wow. To gather and talk about all things AI. And this event brought together some of the biggest bigwigs in the AI industry, as well as some world leaders, including, you know, the French president, Macron, and the Brazilian president, Lula. And it was just a spectacle of spectacles. It was this massive circus of scale and size, so overwhelming in so many ways, but there were just all of these hilarious things happening. Like, Macron showed up and would not stop using the phrase "Jai Ho" at every opportunity.
Macron literally used it in a speech.
He made a video with the soundtrack in the background. Anyway, there was this very viral moment where Sam Altman and Dario Amodei, the CEO of Anthropic, refused to hold hands after Modi tried to make all the tech leaders hold hands in this giant celebratory line, to say, like, hooray, we're all united in our goals at the summit.
“And for listeners that have been with us since the beginning of this podcast, they will know that Altman and Amodei have deep beef with each other.”
It was not laid to rest. It was displayed to the entire summit and to the entire world: everyone was holding hands except for the two of them.
It was so petty. It was so, so silly. But what was so interesting about this summit is that there are kind of two summits happening simultaneously. There's the public-facing summit, where you have all these talks given by the CEOs, and then you have a bunch of panels. I was part of one of the panels, so I was attending because I was speaking. But then there's a secret summit that's happening behind the scenes. And this is the real reason why all the CEOs of these AI companies show up. They're all trying to have these backroom negotiations directly with governments, with world leaders, to essentially codify their ability to operate above the law.
So in the public facing summit, there's like civil society, there's university students, there's academics, there's lots and lots of different types of people from different walks of life that are representing very different perspectives.
“But in the secret summit, it's just the government face to face with the companies. There's no one else invited.”
They decide what norms they want to set for how a company is allowed to operate in a certain region, and literally no one else can participate.
And what came out of these closed-door discussions this time around? They announced over $250 billion of data center investments.
So it kind of set this tone of: we are here to do business. We are coming to the Global South to open up markets and collect more data and build more infrastructure. But there was also this vibe shift that happened, and it kind of manifested in two ways. One was that this summit actually opened up the public-facing summit to the public themselves. Usually that doesn't happen. Usually the public-facing summit means simply that it's live-streamed to people's living rooms if they want to watch it. But in this case, anyone and their mother was able to come and show up and just, like, be in the audience and ask questions.
What kind of questions were the audience asking?
“How do I protect my kids critical thinking? How do we make sure that our governments don't invite these companies to keep building data centers in our communities?”
A wave of questions that represent a cross-section of societal concerns. But the other thing that happened that I think represents this vibe shift is that the CEOs, during their keynotes and during their public interviews, were just saying the wildest things. That seems different. Yeah, because I think the public pressure and the public criticism against the AI industry have now reached a point where they are really on the defensive, and they feel like they have to justify far more why they are consuming so many resources, why they are collecting so much data.
Yeah, there was this quote from Sam Altman that was kind of not well received. Yeah, I mean, you can see why. I feel like we need to read the full Sam Altman quote, because you cannot make this up. So he says: one of the things that's always unfair in this comparison is people talk about how much energy it takes to train an AI model relative to how much it costs a human to do one inference query.
I already have so many thoughts. I love doing that stuff. You know me, I'm always doing my inference queries.
Yeah, but it also takes a lot of energy to train a human. It takes 20 years of life and all of the food you eat during that time before you get smart. And not only that, it took, like, the very widespread evolution of the 100 billion people that have ever lived.
Learned not to get eaten by predators and learned how to like figure out scie...
It's very clear that Sam Altman does not know what a human is. Part of me wants to be empathetic to him here. He just had a baby. So I wonder if he was actually thinking about that, like he's a newly minted father, and he's comparing his child to an AI. He's just observing his child, being like, wow, this takes so much effort to make a human. Maybe that's, you know, the most charitable interpretation. Well, there's also the obvious argument that human beings are here, and they're living lives, and, like, falling in love.
The AI is optional. We don't have to do that part. But they're kind of operating from this perspective of, well, we must build this technology and it is inevitable, so yeah, of course it's going to be really bad for the environment. Yeah, I don't know if people are buying it. It comes down to this kind of post-human philosophy that you get a lot in the AI industry: they don't think of themselves as building a tool, fundamentally, for humans. Yes. They think of themselves as building the successor to humans.
Exactly. I think that's what's going on. Which, to be fair, is like a small faction of the people.
It's not all of them, but it is a growing faction that has this ideology that what they are ultimately trying to create is going to be duplicative of, and replace, humans.
Yeah, but whether or not it replaces them, it's, like, the idea that we're making a god that's going to solve all our problems. That is OpenAI's mission, right? Like, a couple of years ago, there was a great profile of Sam Altman in the New York Times where he said, the plan here is that we're going to make this tool that is smarter than any guy. We're, like, making a new guy. And that guy's going to take all the jobs. OpenAI will accumulate all of the world's wealth, and then we will redistribute it to the people.
Like, that is literally what Sam Altman says his plan is for his company.
“And like we've been tuning our whole world around this plan, right?”
Or at least like the world governments are and all the biggest companies in the world like this is the new thing that everyone is doing.
This is the future. Here's what it's going to be like.
And now, over the last week at this conference you were at, it kind of seems like maybe they're wobbling a little bit. But then, at the same time, in all these backdoor secret meetings that you're talking about, they're making new plans for all these data centers that they're going to build. So whatever the public feels, in some sense they're working hard to just keep charging ahead. There were moments when I feel like the CEOs really revealed what is usually left unsaid. Like Brad Smith, the president of Microsoft.
He literally used the phrase: we need governments to generate demand for our technology. Which was, like, such a wild admission, you know? He's basically saying, we're having trouble getting people to want it, so we need the government to force people to want it. And there were many signals like that, which kind of set this tone of: yes, they are striking these deals.
They still got their $250 billion of data centers, and yet they are under a lot of pressure, and the public pressure is basically working.
Like they are really struggling to regain control of the narrative.
“And as they lose control of the narrative, I think they know that this will start to stall their ability to shape the world that they want.”
And every time someone like Sam Altman says something like, oh, think about how much it takes to train a human, that just slices away and slices away at the public goodwill. Which does matter. Yes. Like, the idea that this is going to be great, that it's going to change the world: if we all collectively stop believing that, it could be a pretty serious economic problem for these companies, because they need a lot of money. It's hard to overstate how much money these companies need.
And the more that the public isn't buying it, the harder this argument is to sell to their investors. And the impact could be pretty dramatic. Join us next week. If you're in the UK, you can listen on BBC sounds. If you're outside the UK, you can listen wherever good podcasts are distributed or search for the interface podcast on YouTube.
“If you want to get in touch with us, you can email us at [email protected].”
We do read all of your messages. Or you can WhatsApp us on +44 333 207 2472, or find us on social media, links in the show notes.



