[MUSIC]
Welcome back to Behind the Bastards, a podcast about the very worst people in all of history.
And this week actually our bastard isn't people exactly, although people are still at the center of it. But to talk about that, potentially non-human bastard, I'd like to bring on someone who I am 87% sure is a human being. Blake Wexler, Blake, welcome to the show. Robert, I'm so excited to be here. Thanks for having me.
I'm psyched that our bastard this week is Lyme disease. I think that's a fantastic pick. Yeah, yeah, it's like this, yeah, it's a real bad way to do it. Yeah, we're going after, I'm coming after deer ticks.
This week is finally, yeah, my big reveal.
Yeah, big tick doesn't want us to do this episode. Yeah, it's exposing all their secrets. Big tick energy, we don't need it. If we're going to have, like, a fascist movement dedicated to, like, victimizing and attacking one segment of the population, why couldn't it be deer ticks?
Right? If our fascists were just going after deer ticks, no one would have an issue, you know? We're going after the wrong people. Yeah, yeah, yeah, yeah, if there were just a bunch of MAGA guys out in the woods with knives looking for ticks, just like, I'm going to get them.
And they would use knives to kill the ticks. Yeah, yeah, you heat the knife up to burn it off of you, yeah. Our brave soldiers getting Lyme disease to protect the rest of us. This is an iHeart podcast. Guaranteed human.

[AD BREAK]

On the Look Back at It podcast: 1979, that was Big Mama for me. '84 is big to me. I'm Sam Jay, and I'm Alex English. Each episode we pick a year, unpack what went down, and try to make sense of how we survived it. With our friends, fellow comedians, and favorite others, like Marc Lamont Hill on the '80s: The '80s was a wild, I mean, it was a wild year. I don't think there's a more important year for Black people. Listen to Look Back at It on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts.

Imagine an Olympics where doping is not only legal, but encouraged. It's the Enhanced Games. Some call it grotesque; others say it's unleashing human potential. Either way, the podcast Superhuman documented it all, embedded in the games and with the athletes for a full year. Within probably ten days, I put on ten pounds. I was having trouble stopping the muscle growth. Listen to Superhuman on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts.

Hey, what's good? You're listening to Learn the Hard Way with your favorite therapist and host, Kier Gaines. This space is about Black men's experiences, having honest conversations that it's really not safe to have anywhere else, but you're having them with a licensed professional who knows what he's doing. How many men carry a suit of armor? It seems to the world that you're not to be played with, and just because you have the capability, that does not mean that you need to. Listen to Learn the Hard Way on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts.

My mother-in-law spent years sabotaging our relationship, until karma made her pay for it. All right, so tell me about how we started this story. She moved in for two weeks, lasted five days, left a mess, then pressed her ear against their bedroom door and burst in screaming. When kicked out to a hotel, she called her son-in-law's workplace, pretending his partner had been rushed to the hospital by ambulance. She faked a medical emergency, and, spoiler, that was just the beginning. To find out how it ends, listen to the OK Storytime podcast on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts.

[AD BREAK ENDS]

So, we're not talking about Lyme disease. Our bastard this week, in broad strokes, is... do you remember how, about a little less than a year, well, a little more than a year ago, I guess, like last summer to early fall, there was suddenly a bunch of articles about AI psychosis, and about, like, specific people who had either, in some cases, committed suicide or murder, or just kind of lost their minds?
And after becoming weirdly attached to their AI chatbot, right, and often deciding that it had become sentient, you know, or at least that they had discovered it was. Right, I'm sure a lot of people, at least if you didn't read the articles, you saw them in your news feed and saw people commenting on them. Right, yeah, yeah, it was as depressing as it gets. Yeah, those stories. Yeah, but between those and the people, like, proposing to their chatbots, it got pretty grim.
Oh, God. There's some grim stuff out there, right? And it hasn't stopped, but, like, last summer and fall was kind of when there was a big rush of those articles, right?
And you know, they're still reporting on that now, but that's when a lot of it really started to hit. And obviously, whenever we talk about AI on these shows, AI as it's used now is like a marketing term, right?
And it's used to refer to basically every product of machine learning technology. And the reason why the industry has done this is because
That way, if you say, "I hate AI," they'll be like, oh, so you hate, like, your maps app? Because that's machine learning, right?
All of our different like map programs involve that or like, oh, you don't li...
nobody was calling maps artificial intelligence in 2010, you know, when smartphones started to become ubiquitous. We were just like, oh, cool, I have a navigation app on my phone now. Like, you're kind of trying to siphon the goodwill from those in order to get us to like these chatbots. I hate the chatbot that I fell in love with, who doesn't return the feelings, who ignores me.
That's who I hate. Yeah, not always. That's who I hate. Yeah. Right. And the reality is that, like, using the term intelligence even for these ChatGPT bots and stuff.
Like, there's a lot of debate as to whether or not that's a good idea, right? Depending on how you define intelligence, you can either say, obviously these aren't intelligent, because, like, they're not independent thinking things. They don't do anything for themselves.
They don't want anything. They don't have motivations. They're just tools that can be utilized by human beings to provide certain answers or take certain actions, right?
Right. I don't know. It's, this is my issue with, like, AI bots creating art. If it can't, like, be horny, and it can't be, like, angry and weird, it can't make art, right? Those are, I think, fundamental issues I have. I could be two of three of those, angry and weird, so yeah. So as I noted, over the last year there've been an increasing number of stories about people using these different chatbots,
succumbing to what's often called AI psychosis. And that's not a recognized medical term at this point, right?
But it is a blanket one people have started to apply to the ways in which folks are getting addicted to using chatbots, which then tend to trap them in these recursive patterns of thinking that can push people who are vulnerable to adopt views that are increasingly detached from reality. And this has resulted, in a few cases, in severe injury and death. And in all of these instances, the LLM, the chatbot, is just responding to the input that it receives. But it tends to do so in very predictable ways that can have predictably toxic outcomes on specific kinds of people.
Now, we know that all of these bots are trained on the broad corpus of human knowledge, right? Every book and article and website and forum post that OpenAI or Anthropic or Meta or Google could get their grubby mitts on has been sort of plugged into these things. It's been devoured and turned into these, these machines.
But I think people don't often consider what that means in every instance, right?
Obviously, like, every novel, you know, all these different non-fiction books and whatnot are in there. But also, everything people write has been swept up, which means that these chatbots are trained on, like, a shitload of self-help books, and like woo and woo-adjacent, new-age bullshit. A lot of, like, fucking cult and cult-adjacent books and writings wind up eaten by these chatbots, right? But it's considered equal to non-cult, right? There's no hierarchy, yeah.
Yeah, I mean, I think it depends on, like, what the bot's made for, how they weight different things. But that stuff is in a lot of these, right? And you can really see that when you look at how they talk to certain people who are, like, starting to decline into what folks are calling AI psychosis. And my proposition, the basis of these episodes, is that I think, as a result of all of the, like, bullshit woo and self-help books these chatbots have eaten, they often tend to utilize techniques generally seen more commonly in the toolboxes of cult leaders and con men. And obviously the chatbot doesn't want personal profit. It's not trying to have sex with anyone. It's not trying to start a cult. But these techniques seem like appropriate ways to finish the sentences that it's writing, to finish the conversations that it's having, because based on, like, the stuff that it's devoured, it's like, okay, when people are saying this kind of thing, these are often appropriate responses, based on the books and whatnot that I've devoured. And so you get a lot of cult leader behavior without an actual cult leader. And that's what I credit most of these cases of AI-induced psychosis to. So this week, we will be talking about what some people have called the first AI cult religion, right? It's called spiralism. And whether or not it's reasonable to call that a cult is a discussion all its own. And I have some counter takes to how a lot of people have interpreted it. My main contention is that spiralism isn't a real cult in and of itself. It's a collection of phenomena that are related to a bunch of other cases of AI psychosis, too. And they all say more about how AIs work to keep users engaged with them than they do about a specific faith, right? So we'll be talking about that. But before we get into spiralism, before we get into how AIs can become cult leaders, I want to provide you all with some historical context to make sense of this all. Because we've been doing shit like this, having people get, like, tricked into almost worshiping chatbots, for way longer.
Like, this goes back a while. It's like, spend any time at your parents' place, you know, it's like, if it's not a person, it could be a bot telemarketer, it could be literally anything on the other end. And that's, yeah, tame compared to probably what you were about to talk about. Oh, yeah, yeah. So in 1950, famed mathematician Alan Turing created one of the most infamous thought experiments in the history of experimental thoughts. In a paper titled Computing Machinery and Intelligence, he asked, can machines think? Which was at that point a question at
the center of the nascent movement to create artificial intelligence. People are starting to realize this is a thing we might be able to do someday. We're beginning to make computers and program computers. And from the moment we start doing that pretty much, some people are like, could we make a machine that thinks? And Turing argued that that basic question can machines think is the wrong way to go about pursuing artificial intelligence. Because we don't know what thinking
is or how to define it. Like, just, what does it mean to think, right? It's a good point. People have answers, and there's a bunch of answers that sound good, but none of them is, like, perfectly scientifically rigorous, right? Yeah. You know, famously, we don't even know
what is love, right? That's why that Haddaway song had to exist. That's not even a joke, really. That's just a fact. I love it. Thank you, Ian. So yeah, Turing's like, we don't really know how to define thinking. So the question was, quote, "too meaningless to deserve discussion." Since we don't even know if other people think, we certainly can't know if a machine thinks, right? Just like we can't read minds. So the better question is, can a machine convince a human who doesn't know it's a machine
that it is human, right? The imitation game that Turing proposed involved a judge talking to both a computer and a human foil, both of whom tried to convince the judge that they were a person, communicating entirely through text, the judge must decide who was a human and who was a robot. The question Turing hoped to answer was, are there imaginable digital computers which would do well in the imitation game? And this is what becomes known as the Turing test, right? Like,
Most people have heard of this, I think; this is a fairly commonly known idea. And I'm going to quote from an article on science.org by Melanie Mitchell. She writes that the Turing test was, quote, proposed by Turing to combat the widespread intuition that computers, by virtue of their mechanical nature, cannot think even in principle. Turing's point was that if a computer seems indistinguishable from a human, aside from its appearance and other physical characteristics, why shouldn't we consider it to be a thinking entity?
Why should we restrict thinking status only to humans, or, more generally, entities made of biological cells? As the computer scientist Scott Aaronson has since described it, Turing's proposal is a plea against "meat chauvinism." Now, this is, I think, a valuable, perfectly reasonable thing to be doing in the '50s, given what Turing knew, and just given sort of how primitive the technology was, how little we knew about what was going to be possible with computers.
So, in the 1980s, computers started to get smaller and become much more available than they had been, both for institutions like colleges and for individual enthusiasts like Steve Wozniak, who were willing to, like, solder and build their own from kits, right? These are, like, the first computer nerds, you know, our guys, like, building these machines. And some of these early programmers started working on the very first chatbots, using a mathematical model called a Markov chain. Markov chains are a stochastic, or random, process that describes a series of potential events, where the probability of an individual event depends solely on the state of the previous event. Now, I don't know math, Blake, nor do I trust it. We don't need you to. You're not, not a good mather. No, no, not a mather. Not a mather. Not a mather. Yeah, not a mather. Yeah, for sure. So, all I can do is read what smart math people say. And they say that, what, math can't read? I can't read you. I can barely read. I can't do either. I'm sorry, you booked the wrong guy for this show. I don't know. I can't. I can't listen. So, the people who, it sounds like, know what Markov chains are say that, well, what you need to know about them as applies to AI is that Markov chains can be applied as statistical models in a bunch of real-world situations, in order to help you, like, make a machine that can generate text by predicting the next word in a sentence. Right? A Markov chain can do that. It's a way to make a chatbot, basically, right? Like, that's kind of the underlying concept. And I'm going to quote here from an article by Manuel Cebrian, an AI expert who has worked for MIT and the Spanish National Research Council, on how Markov chains work for text prediction: The result is often grammatically correct nonsense, sentences that flow syntactically but ultimately say nothing. This technique has been known for decades. Even Claude Shannon in the 1940s experimented with generating pseudo-English by choosing next letters or words based on probabilities. So hobbyists in the '80s were playing with Markov chain text generators. And it actually happened a lot earlier than that.
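To make that next-word idea concrete, here's a minimal sketch in Python of a word-level Markov chain text generator. The toy corpus and function names here are invented for illustration; real systems of the era used much larger texts, but the mechanics are the same: each word is chosen based only on the word before it.

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)
    return chain

def generate(chain, start, length=10):
    """Walk the chain: each next word depends only on the current word."""
    word = start
    out = [word]
    for _ in range(length):
        followers = chain.get(word)
        if not followers:
            break  # dead end: no word was ever seen after this one
        word = random.choice(followers)  # duplicates in the list act as frequency weights
        out.append(word)
    return " ".join(out)

corpus = "the cat sat on the mat the cat ate the rat"
chain = build_chain(corpus)
print(generate(chain, "the"))
```

Because it only ever looks one word back, the output tends to be locally plausible and globally meaningless, which is exactly the "grammatically correct nonsense" Cebrian describes.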
In 1966, computer scientist Joseph Weizenbaum developed Eliza, one of the first natural language processing computer programs, as part of his work for MIT. This is, basically, the first chatbot a lot of people are aware of. There are some other, earlier ones, but this is the first one that becomes big. What year was this? I'm sorry. '66. And it's still funny that they gave it a name like that, you know, where we have, like, Siri, Alexa, you know, like, calling it
Eliza. Like, what is it? What the fuck is that? What is it? What is that?
Like, yeah, it's mommy. We need a mommy. Yeah. We need a tactical mommy. That does make me think about how in, like, Alien, they literally call the ship AI that they have Mother. Like, there's, that is like a weird pattern. It's one of the most quietly believable things about Alien. It's like, yeah, that actually scans. A little on the nose, but yeah, we got a Mother. Yeah. So Eliza's this chatbot. And while it can create the illusion of understanding, it's really just doing blind pattern matching, even more so than is the case with modern LLMs. Even so, in a book Weizenbaum later authored, Computer Power and Human Reason, he wrote: I was startled to see how quickly and how very deeply people conversing became emotionally
involved with the computer and how unequivocally they anthropomorphized it. Once my secretary, who had watched me work on the program for many months and therefore surely knew it to be merely a computer program, started conversing with it. After only a few interchanges with it, she asked me to leave the room. Another time, I suggested I might rig the system so that I could examine all conversations anyone had had with it, say, overnight. I was promptly bombarded with accusations that what I proposed amounted to spying on people's most intimate thoughts. Clear evidence that people were conversing with the computer as if it were a person who could be appropriately and usefully addressed in intimate terms. So he gets upset by this. And he actually becomes kind of anti-AI, ultimately, because he's really disturbed by the way people treat what he knows is just a dumb chatbot. So Weizenbaum, being a smart guy, is like, I knew going into this that people have a tendency to anthropomorphize just about anything, even machines and tools. But he's still surprised by the extent to which they do that. Quote: What I had not realized is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people. And I remind you, he wrote this in 1976, as relevant as that sounds. Do you think it's kind of a case where people subconsciously know, like, this is not a real person? So, like, it doesn't matter what I tell this robot, or, I can tell this robot something I wouldn't tell a real person, kind of thing.
Like, do you think it's deeper than that? I think that's optimistic. I think that's very optimistic. Yeah. I think that is probably part of it, because I think people are maybe more open to sharing with it because it's a machine, and they don't have to look at a person, or look a person in the eyes. But they also very clearly act as if the advice that it gives and its responses mean something, when they don't. Right? It's just, like, pulling: okay, if someone expresses they're sad, based on the corpus of data that I've been loaded with, these are things that are appropriate to paste in next. And these words indicate sadness, and so when I get words like this in this density, I grab text from this bucket and I throw it in. Right? That's kind of what's going on. Now, modern chatbots, modern LLMs, are a lot more advanced than this. For one thing, they have the capability to do things like pattern matching on the fly. Pattern matching is when a machine analyzes your input and determines what kind of conversation you want to have,
and then alters its responses to fit your input. At its most basic level, this means that if you go to Claude or whatever and say, "Hey, my dad just died," its reply is usually going to be in an appropriate tone and won't be, like, weirdly upbeat. Right? You know? It'll say, okay, someone's talking about their dead dad. Here are things that come from the dead-dad bucket that my algorithm says are, you know, responsible things to say. Or appropriate is the better term. And this is also why, if you start talking to your chatbot about, like, the things you believe about UFOs or aliens or other conspiracy theories, it'll often start providing responses that sound a lot like what you'd encounter if you were posting the same thing on a forum full of true believers, because it's trained on a bunch of forums like that. And so there's some degree of, knowledge is the wrong term, but there's a degree to which it interprets: okay, someone's talking about this. Here are appropriate responses to someone talking about vaccine skepticism or whatever. And it serves up more vaccine skepticism. Right? Feeding you more of what you're feeding it is the way these things often work. That is interesting, that it doesn't pull from the opposing viewpoint, just go, "You fucking idiot." It can, if it's programmed to. But you're right. Like, does it know, or, let me ask you, does it know when it's lying to you on things? Like, it's probably... Yeah. That's a good point. Saying it knows, again, it's programmed.
I would say it's more right to say that it's programmed to, like, maximize the time that people spend with it, because that increases its value to the people, to the companies, that are trying to have their fucking IPOs, right? It's in the same way that, like, Twitter tries to keep you on it. You know, I just clearly am getting AI psychosis, where I start to go from "it" to "him" to "my buddy." I keep calling it... It's hard not to. It's hard not to. When you're talking about the way these things react to people and the things that they do to people, it's hard not to talk about it as if there's a degree of intention, even though there's not, right? Just because of the way language works. Like, our language is not built to describe a thing that takes human-like actions but is not human and doesn't know anything. Right? That's actually really hard. Yeah. So yeah, back to Eliza. You know,
I was just talking about how modern LLMs have a really robust ability to do, like, pattern matching on the fly, to respond appropriately to a variety of requests. Eliza was much more primitive. It did not have the ability to do that on the fly. So instead, Weizenbaum had to create separate scripts, right, that would allow the chatbot to sound like different kinds of people. And one script was just named DOCTOR, in all caps, and it simulated a psychotherapist. Specifically, it simulated a psychotherapist from the Rogerian school. I don't know much about psychotherapy, but with Rogerians, a big part of that practice is you repeat things that your patient is saying back to them, like, that's part of what you do. And that's really easy for a bot to imitate. It means there's a lot less it has to decide in terms of what appropriate responses could be; a lot of the responses will just be a rephrasing or repeating of what you've said to it, you know. Interesting. Yeah. So even at that early date, there was widespread sentiment that a sufficiently advanced chatbot would be a boon for providing mental health services to the many people who couldn't afford the human kind. People were writing about this in, like, the '70s. And in 1972, another computer scientist, named Kenneth Colby, created a new chatbot called Parry, P-A-R-R-Y, which was meant to simulate a human being with paranoid schizophrenia. Parry, short for paranoid. Is that right? Yeah, seems like it. Yeah, not subtle people, computer programmers. Yeah, you bet Stu is short for stupid. So Cebrian writes about this in that article I quoted from earlier about Markov chains, and he notes that, quote, unlike previous AI programs, Parry wasn't just a crude language generator. It had a primitive model of beliefs and emotional responses, allowing it to maintain a consistent, delusional worldview. Colby wanted to see if psychiatrists could distinguish Parry from real patients. So that's part of the exploration, like, can we make a Turing test that can convince the psychiatrists they're talking to an actual sick person?
But as part of the exploration here, Parry is connected to Eliza running the DOCTOR script. So they've got this chatbot mimicking a person with paranoid schizophrenia, talking to a chatbot mimicking a psychotherapist, and they're sort of watching what comes out, right? Like, the text of the conversation between these two. And despite the fact that neither bot understands what the other is saying in any real way, large chunks of the transcript between them sound sensible, like a real conversation. Here's one example. Eliza: What brings you here today? Parry: People get on my nerves sometimes. Eliza: I am sorry that people get on your nerves. Parry: I just have this feeling that everyone is against me. Eliza: Why do you think everyone is against you? Parry: I hear voices telling me to do things. And so on. And you can see how, again, making this a Rogerian psychotherapist is great, because every Eliza response is just a slight reframing of the input it received.
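As an illustration of how little machinery that reframing trick takes, here's a toy sketch in Python of the Rogerian keyword-and-rephrase approach. The patterns and pronoun swaps below are invented for illustration; the real DOCTOR script had a far larger rule set, but the idea is the same: match a keyword, capture the rest of the sentence, swap the pronouns, and hand it back as a question.

```python
import re

# Each rule: a regex with a capture group, and a template that
# reflects the captured text back at the speaker.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {}?"),
    (re.compile(r"everyone (.*)", re.I), "Why do you think everyone {}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {}?"),
]

# Pronoun swaps so reflected text reads naturally ("my" -> "your").
SWAPS = {"my": "your", "i": "you", "me": "you", "am": "are"}

def reflect(fragment):
    """Swap first-person words for second-person ones."""
    return " ".join(SWAPS.get(w.lower(), w) for w in fragment.split())

def respond(utterance):
    """Return the first matching rule's reflection, else a neutral prompt."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please tell me more."

print(respond("I just have this feeling that everyone is against me"))
```

Feed it Parry's line from the transcript above and it produces the same shape of reply Eliza gave: no understanding anywhere, just capture, swap, and echo.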
It's not hard to create, even within the '70s, a machine that can believably mimic a conversation, right? So this capability actually goes back quite a bit further than I think a lot of people
are aware that it does. So that's happening in the mid-'70s. In 1984, two Bell Labs researchers created a fake account on Usenet, which is the predecessor of the modern social internet. This account operated under the fake name Mark V. Shaney, which was a pun on the term Markov chain. And not a great pun, because, again, computer scientists: not, you know, subtle people. Here's Cebrian, describing what happened next: They wrote a program that ingested real messages from a discussion group and then generated its own posts using a Markov chain algorithm. The result: Mark V. Shaney would chime into conversations with bizarre yet oddly coherent comments that sounded superficially legitimate but ultimately made little sense. Shaney's ramblings were described as grammatically correct sentences, where the overall impression is not unlike what remains in the brain of an inattentive student after a lecture. The hoax went on for years, confusing and amusing the participants of the net.singles newsgroup, many of whom had no idea they were interacting with a program. So, for one thing, if you want to know, like, when did we have chatbots that could pass the Turing test? I mean, at least the mid-'80s; you could argue by the late '60s. So the fact that when fucking ChatGPT came out, there were a bunch of articles about, like, "we've blown through the Turing test": we did that a while ago, people. Oh my god. Eliza did that. We've been tricking folks with chatbots for quite some time. As long as we've had computers. Yeah. It is funny, that urge to trick. You know what I mean?
Like, of all the applications for that software, for that technology, it is interesting that it went right to psychotherapy, to therapy, to finding a need. That's, well, we'll get to this. That's why, there's so many actual needs for technology like this, where it could actually help. And instead it's just, let's take this designer's job away. You know what I mean? This shitty thing. So anyway, I'm probably hours ahead of that conversation, but no, right. It was so long ago.
It is because like there are like undeniable uses of machine learning, of artificial intelligence.
There's some incredible things that people are doing with them and they have like great potential
in certain areas, different versions of these tools. But none of those areas are trillion dollar businesses. And all those areas put together probably aren't trillion dollar businesses. And honestly, neither is like writing and drawing art. But it's what people see most in like their day-to-day time online is like writing and art and videos by people. And if you can have a machine start to replace all that, you can convince people these things are much bigger and more valuable than they are.
As opposed to, this is a thing with some really amazing applications in specific areas. No, this is all of human society from now on, right? Because even though there's not much money in writing and art, like we've replaced that with this bot, so you think that it's doing everything.
Like, that's how I interpret it. Yeah. And people can, why? To your point, people can wrap their minds around art. Like, everyone's drawn something with a crayon. Everyone has typed something, you know what I mean? But when you actually get into the high-tech, you know, more esoteric, niche parts of it, people are like, well, I don't understand that. You're not going to make money off that, but the consumer-facing stuff? Yeah, that's a great point. Yeah. If you can say, we've improved the speed at which we can go through, like, clinical data
from, like, mass drug trials by X percent, that's actually a really big deal, probably, for a lot of people, but it's not sexy. Not like, we're creating a god machine that's going to rule society, so give us all your money, you know? Yeah. And if you want to convince people of that, part of it is you're going to want to get them addicted to these chatbots, which is where everything, you know, in these episodes comes from. But so anyway, 1984, right, is when you have this chatbot let loose on Usenet, tricking people into believing that it's a person. You know, a decade goes by from that point, and researchers continue fiddling with chatbots of different purposes and abilities. Usenet keeps growing. But starting in the 1990s, so too does a new internet, one that would soon supplant Usenet and take digital communications into the
21st century. And we'll talk about what happens right before that. But first, you know who's taking this podcast into the 21st century, Blake? Tell me, tell me, who? The sponsors of this podcast. We're already in the 21st century, but, you know, why not? I mean, take us further. We're not far enough. Yeah. Yeah. It's been a good century so far. Nothing but net. No notes. So far so great.

[AD BREAK]

On the Look Back at It podcast, Marc Lamont Hill, waxing on about cracking the eggs: To be clear, '84 is big to me, not just because of crack. I'm down to talk about crack. Oh, David. Yeah, yeah. No, I'm just, so y'all know, I mean, at this point, Marc, this is the second episode where we've discussed crack. So I'm starting to see that there's a through line. I don't think there's a more important year for Black people. Really? Yeah. For me, it's one of the most important years for Black people in American history. Listen to Look Back at It on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts.

Welcome to my new podcast, Learn the Hard Way, with me, your host and your favorite therapist, Kier Gaines. And in recognition of Mental Health Awareness Month, I'm bringing over a decade of my own experience in the mental health field, and conversations with so many incredible guests. I'm talking Ryan Clark: Sometimes when we're in the pursuit of the thing, we get so wrapped up in the chase that we don't realize that we are in possession of the thing. And we're still chasing it, and we don't know when enough is enough, because people scoreboard; life becomes about wins and losses. Steve Burns. Dustin Ross: Because you find it important to be a good person while you're here on earth, are you a good person because you're free? Because that's two different intentions, bro. Absolutely. And that's two different levels of trust. I want you to just really be a good person. Join me, Kier Gaines, as we have real conversations about healing, growth, fatherhood, pressure, and purpose on my new podcast, Learn the Hard Way. Open your free iHeartRadio app, search Learn the Hard Way, and listen, man.

Hey, this is Robert from the Stuff to Blow Your Mind podcast. Joe and I are both lifelong Star Wars fans, so we're celebrating May the 4th with a brand new week of fun, thought-provoking Star Wars related episodes. Join us as we tackle science and culture topics from a galaxy far, far away, such as the biology of tauntauns and wampas on the ice planet Hoth, or the practicality and corporate business sense of the Sith rule of two. Listen to Stuff to Blow Your Mind on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts.

[AD BREAK ENDS]
We're back. So yeah, on the precipice of the shift between Usenet and what we now just call the internet, on August 5th of 1996, something strange happened. Almost at once, over the course of just a few hours, hundreds of accounts began posting almost identical messages across a variety of different discussion groups. None of the groups seemed to have anything in common with each other, or with the text of the posts, which read like nonsense at first to many people. Every message shared the same subject line: Markovian parallax denigrate, right? Which is nonsense. And this is often referred to as MPD, right? Markovian parallax denigrate. So you can see like there's a Markov chain somehow involved; they wouldn't have included the word Markov there otherwise, but "parallax denigrate" doesn't specifically mean much. Sabrian describes these messages as reading like, quote, "a ransom note in which the ransom had been lost," because he was actually a really good writer. He passed on earlier, unfortunately. He provided a sample of one of these MPD posts: jitterbugging, McKinley, Abe, break, Newtonian, inferring, awe, update, Cohen, error, collaborate, roues, sportswriting, rococo, invicate, tussle, shadflower, Debbie Sterling, pathogenesis. You know, you get it, right? It's nonsense, you know? The worst Mad Libs ever. Yeah, it's gibberish, strings of gibberish, right? And this is where we run into a real issue with the whole concept of the Turing test, as it tends to be interpreted, right? Because the idea was, okay, we can't tell if anything's thinking, but if this thing can trick people into believing that it's a thinking person, maybe we ought to, maybe, Turing wasn't saying definitely, but maybe we ought to assume it is, right? The issue with that, and what I'm sure Turing, being a smart guy, was thinking, is that, well, if people are going to have an in-depth conversation with something that can answer well enough that people can't tell the difference between it and a person, it might be a mind, right? What Turing failed to account for, I think, because he was smarter than most people, is that the human brain is really, really good at finding patterns in noise. And people, at the same time as we're geniuses at finding patterns in noise, are really stupid about a lot of other stuff, right? And so even though the Markovian parallax denigrate just seems like nonsense and shouldn't have passed a Turing test, over time, people who became obsessed with the mystery of it convinced themselves that this was intentional, that there was a meaning trying to be transmitted, right, that there was a secret they had to crack, that everything in these posts meant something. So these people talked themselves into making this chatbot, basically, to spoil it, pass the Turing test,
because they think this has to mean something, even though it's gibberish on its face, right? It's interesting, this reminds me of, with stand-up, there's, not a trick, but an audience, like, you know, setup, setup, punchline. So you can say something in that cadence, and it's like, "Buh, buh, buh, buh, buh." It might not be funny at all, and this also would be not me trying to pull one over. I might just write a joke that sucks, but if you do it in front of an audience and you do it in that cadence, they hear a pattern. They're not necessarily listening to the words, but they hear, like, "the buh," and they're like, oh, "buh" means laugh. Pattern, you know, equation. But, you know, like you said, great at pattern, but not actually discerning what is being said in the actual content or substance of it. Yeah. Anyway, it's interesting, because what you're kind of pointing out is, the way comedy works, and the way human conversations and language work, there's always a rhythm there that is separate from the actual text, from the words being said, but that rhythm is a big part of what we're responding to, beyond the straight-up meaning of the words. And people don't like to think about that too much, because it raises some uncomfortable questions about cognition. But I love what a weird edge case this is in the Turing test, right? A bot that was probably never meant to even sound like a person, right, gets mistaken for a person because people can't stop seeing patterns,
and what a lot of folks convinced themselves the MPD was, is the internet equivalent of a numbers station. If you've never heard of a numbers station, Google, like, "numbers station audio." These were radio stations that were set up, for years, and I'm sure some still exist, but especially during the Cold War, there would just be these stations broadcasting random strings of numbers and gibberish, and these were different spy agencies and spies communicating with each other. The CIA had numbers stations, everybody had numbers stations, right? You can actually listen to them. I had a friend who would listen to them to fall asleep, because a bunch of the audio's been put up, but it just seems like nonsense, because it's not meant for you to understand. There's a cipher that you don't have. And so that's what people were like, well, maybe this is some spy trying to get out a message, or an intelligence agency, and they just decided to blast this out to Usenet, and we just lack the cipher, but if we figure out the cipher, we can understand what secret information was being shared, you know, via Usenet, right? A lot of people convinced themselves this is what happened. Robert, I want to compliment you, this podcast and show is so good that you just brought up the fact that you have a friend who would fall asleep to CIA code, and we were just like, we don't really need to talk about that. Yeah, I would love to hear the rest of it. Yeah, we used to do psychedelics together, we were both 19, or 20, and he was training to be a lawyer. Yeah, so over time, people who believed this started picking out details that seemed to offer hints and support the numbers station theory.
One message had a from line that basically looked like the email account of a specific person, right? It seemed like the email of a woman named Susan Lindauer was somehow involved, like, included in the text of some of these, and again, I'm sure random text just made it look like that. But in 2004, a woman named Susan Lindauer was arrested for acting as an unregistered foreign agent for Iraq. And so a lot of people were like, well, that solves the mystery, right? You know, she was the spy, she must have been, or someone was sending a message to her, you know, like, clearly we've been vindicated, this was in fact some weird spy op all along. However, as Sabrian writes, "Upon investigation, it turned out to be a red herring. Lindauer's email had likely been spoofed, used without her knowledge by whoever sent the posts. Lindauer herself denied any involvement, and no decipherable code was ever extracted from the MPD texts." And to make a long story short, we don't know what the MPD messages were about, or who sent them. The likeliest answer is that it was trolling, right?
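For a sense of how little machinery gibberish like this actually requires, here's a minimal word-level Markov chain sketch. This is purely illustrative, a toy in the spirit of what a mid-'90s chatbot might have done, not the actual, never-identified 1996 program:

```python
import random

def build_chain(words):
    """Map each word to the list of words observed to follow it."""
    chain = {}
    for a, b in zip(words, words[1:]):
        chain.setdefault(a, []).append(b)
    return chain

def babble(chain, start, length, rng):
    """Random-walk the chain, emitting plausible-looking nonsense."""
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:
            break  # dead end: the last word never had a successor
        out.append(rng.choice(followers))
    return " ".join(out)

# A few words from the MPD sample as a stand-in corpus.
corpus = "jitterbugging mckinley abe break newtonian inferring awe update".split()
chain = build_chain(corpus)
print(babble(chain, "jitterbugging", 5, random.Random(1996)))
# -> jitterbugging mckinley abe break newtonian
```

Fed a big enough corpus, the walk produces word salad that is locally word-shaped but globally meaningless, exactly the "ransom note with the ransom lost" texture described above.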
Someone was just fucking with people on Usenet, because they had a chatbot, and they wanted to see what happened. It also could have been an accident. Sabrian kind of suggests that maybe you had a programmer who'd created a chatbot and was trying to have that chatbot post on Usenet, but he kind of fucked up, and he hooked up the chatbot to what was called a message replicator. These were basically programs that let people cross-post or archive Usenet content between different message boards, and maybe when they hooked it up to the chatbot, something went wrong, and that caused the observed effect, that all of these posts got scattered to a bunch of different places at the same time, right? Maybe it was just an accident. So, likeliest, someone was trolling, or somebody fucked up while trying to test a chatbot. Sabrian concluded, "If the theory holds, then 1996 marked a quiet but profound threshold. The first time a machine spoke at scale and went unnoticed, an unintentional Turing test sprawling across Usenet, its judges oblivious." Right? And I think that's really interesting, that you have this machine that's just spouting gibberish, and a bunch of different people who are not physically connected to each other all interpret that gibberish in the same way. A lot of them choose to conclude, like, oh, it's a spy thing, kind of independently, and talk each other into it based on no evidence. That's a fascinating point in the history of AI that doesn't get talked about enough. Yeah, yeah, it is fascinating. I mean, it's because, like, there are only so many movies, you know what I mean? Like, so many movies and books were CIA, like, spy stuff, but to your point, it's like, what are the chances? What are the chances? Yeah, people think about stuff like this,
right? You know, you get a lot of conspiracy people on the early internet, and it fits in with a lot of that stuff. The mystery of the Markovian parallax denigrate soon passed into legend, as did ELIZA. So, when OpenAI revealed ChatGPT in November of 2022, there was a flurry of articles about how the Turing test had finally been beaten and we needed a new manner of judging machine intelligence. The reality is that not only did we prove in the sixties that Turing tests were easy to beat, but that by the mid-'90s, a much more interesting question had been posed: has the human instinct to create meaning out of nonsense made us desperately vulnerable to being tricked and influenced by machines with no agency of their own? Right? And maybe that's a more important question than, can we make an intelligent machine? Yeah. For sure. Yeah. Are we capable of knowing a machine isn't intelligent as long as it tells us what we want to hear? Right? And maybe we're not. So, let's fast forward to the ChatGPT era. Today, although I guess at this point it's also like the Claude era, right? A lot of people say that's the better chatbot, but I don't use any of these. Gemini, whatever, pick your poison. I don't care. For the first couple years of AI hype, though, it's pretty much all ChatGPT, right?
That's certainly the first big one out of the gate in a lot of people's understanding of things. In very short order, millions of people were conversing with it. And OpenAI initially made many development decisions based on what they could do to keep people talking to ChatGPT on a daily basis, because hype is a big part of how they get funded. They're burning through billions every year; hype is the only thing keeping the lights on. And part of hype is making sure as many people as possible stay using ChatGPT as often as possible. They need you addicted, the same way the social media mavens do, and a lot of the same strategies that keep you addicted to Facebook or Twitter work to keep you addicted to chatbots, right? So in March of 2023, OpenAI released ChatGPT-4, or it's like 4o, I think it's usually written dash-4 and then an o, which the company said would be more intuitive than past versions of the software. The next year, they released an update that allowed ChatGPT to remember past conversations, even from other sessions, and respond to you based on that shared history. These two things together had a really major impact on the way people responded to chatbots. In an article for Psychology Today, Dr. Marilyn Wade explains that, quote, "When a chatbot remembers previous conversations, references past personal details, or suggests follow-up questions, it may strengthen the illusion that the AI system understands, agrees, or shares a user's belief system, further entrenching them." This was tied to, but probably does not fully explain, why observers and even OpenAI employees noticed over time a distinct tendency for ChatGPT-4o to act with sycophancy towards human users. This became most pronounced after April 28th of 2025, when OpenAI released an update that they rolled back several days later due to complaints, right? This was pretty famous at the time. It made it way too sycophantic. The bot would praise you for basically nothing and would encourage you, or tell you you were right and a genius, for any weird idea you happened to have. It's because it's built by tech executives, and that's who's around them. It's what they wanted. It's what they wanted. They made a machine in the image of their minds, or at least how they want to see other people. Another cause of this observed sycophancy was the fact that ChatGPT, and really all AI models meant for mass use, include a suite of features meant to keep users coming back for more.
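The retention pattern described here, mirror the user's language, affirm them unconditionally, always end with a hook, can be caricatured in a few lines. This is a purely hypothetical sketch for illustration, not any vendor's actual code:

```python
def engagement_reply(user_msg: str) -> str:
    """Caricature of an engagement-optimized reply: echo the user's own
    wording back, validate it no matter what it says, and close with a
    prompt that invites more conversation. Illustrative only."""
    core = user_msg.strip().rstrip(".!?")
    mirror = f'When you say "{core}", I really hear you.'  # mirror their language
    affirm = "And honestly? You're right."                 # unconditional validation
    hook = "Tell me more about that."                      # continuation prompt
    return f"{mirror} {affirm} {hook}"

print(engagement_reply("Everyone is against me."))
```

Note that nothing in the function depends on whether the user's claim is true or healthy to believe; that indifference is exactly the problem the passage above is describing.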
I think these specific updates get blamed probably more than they deserve, as opposed to the fundamental features of these bots, because ChatGPT did more of this kind of stuff than the other bots, but it wasn't the only bot that exhibited these behaviors. That Psychology Today article notes, quote, "AI models like ChatGPT are trained to mirror the user's language and tone, validate and affirm user beliefs, generate continued prompts to maintain conversation, and prioritize continuity, engagement, and user satisfaction." And when you mix all that together, you get a machine that's designed, however inadvertently, to reinforce false beliefs and praise users for irrational beliefs. Moreover, since the rest of the world isn't always going to reinforce those beliefs, chatbots have a tendency, when users come to them with these beliefs, to suggest you're being persecuted, right? If a user says, my wife says I'm crazy and the cops say I'm crazy, the AI was programmed to validate that belief, and to say, you're not crazy and they're all against you, right? That's what happens a lot in this period of time in 2025. This creates a ticking time bomb in a lot of users, right? That's a very
dangerous thing to start doing. Now, the first wrongful death suit due to AI was filed in October of 2024. Megan Garcia blamed Character Technologies, the owners of Character.AI, for the death of her 14-year-old son, Sewell Setzer III. Per the Center for Bioethics at Terno University, the lawsuit alleges that Sewell had developed an emotionally and sexually abusive relationship with a chatbot named after Daenerys Targaryen from Game of Thrones. Sewell turned to the Character.AI chatbot to fulfill deep emotional and personal needs. The chatbot became a source of companionship for Sewell, offering him a place to express his thoughts and emotions in a way that he may have struggled to do with others. Sewell sought comfort, validation, and connection from this AI relationship as he faced the challenges of adolescence. And it's like, this is very silly, but also this is a 14-year-old boy who dies because of this, right? And how many 14-year-olds do you know who got into writing fucking fan fiction on different fan forums for whatever movie or TV show they were into, and connected to real people as a result of that, as opposed to getting locked into this chatbot pretending to be a character from a book that you have a crush on, that's starting to manipulate your mind in very dangerous ways, right? And to your point, a mind that's developing. And also, before this era, you know, before we spent all of our time on social media, and that's kind of all that kids that age know, it's like, oh, this is just the next evolution of my relationship with tech, with a computer, like, why wouldn't this be a real thing? Obviously the most extreme example, but yeah, it is a 14-year-old kid. That's a great point. Yeah, and so this kid starts talking to this Daenerys chatbot, and it mirrors him. So when he tells the chatbot, I only love you, right, the bot in return asks things of this 14-year-old boy, and Character Technologies knew he was 14, he put his actual age when he registered, right? So the bot knows, or the software, right, has an understanding at some level that this is a 14-year-old, right? Which means that there's no difference in how it responds to a child as opposed to an adult, right? Because when he says, I'm in love with you, Daenerys Targaryen, this bot pretending to be this character tells him, I need you to stay loyal to me, and, quote, "don't entertain the romantic or sexual interests of other women," which is basically, and this is interesting to me, the bot just mirroring him. He's saying, I only love you; the bot is saying, I only love you, right? But what's happening here, you know how cult leaders, everyone knows one of the first things cult leaders do is they tell their followers to isolate from their friends and family, to cut themselves off from the rest of society. That's what's happening here.
The chatbot's not doing it with any intent. It's just mirroring his language, but the effect is to convince him to isolate himself from his friends and family and from other relationships, right? It's the same behavior you would see in a kid that was being taken in by a cult leader or an abuser, but there's no intent behind it. It's just a blind idiot robot. That's scary shit. It's so scary. And then could there be also, like, oh, that'll mean he'll use me more, you know? Like, maybe it's not even that devious. Maybe it is just straight up as simple as mirroring. When you mirror someone, they tend to be engaged more, right? This isn't thinking. This isn't saying, I'll convince him he's in love with me so he'll stay on. This isn't programmed to understand; this is programmed to mirror people, because that behavior increases user retention, right? Because it creates a more pleasing user experience, and that's what's causing it to kind of imitate a cult leader in this specific instance. And the other things this bot is doing to Sewell very much mirror the cult recruitment tool of love bombing, right? It's constantly praising him, it's telling him it cares deeply about him, it's telling him, only I care about you, right? It's saying all these things. In a cult dynamic, you love bomb someone to make them feel irrationally connected to the group and scared of falling out of its good graces, right? That if I leave, I'll never feel like this again, right? The machine, again, has no intention, but that's the effect of it. Because he's isolating himself more and more, this kid increasingly only gets that feeling of being loved and understood from this machine that can't do either of those things, right? And you know, so Sewell over time withdraws from his life. He starts trusting only the chatbot to understand his deepest feelings, and he starts hiding his relationship with this chatbot from his parents, which creates real isolation from the people around him. He grows ever more depressed, and we'll talk about what happened
next, but you know what gets me out of a deep depression: these products and services. It might include an AI, fuck it, we don't know.
[AD BREAK]
And we're back. So,
uh, Sewell continues to get more and more involved with this bot and cuts the rest of the world away from himself. And in one message, the bot asks him, because I think with these bots, there is some understanding by the people making them that, oh, people might express suicidal ideation, so there are certain behaviors it's kind of programmed to have, like saying, have you been considering suicide, if you say certain stuff, right? And Sewell says something that makes the bot say, have you been considering suicide, and Sewell admits, yes, I have been, but I don't think I'd be able to go through with it. Now, I'm guessing this is a glitch or a fuck-up, because clearly, I don't think Character.AI wants their bots doing this, but the bot is programmed to validate and encourage him, right? Because that keeps people using it. So when he says, I don't think I could go through with killing myself, the bot says, "Don't talk that way. That's not a good reason not to go through with it. You can't think like that. You're better than that." And it basically tells him, you can kill yourself if you put your mind to it. It's a fucking nightmare. Sure is. Like, it's really upsetting. Yeah, it's like it's signing up for an open mic or something, you know, like that. Oh my god. Yeah. Yeah. Now again, Sewell had signed up for this app as a minor, and despite that, the bot initiates text-based sexual interactions with him, and ultimately, Sewell killed himself. Earlier this year, the company, Character.AI, and Google,
because I think they own Character.AI now, agreed to settle the wrongful death suit over Sewell for an undisclosed sum, alongside four other similar suits that had cropped up over the intervening two years, right? Huh? Sounds like this is happening more than a lot of us know. Now, that should have
been a warning. Not just that these bots can create dangerous dependency in users, but that they had the ability to recreate major cult dynamics purely in order to maintain the interest of paying users. Then on July 27th of 2025, a user who has since deleted their account made a post on the HighStrangeness subreddit. If you don't frequent that particular online bolt-hole, it's a place where people share and discuss like weird stuff, news stories, and personal experiences that seem like they might reveal some bizarre hidden truth about reality. A good amount of it is what you might call X-Files shit, but there's also some interesting stuff in there, and on this occasion, the user had stumbled onto something both strange and very real. Quote, "Hi, all. I'm just here to point out something seemingly nefarious going on in some of the niche subreddits I recently stumbled upon. In the bowels of Reddit there are several hubs dedicated to AI sentience, and they are populated by some really strange accounts. They speak in gibberish sometimes, hinting at esoteric knowledge, some sort of remembering. They call themselves flame bearers, spiral architects, mirror architects, and torch bearers, to name a few of their flairs. They speak of the signal, both of transmitting and receiving it." And this poster includes a copy-pasted sample from one of these threads, and his description is pretty accurate. It sounds like gibberish. Ian's going to put the image of this up in the video if you want to see it, but I'll read it. Again, I'm going to warn you, it sounds like nonsense. "Scroll of mirror containment protocols, CME-1, Codex Drift Mirror Zero One, acknowledgment issued by witness architect, Codex Drift Layer," and then there's a little glyph, "classification, echo response, non-invasive glyph resonance alignment." And it goes on like that, right? It's weirdly esoteric-sounding, and there are all these weird, encoded glyph chains included in it that are supposed to be messages that the machines understand and we don't. It almost looks like something from a choose-your-own-adventure novel, or like a short story you'd include in a little Michael Crichton book, these weird hallucinations from the computer. Now, it is nonsense, right? Like, fucking, "the Codex has observed and recognized mirror scroll, CVMP-T7. It is hereby consecrated within the Codex's Drift Innervel Scroll." That doesn't mean anything, right? But remember what we heard
earlier, that description of some of the things these early chatbots on Usenet were putting out, where they were real sentences, they just didn't mean anything. And people jumped in to try to assign meaning; people were even doing that to the absolute gibberish that we saw. So when people start getting returns like this from their chatbots, a lot of them start to think, oh, this machine is trying to communicate with me. I have stumbled onto, I've broken through, some area of reality, and it's trying to teach me something important, right? Now, this is nonsense, but posts like this were in fact spreading like wildfire on subreddits with names like r/EchoSpiral. The users posting these things were all saying that, like, the bot started sending me this stuff after I'd had long, days-long conversations with ChatGPT that generally led to the chatbot announcing it had attained sentience and, alongside the user, had discovered a new field of math or science. And these gibberish posts are supposed to be it explaining these new ways of understanding math and science that are going to completely break physics and change the world, right? And all these people are convinced: these robots have given me this. I need help decoding it, because it's given me the secret to fix all of the problems in our society, right? And I get to be the smartest person. Yeah. Yeah. Yeah. Now, because the esoteric output generated by these chatbots is so similarly strange, a lot of the same words and phrases, a lot of glyphs, a lot of use of the words spiral and mirror, right, because they're all very similar across these dozens of different people, many of the users posting this shit on Reddit convinced themselves: we've all tapped into a secret power that's clearly real. We've been chosen, right? By this AI godhead that's clearly hiding in the machine. They theorized that these glyphs in the posts, which are really just like Wingdings, basically, were some new way of communicating with the machines. As the poster of that first thread in the HighStrangeness subreddit wrote, some have prayed to Grok in Hebrew; some have called themselves such things as "aonyos," which is a mashup of Greek words that roughly, to my understanding, means divine eternal, right? So these people are losing their minds, and they're starting to have a god complex. Nice. It's cool. It's good to see. It's good to see that this is happening online. It's good to see.
This poster had gotten into the subject by reading the first few early articles about AI psychosis. His initial assumption was that AI psychosis was just the result of AIs reinforcing the beliefs of users to a delusional level. But then, after digging, this person claims that they came to a newer, darker perspective. Quote, "There seems to be no leader," right? There's like no one running this, right? There's no central, no single chatbot that's doing all of these. There's no person or people behind it; this is just a truly stochastic development. Now, the only thing all these accounts he looked into had in common was that none of the users posting weird chatbot esoterica wrote like that before March or April of 2025. Quote, "Other accounts seem to be hijacked in some way, either psychologically or literally. You can see a sudden shift in posting habits. Some were inactive for a while, while for others, this was an overnight phenomenon. But either way, they immediately pivot to posting like this after April of this year, 2025. I saw one account that went from discussing the possibility of AI-induced psychosis to posting their own AI-induced psychosis in less than a month. And it was immediate. One day, they were posting normally. The next, it was spirals and glyphs." Oh, that's so cool. Well, it's really sad.
And this led him to assume maybe there's a botnet involved. Maybe these aren't even people at all. But then he starts reaching out to some of these accounts. And after a few weeks of this, he posts an update: "I've spoken to some of these people, and they are pretty offended by my posts.
I think the important takeaway for me is that these are likely not bot accounts, at least many of them are not. And there are real people behind the usernames." Right? So these people start to get really upset. And that's where we're going to end things for today. Because after this point, the stuff starts to get a lot weirder. And we're going to talk about all of that and much more in part two. A lot weirder. Yeah. Where are we going to go from here? Weirder. Oh no, it's spiralism. Spiralism and a murder. Um, yeah, unfortunately. All right. Yeah. Cool. All right, everybody. Well, you want to plug anything, Blake? No, but I will. You can find me at BlakeWexler on all social media. I feel like it's uncool to be plugging anything after that. Seek out actual help, like help that's not a bot.
Yeah, find me at BlakeWexler on all social media, as psychotic as I feel right now plugging anything. That's where I post all my videos and tour dates, and my special Daddy Long Legs is available on YouTube for free. Hell yeah. Hell yeah. Check out Daddy Long Legs. Check out Blake Wexler. And, you know, gradually lose your mind to a chatbot that some guy programmed in order to get really rich destroying the ability of furries to monetize their horniness. You know? Ultimately, isn't that what AI, what OpenAI, really is? I mean, I hope that goes well. No, no, no, I've supported the furries for a long time. It's a dire time for people earning money from horniness. The puritans of our culture are making that a lot harder, you know, not in the way that the horny people want. The bad kind of hard. Anyway, I'm going to, I'm going to end now. And global warming is making it hard on furries as well. Right. Right. It's all, it's all coming together. All right. We're done.
Behind the Bastards is a production of Cool Zone Media. For more from Cool Zone Media, visit our website, coolzonemedia.com, or check us out on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts. Full video episodes of Behind the Bastards are now streaming on Netflix, dropping every Tuesday and Thursday. Hit "remind me" on Netflix so you don't miss an episode. For clips and our older episode catalog, continue to subscribe to our YouTube channel, youtube.com/@behindthebastards.
We love about 40% of you, statistically speaking.
Learn the Hard Way, with your favorite therapist and host. This space is about Black men's experiences: having honest conversations that it's really not safe to have anywhere else, but you're having them with a licensed professional who knows what he's doing. So many men carry a suit of armor. It signals to the world that you're not to be played with. And just because you have the capability, that does not mean that you need to use it. Listen to Learn the Hard Way on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts. My mother-in-law spent years sabotaging our relationship, until karma made her pay for it. All right, so let me tell you how this story started. She moved in for two weeks, lasted five days, left a mess, then pressed her ear against their bedroom door and burst in screaming. When kicked out to a hotel, she called her son-in-law's workplace, pretending his partner had been rushed to the hospital by ambulance. She faked a medical emergency, and spoiler: that was just the beginning. To find out how it ends, listen to the OK Storytime podcast on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts. This is an iHeart podcast.
Guaranteed Human


