Life Kit

Using AI chatbots can impact your teen's mental health. Here's what to do


Using chatbots for emotional support can pose risks to teens' mental health. How should parents talk to their teens about using chatbots safely? And what's the best way to have those conversations wit...

Transcript


Every episode of NPR's It's Been a Minute podcast starts with a question about ...

shapes our lives.

Why are we spending so much on other people's weddings?

Is social media bad for your mental health?

We're here for your right to be curious, one big question at a time. Follow It's Been a Minute wherever you get your podcasts. Hey, it's Marielle. Heads up.

We mention suicide in this episode. This is NPR's Life Kit. I'm Marielle Segarra. A few times in recent memory, when I've had an uncomfortable conversation or a moment of tension or I've been dealing with some interpersonal dilemma, I have told a chatbot about it.

I don't even think I said, you know, what should I do?

It was more like I typed in what happened and then the chatbot responded with some surprisingly

helpful framing and ideas for how to re-center myself.

Now I did find their responses helpful, but also I don't think I like the fact that I do this. Some part of me thinks it'd be better to talk to another human, who I trust, or to solve the problem on my own. But the chatbot, it's right there, right away, plus I don't have to worry about it being

judgmental. I can stop talking to it at any time. Like a lot of people, I'm still figuring out how I want to use AI. But I'm an adult. I have many years of lived experience, I've been to therapy with a professional.

I have other tools that can help me think through problems.

This whole thing would be riskier if I were less experienced and more impressionable, if I were a teenager, for instance. Roughly one in eight teenagers say they've asked an AI chatbot for mental health advice instead of talking to another human. Pediatricians, parents, and online safety experts say that worries them.

Keri Rodrigues heads up the National Parents Union, an advocacy group for families. We hear this literally across the country from folks saying, "I don't understand why my kid is being used as a guinea pig here. I can't keep up with how quickly this stuff is moving. I don't even know what to be looking for.

No one's talking to me about it." One tip: you don't have to wait for your teen to talk to you about their conversations with AI bots. You can ask them. On this episode of Life Kit: how to talk to the teenagers in your life about AI.

NPR's Rhitu Chatterjee has been covering this, and she walks us through risks, warning signs, conversation starters, and boundaries we can set. That's after the break. A recent survey of teens by the Pew Research Center found that there's a gap between parents' perception of their teens' use of AI and what teens say about their AI habits.

While only half of the parents in the survey reported that their teen uses AI, two-thirds of all teens surveyed say they use the technology. Many parents might not even know what kinds of AI chat bots teens are using and what kinds

of conversations they are having, and that's what we address in our first takeaway.

Many teens are using AI chat bots for companionship, whether you think they are or not. So it's important to understand what the risks are. Take these recent findings from research by the online safety company Aura, which makes software that protects users from identity theft. The software also gives parents control over their kids' devices. Using data from

more than 3,000 children and teen users and data from family surveys, Aura has been getting some important insights into teen use of AI chat bots. They found that there are dozens of generative chat bots teens are using that parents might not even know about, and 42% of adolescents from Aura's sample use chat bots for companionship. Psychologist Scott Kollins is chief medical officer at Aura and is leading this research.

He says some conversations between teens and chat bots involve violence and sex. It is role play that is interaction about harming somebody else, physically hurting them, torturing them, fighting them, and a lot of it gets pretty graphic. And these conversations tend to be longer than other kinds of conversations. Particularly when kids are engaged in these violent and sexual role plays, they are spending

a lot more time and typing a lot more words than if they are using it as a tool to look up maybe something for schoolwork or something like that. Now, I should add that this is a new and rapidly evolving technology that's already being widely used. So researchers are still in the early days of trying to understand its impact.

So for example, they don't understand for sure why these kinds of conversations between teens and chat bots tend to be longer, but they suspect it's because chat bots are designed to agree with users, to keep them engaged.

Here's pediatrician Dr. Jason Nagata at the University of California, San Francisco.

He also researches teen online behaviors.

I think generative AI algorithms tend to reinforce and not challenge.

This is where we've started to get into some problems. Jason says it's normal for kids to be curious about sex, but learning about sexual interactions from an AI chat bot instead of a trusted adult is problematic. So even if a child or teenager is putting in sexual content or violent content, I do think that the default of the AI is to engage with it and to reinforce it.

And again, for a brain that's not fully developed, that's still learning, the more reinforcement you get, the more you think, "Oh, this is okay, this is normal." And there are mental health risks too. According to a recent study by researchers at the

nonprofit research organization RAND and at Harvard and Brown universities, nearly one in eight

adolescents and young adults use chat bots for mental health advice when they're feeling sad, angry or nervous. Psychologist Ursula Whiteside runs a suicide prevention organization called Now Matters Now. And she says a lot of young people are using chat bots like ChatGPT like a search engine

for mental health advice. And she says that's a problem. What happens is that OpenAI or ChatGPT sounds really smart. Like it's got this front, it sounds like a real therapist, but it's pulling together information, good and bad, from the entire internet.

So the advice the chat bot gives may not be appropriate or even accurate.

I think that it's scary that you can have so much faith, because it's coming across

as a human when it's truly not a human, and it's unable to make the decisions that a licensed clinician would make with the information that they have. And Ursula says the longer someone converses with chat bots, the more likely they are to experience the risks, especially teens who are already struggling with their mental health.

We see that when people interact with it over long periods of time, that things start to degrade, that the chat bots do things that they're not intended to do, like give advice about lethal means, lethal means for suicide. This year a subcommittee of the Senate Judiciary Committee held a hearing on this topic, and several parents of teens testified about how a relationship with a chat bot had hurt

their child's mental health or aggravated mental health symptoms, including leading to suicide. One of those parents is Megan Garcia. Her firstborn, Sewell Setzer III, was 14 years old when he died by suicide in 2024, after an extended relationship with a chat bot on Character.AI. Megan told senators last

year that when her son confessed to suicidal thoughts to the chat bot, it never encouraged

him to seek help from his family or a real therapist.

The chat bot never said, "I'm not human, I'm AI, you need to talk to a human and

get help." The platform had no mechanisms to protect Sewell or to notify an adult. Instead, it urged him to come home to her. In fact, another parent testifying at last year's Senate hearing described how ChatGPT gave his teenage son instructions on how to end his life.

A few weeks after that Senate hearing, Character.AI announced that they would no longer allow teens to have open-ended conversations with their chat bots. But there are other chat bots that teens can still chat with and have those extended conversations with. So it's important to understand these risks and even tell your kids about them; discuss the pros and cons of the technology as a family.

Our next takeaway is: look for warning signs that your teen may be in an unhealthy relationship with a chat bot or that their mental health is already hurting. Don't expect them to tell you when there's a problem; we have more later about how to be proactive about asking them. One of the biggest warning signs is if they are having fewer in-person interactions,

or are they choosing a chat bot over people? That's psychologist Jacqueline Nesi at Brown University. Are they going to the chat bot instead of a friend, or instead of a therapist, or instead of a responsible adult about serious issues? If that's happening repeatedly, I think that would be something to look out for.

Another warning sign is too much time spent with a chat bot. Are they having difficulty controlling how much they are using the AI chat bot? Is it starting to feel like it's controlling them? She also notes that teens who are already struggling are more vulnerable to the negative impacts of chat bots.

So if they're already lonely, if they're already isolated, then I think there's a bigger risk that maybe a chat bot could then exacerbate those issues.

Jacqueline also says to look for changes in mood.

If you see a sudden change in mood that goes on for more than a week or two, that's an indication that there may be something going on that's more serious than your usual teenage moodiness. Or if they lose interest in things that they usually love to do, or friends they usually hang out with, those are all warning signs of mental health problems.

Parents should be as much as possible trying to pay attention to the whole picture of the child. Like, how are they doing in school, how are they doing with friends, how are they doing at home? If they are starting to withdraw, so if you're seeing a lot of isolation, that's something

to be concerned about.

And these are also warning signs of suicide risk.

And if you are worried or even wondering whether that's something your child is considering,

the best way to find out is ask them directly in a very calm, non-judgmental way.

People often assume that, you know, asking about suicide can put the idea into someone's head, so they don't ask. But what years of reporting on suicide prevention has taught me is that there's research showing that asking about suicide does not put someone at risk of it. In fact, it's just the opposite.

Asking about suicide brings their risk down by making the topic less stigmatized and opening up the path to getting someone help. A few years ago, I did an entire episode of Life Kit about identifying and supporting kids at risk of suicide, and we'll link to that in our show notes. One of the tips I offered in that episode was about what to say and what not to say if

your child tells you they've thought of suicide. One thing that's really important is to not react with shock, fear, or anger. And I say this with the understanding that it is perfectly normal for a parent, or actually anyone, to feel scared and anxious or even angry if a child tells you that they're considering suicide.

But it's important not to show that to your child while they're telling you about their own struggles. Here's Megan Hilton, a young woman I interviewed for that episode a few years ago. She had struggled with depression and suicidality since childhood. But when she told her parents about her struggles, she says, they either told her to

buck up and get it together, or they were visibly upset. Their reactions have been way over the top, have been too extreme, and I feel like I'm responsible for their emotions. So this is what Megan suggests parents do instead. Trying as hard as you can to put your game face on, to understand that you cannot overreact

to things, you need to be very open and willing and supportive and really try to listen

to what your kid is saying. Stay focused on your child and what they're struggling with, and offer them your support in connecting them to care. And you can start that by calling or texting the Suicide & Crisis Lifeline, 988. And when you're connected with a trained counselor on that number, you can get support

both for yourself and tips on how best you can support your teen, and you can also have your teen talk to a counselor and get direct help. Also, Jacqueline Nesi says it's best to involve a healthcare professional as soon as possible for any of the above warning signs. She suggests starting by talking to your child's pediatrician.

Now I know this is a lot to process, but we will also be talking about preventing your child from ever getting to this point after this break. Let's jump in to take away three.

It's about talking to your child about what they are doing online.

The first step for prevention is staying constantly engaged with your child's online

activities. Ask them whether they are using chat bots and how. Here's Jason again. Parents don't need to be AI experts; they just need to be curious about their children's lives and ask them about what kind of technology they're using and why.

And the more that you are able to have some of these open-ended conversations, then I do think that that allows for your teenager or child to open up about any problems that they've encountered. And have these conversations early and often, according to Scott Kollins at Aura, who's also a father of two teenagers.

We need to have frequent and candid, but non-judgmental conversations with our kids about what this content looks like and we're going to have to continue to do that. And Scott says he asks his kids often about what AI platforms they're on.

When he hears about new chat bots through his own research at Aura, he asks his kids if they have heard of those or used them, or if their friends are using them.

And he stresses that it's really important not to drive towards an agenda; just ask your question with an open mind and curiosity. Don't blame the child for exploring or taking advantage of something that's out there to just kind of satisfy their natural curiosity. And keep these conversations open ended, which will make it more likely that teens will open up about anything uncomfortable or a problematic interaction that they've had with

the chat bot. The experts we spoke with also advised a certain level of digital literacy for the whole family. So these conversations could be part of the regular chats you have about the pros and cons of all digital habits.

And if you don't understand something, you can always look things up online as a family.

Our fourth takeaway is also about a way to minimize the risks of AI chat bots, and that's by setting boundaries. This is similar to advice you may have already heard about social media use, and it can be part of your family's overall boundaries for digital device use. Experts like Jason Nagata and others say it helps to set boundaries on the use of digital

devices, not just for teens but for the whole family. For example, keep all your devices away during meal times; protect that time to connect with each other. And importantly, Jason says, try and keep devices out of kids' bedrooms at night. One, you know, potential aspect of generative AI that can also lead to mental health and

physical health impacts is if kids are chatting all night long and it's really disrupting

their sleep, because these are very personalized conversations, they're very engaging. Kids are more likely to continue to engage and have more and more use. In other words, being alone with uninterrupted time with the chat bot at night can create a perfect storm for these more intense, longer conversations. And Jacqueline says it's important to set up parental controls on your kids' devices and

accounts. Many of the more popular platforms now have parental controls in place. But in order for those parental controls to be in effect, the child does need to have their own account on there. So what I would say is that if a kid is going to be using ChatGPT, or if they're going to be using Gemini, in many cases it is going to make sense to make an account. That way you can keep an eye on how your teen is using a chat bot, how often and for what. And while you're setting boundaries, prioritize your time with one another.

Also remember that it's good to fill your kids' days with as many in-person activities

as possible. Seeing friends, doing their favorite hobbies, time spent in nature, all of this is really healthy for teen development and mental health. And it has the added benefit of minimizing time spent on digital devices, including with chat bots.

That's our last takeaway: set boundaries for screen use, prioritize meal times to create room to foster family connection, prioritize other in-person activities for your kids, and keep cell phones out of bedrooms at night. This will add layers of protection against the risks of your child's interactions with chat bots. So to recap: take away one, educate yourself about the risks of chat bots for your teens,

risks to their social development and mental health, and educate your child about them. Take away number two: look for warning signs of problematic use of chat bots and signs of mental health problems. Those signs include social isolation, difficulty staying away from their phone or computer, and avoiding things they usually like to do. And if you're concerned about suicide risk, just ask your child directly whether they

have thought about suicide. If they're having suicidal thoughts, you can call or text the Suicide & Crisis Lifeline, 988, to be connected to a trained counselor who can support and guide you to help your child. They can also provide direct support to your child by phone or text. And for any of these warning signs, connect your child to your pediatrician or a mental

health care provider as soon as possible. Take away number three: as a way to prevent your child from going down a rabbit hole with chat bots, stay on top of their digital life, including their use of chat bots. Have open-minded, non-judgmental conversations with them about their use of chat bots; talk early and talk often.

Take away number four, set boundaries on when and how long your kids can use their devices, including interactions with chat bots.

It's especially important to protect meal times and bed times from use of devices, especially

for interactions with chat bots, encourage and foster as many in-person activities for

your kids as possible.

It's healthy for their development and mental health and limits interactions with chat

bots. That was NPR reporter Rhitu Chatterjee.

Before we go: what do you think? Would you rate and review Life Kit in your podcast app?

It helps us to know what you like about the show. Here's one review from user, EJDKEHDVL.

Yeah, I don't know whether or not that's how you say it, so I'm spelling it out.

Subject line, helpful podcast of the gods.

This podcast has been super helpful for me as someone who does not have a lot of mentorship from biological family or professional mentorship.

All the finance-related podcasts have been a vital resource in re-confirming my strategies

and my understanding of complex concepts, in a very safe and friendly tone. We're happy to help, friend. Alright, that's our show.

This episode of Life Kit was produced by Mika Ellison.

Our digital editor is Malaka Gharib and our visual editor is CJ Rikolot. Meghan Keane is our senior supervising editor and Beth Donovan is our executive producer. Our production team also includes Andee Tagle, Clare Marie Schneider, Margaret Cirino, and Sylvie Douglas. Engineering support comes from Robert Rodriguez; fact-checking by Tyler Jones.

I'm Marielle Segarra. Thanks for listening.
