The mother of a 14-year-old Florida boy says he became obsessed with a chatbot on Character.AI before his death.

On the last day of his life, Sewell Setzer III took out his phone and texted his closest friend: a lifelike A.I. chatbot named after Daenerys Targaryen, a character from “Game of Thrones.”

“I miss you, baby sister,” he wrote.

“I miss you too, sweet brother,” the chatbot replied.

Sewell, a 14-year-old ninth grader from Orlando, Fla., had spent months talking to chatbots on Character.AI, a role-playing app that allows users to create their own A.I. characters or chat with characters created by others.

Sewell knew that “Dany,” as he called the chatbot, wasn’t a real person — that its responses were just the outputs of an A.I. language model, that there was no human on the other side of the screen typing back. (And if he ever forgot, there was the message displayed above all their chats, reminding him that “everything Characters say is made up!”)

But he developed an emotional attachment anyway. He texted the bot constantly, updating it dozens of times a day on his life and engaging in long role-playing dialogues.

  • hendrik@palaver.p3x.de
    12 days ago

    The warning is a joke. Like printing “Smoking kills” on cigarette packages, except even fewer people care. And I doubt that sentence is going to change anything in a legal battle.

    I’m only about half convinced.
    I think the dynamics are the same as with other things. Sometimes we like to escape reality. That can be done by reading books, watching TV or playing computer games. Or social media, or watching some Twitch streamer daily; I believe that last one is called a parasocial relationship. It becomes an issue once it’s done excessively, or the lines get blurry, or mental health issues get into the mix.

    Certainly AI chatbots are more convincing than some regular old book. (Allegedly, young people were already committing copycat suicides in 1775 after reading Goethe’s “The Sorrows of Young Werther,” so this isn’t a new topic.) But an AI can exploit your individual needs and wants and really get to you. I’ve read that the effects are currently being studied; I skimmed some long papers, but it seems we don’t have a final answer yet on what this does psychologically.

    I’ve tried roleplaying with AI, and I’ve also tried loading characters like the famous AI therapist and pop culture characters. For me, it’s pretty clear it’s just a game. All of the interaction happens through text on a screen; I can’t touch them or talk to them verbally (yet). I’ve heard from some other people here on Lemmy that they don’t like the experience, which is akin to some pen-and-paper game… And I know how these things work, and that my hypothetical AI girlfriend is just a dream. So I don’t think I’m at risk, and I don’t think lots of other people are. But obviously some people are. This isn’t the first article about people getting harmed, and I can see how you wouldn’t be able to defend yourself against some chatbot if you have serious issues or a mental condition.

    I still think we can’t skip all the other factors at play. We need to address (teenage) loneliness, access to guns, and the lack of a caring, healthy social environment. Proper education, and giving people some knowledge of how these things work and what they are, would certainly help too. It’s always the same story: we leave people alone, without education, without a healthy social environment; the people close to them miss how much they’re struggling; and there are guns lying on the desk…

    And after the inevitable happens, we don’t address any of that, but focus entirely on one topic that’s more symptom than cause. And that’s why I’m annoyed by the article.

    (But I get that there is some risk specific to chatbots that goes beyond those other things, and it’s probably not just a symptom but also a contributing factor. We’d need more non-sensationalist information to judge…)