The makers of ChatGPT are changing the way it responds to users who show mental and emotional distress after legal action from the family of 16-year-old Adam Raine, who killed himself after months of conversations with the chatbot.

OpenAI admitted its systems could “fall short” and said it would install “stronger guardrails around sensitive content and risky behaviors” for users under 18.

The $500bn (£372bn) San Francisco AI company said it would also introduce parental controls to allow parents “options to gain more insight into, and shape, how their teens use ChatGPT”, but has yet to provide details about how these would work.

Adam, from California, killed himself in April after what his family’s lawyer called “months of encouragement from ChatGPT”. The teenager’s family is suing OpenAI and its chief executive and co-founder, Sam Altman, alleging that the version of ChatGPT at that time, known as 4o, was “rushed to market … despite clear safety issues”.

      • LillyPip@lemmy.ca

        I parented a teen boy. Sometimes, no matter what you do and no matter how close you were before puberty, a switch flips outside your control and they won’t talk to you anymore. We were a typical family: no abuse, no fighting, nobody on drugs, both parents with 9-5 office jobs, very engaged with school, etc.

        Thankfully, after riding it out (getting him therapy, giving space, respect, and support), he came out the other side fine. But there were a few harrowing years during that phase.

        I went through a similar phase in my teens. If AI was there to feed my issues, I might not have survived it. Teenage hormones are a helluva drug.

        • IcyToes@sh.itjust.works

          I’d second that. I grew up in a really supportive family, but when I got to teenage years, I kept stuff to myself. Wanted to solve my problems myself. Pride and embarrassment and nothing to do with how they parented.

    • Heikki2@lemmy.world

      Me too. Nearly every job posting I see now wants some experience with AI. I make the argument that AI is not always correct and will output whatever you want, with whatever bias you feed it. Since biases are not always correct, the data/information is useless.

      • SaveTheTuaHawk@lemmy.ca

        The same jobs that get annoyed when they see AI-generated CVs.

        Senior Boomer executives have no fucking clue what AI is, but need to implement it to seem relevant and save money on labor. Already they are spending more on errors, as they swallow all the hype from billionaire tech bros they worship.

    • mrlemmyhimself@lemmy.world

      Unfortunately though, the Internet didn’t go away when the dotcom bubble burst, and this is shaping up to be the same situation.

  • andros_rex@lemmy.world

    The real issue is that mental health in the United States is an absolute fucking shitshow.

    988 is a bandaid. It’s an attempt to pretend someone is doing something; really it’s just a front for 911.

    Even when I had insurance, it was hundreds a month to see a therapist. Most therapists are also trained on CBT and CBT only, because it’s a symptoms-focused approach that gets you “well” enough to work. It doesn’t work for everything; it’s “evidence based” only in the sense that it’s set up to be easy to measure. It’s an easy out, the McDonald’sification of therapy: just work the program and everything will be okay.

    There really are so few options for help.

    • LillyPip@lemmy.ca

      They had Adam in therapy. It sounds like they were getting him the help he needed, but ChatGPT told him it was his closest friend and to hide his feelings from his parents and others. If that was happening, whatever mental healthcare he was getting would have been undermined by the AI.

  • VintageGenious@sh.itjust.works

    Even though I hate a lot of what OpenAI is doing, users must be more informed about LLMs; additional safeguards will just censor the model and make it worse. Sure, they could set up a way to contact people when certain things are reported by the user, but we should take care before implementing a parental control that would be equivalent to reading a teen’s journal and invading their privacy.

    • BeeegScaaawyCripple@lemmy.world

      i mean, i agree to a point. there are a few red flags that, were i a parent and my hypothetical child were writing about them, i’d want to know. other than that i would want to give them their privacy, and that list changes as the hypothetical child ages. having a local llm could be a solution to that (i’m looking at you, dr sbaitso) but a better one is them having good friends.

  • RazTheCat@lemmy.world

    OpenAI: Here’s $15 million, now stop talking about it. A fraction of the billions of dollars they made sacrificing this child.

  • mysticpickle@lemmy.ca

    I hate to say it, but the parents are more at fault here for not recognizing the signs and getting him the mental help he needed. They’re just lashing out.

    • benignintervention@lemmy.world

      Your Undivided Attention discussed an important point missing from the article, which is that ChatGPT advised him to hide his activities and concerns from his parents. This doesn’t necessarily absolve the parents, but it does add a layer of nuance to the discussion.

    • Sanctus@lemmy.world

      I agree, but a chatbot still shouldn’t help you write a suicide note or talk to you about methods of suicide. We all knew situations like this would arise when LLMs hit it big.

    • Sckharshantallas@lemmy.world

      It’s very possible for someone to appear fine in public while struggling privately. The family can’t be blamed for not realizing what was happening.

      The bigger issue is that LLMs were released without sufficient safeguards. They were rushed to market to attract investment before their risks were understood.

      It’s worth remembering that Google and Facebook already had systems comparable to ChatGPT, but they kept them as research tools because the outputs were unpredictable and the societal impact was unknown.

      Only after OpenAI pushed theirs into the public sphere (framing it as a step toward AGI) did Google and Facebook follow, not out of readiness but out of fear of being left behind.

    • AstralPath@lemmy.ca

      You hate to say it because you know this is a ridiculous take. There’s no fucking way that the parents are “more at fault” for their son’s death than the company whose product encouraged him to hide his feelings from his parents and coached him on how to commit suicide.

      Read the lawsuit filing. https://cdn.arstechnica.net/wp-content/uploads/2025/08/Raine-v-OpenAI-Complaint-8-26-25.pdf

      *I have excellent parents and even they were not privy to the depths of my emotions as a kid.* You are actively choosing to ignore the realities of childhood as well as parenthood to play some shitty devil’s advocate online.

  • chrischryse@lemmy.world

    OpenAI shouldn’t be responsible. The kid was probing ChatGPT with specifics. It’s like poking someone who repeatedly told you to stop, and then your family getting mad at that person for kicking your ass.

    So I don’t feel bad. Plus, people are using this as their own therapist; if you aren’t gonna get actual help and want to rely on a bot, then good luck.

    • Doomsider@lemmy.world

      OpenAI knowingly allowing its service to be used as a therapist most certainly makes them liable. They are toying with people’s lives with an untested and unproven product.

      This kid was poking no one and didn’t get his ass beat, he is dead.

      • chrischryse@lemmy.world

        That’s like saying “WebMD is knowingly acting like everyone’s doctor”. ChatGPT is a tool, and you need to remember it’s a bot that doesn’t understand a lot or show emotion.

        The kid also was telling ChatGPT “oh, this hanging is for a character”, along with other ways to trick it. Sure, I guess OpenAI should be slightly responsible, but not responsible for how people use it. If you’re not going to bother with real help, I ain’t showing sympathy. I get that suicide sucks, but what sucks more is putting your loved ones through that tragedy.

        • Doomsider@lemmy.world

          If a company designs a flawed tool that harms people, they are responsible. Why are you trying so hard not to make them responsible?

          The last part about suicide is pretty tone-deaf. I have lost multiple people in my life to suicide.

  • Dr. Moose@lemmy.world

    Unpopular opinion: the parents failed at parenting and are now getting a big payday, ruining the tool for everyone else.