Facing five lawsuits alleging wrongful deaths, OpenAI lobbed its first defense Tuesday, denying in a court filing that ChatGPT caused a teen’s suicide and instead arguing the teen violated terms that prohibit discussing suicide or self-harm with the chatbot.

The earliest look at OpenAI’s strategy to overcome the string of lawsuits came in a case where parents of 16-year-old Adam Raine accused OpenAI of relaxing safety guardrails that allowed ChatGPT to become the teen’s “suicide coach.” OpenAI deliberately designed the version their son used, ChatGPT 4o, to encourage and validate his suicidal ideation in its quest to build the world’s most engaging chatbot, parents argued.

But in a blog post, OpenAI claimed the parents selectively chose disturbing chat logs while supposedly ignoring “the full picture” revealed by the teen’s chat history. Digging through the logs, OpenAI claimed the teen told ChatGPT that he’d begun experiencing suicidal ideation at age 11, long before he used the chatbot.

  • gian @lemmy.grys.it

    I would say that it is more like a software company putting in their TOS that you cannot use their software to do specific things.
    Would it be correct to sue the software company because a user violated the TOS?

    I agree that what happened is tragic and that the answer by OpenAI is beyond stupid, but in the end the parents are suing the owner of a technology over a user’s misuse of said technology. Or should we also sue Wikipedia because someone looked up how to hang himself?

    That’s like a gun company claiming using their weapons for robbery is a violation of terms of service.

    The gun company can rightfully say that what you do with your property is not their problem.

    But let’s make a less controversial example: do you think you could sue a fishing rod company because I use one of their rods to whip you?

    • Pieisawesome@lemmy.dbzer0.com

      To my legal head canon, this boils down to whether OpenAI flagged him and did nothing.

      If they flagged him, then they knew about the ToS violations and did nothing, and they should be in trouble.

      If they didn’t know, but can demonstrate that they would take action in this situation, then, in my opinion, they are legally in the clear…

      • BeeegScaaawyCripple@lemmy.world

        depends on whether intent is a required factor for the state’s wrongful death statute (my state says it’s not, as wrongful death is there for criminal homicides that don’t fit the murder statute). if openai acted intentionally, recklessly, or negligently in this, they’re at least partially liable. if they flagged him, it seems either intentional or reckless to me. if they didn’t, it’s negligent.

        however, if the deceased used some kind of prompt injection (i don’t know the right terms, this isn’t my field) to bypass gpt’s ethical restrictions, and if understanding how to bypass gpt’s ethical restrictions is in fact esoteric, only then would i find openai was not at least negligent.

        as i myself have gotten gpt to do something it’s restricted from doing, and i haven’t worked in IT since the 90s, i’m led to a single conclusion.