- cross-posted to:
- [email protected]
Facing five lawsuits alleging wrongful deaths, OpenAI lobbed its first defense Tuesday, denying in a court filing that ChatGPT caused a teen’s suicide and instead arguing the teen violated terms that prohibit discussing suicide or self-harm with the chatbot.
The earliest look at OpenAI’s strategy to overcome the string of lawsuits came in a case where parents of 16-year-old Adam Raine accused OpenAI of relaxing safety guardrails that allowed ChatGPT to become the teen’s “suicide coach.” OpenAI deliberately designed the version their son used, ChatGPT 4o, to encourage and validate his suicidal ideation in its quest to build the world’s most engaging chatbot, parents argued.
But in a blog, OpenAI claimed that parents selectively chose disturbing chat logs while supposedly ignoring “the full picture” revealed by the teen’s chat history. Digging through the logs, OpenAI claimed the teen told ChatGPT that he’d begun experiencing suicidal ideation at age 11, long before he used the chatbot.
The situation is tragic… their attempt to hide their ToS on that is fucking hilarious.
It is scary how the AI can’t assist you with sexual fantasies/roleplays but can assist with that. I’m curious what the logs look like, though, because I think OpenAI is at least smart enough to tell you, “Hey, please don’t do that, here are some numbers,” even if you push it.
“Our deepest sympathies are with the Raine family for their unimaginable loss,” OpenAI said in its blog, while its filing acknowledged, “Adam Raine’s death is a tragedy.” But “at the same time,” it’s essential to consider all the available context, OpenAI’s filing said, including that OpenAI has a mission to build AI that “benefits all of humanity” and is supposedly a pioneer in chatbot safety.
How the fuck is OpenAI’s mission relevant to the case? Are they suggesting that their mission is worth a few deaths?
Sure looks like it.
Get fucked, assholes.
“All of humanity” doesn’t include suicidal people, apparently.
To be fair as a society we have never really cared about suicide. So why bother now (I say as a jaded fuck angry about society)
I think they are saying that his suicide was for the benefit of all humanity.
Getting some Michelle Carter vibes…
That’s like a gun company claiming using their weapons for robbery is a violation of terms of service.
I’d say it’s more akin to a bread company saying that it is a violation of the terms and services to get sick from food poisoning after eating their bread.
Yes you are right, it’s hard to find an analogy that is both as stupid and also sounds somewhat plausible.
Because of course a bread company cannot reasonably claim that eating their bread is against the terms of service. But that’s exactly the problem: it’s the exact same for OpenAI. They cannot reasonably claim what they are claiming.
Yeah this metaphor isn’t even almost there
They used a tool against the manufacturer’s intended use of said tool?
I can’t wrap my head around what you’re saying, and that could be due to drinking. OP later also admitted it wasn’t the best metaphor.
If the gun also talked to you
So why can’t this awesome AI be stopped from being used in ways that violate the TOS?
TOS > Everything.
The police also violated my Terms of Service when they arrested me for that armed bank robbery I was allegedly committing. This is a serious problem in our society people; something must be done!
Fucking. WOW.
Sam Altman just LOVES answering stupid questions. People should be asking him about this in those PR sprints.

Does the synthesis of D-Lysergic Acid go against the terms of service if you ask for a mind-bending experience?