OpenAI have claimed that a teen violated terms that prohibit discussing suicide or self-harm with the chatbot after the AI allegedly encouraged him to take his own life.
Interesting argument seeing as your product allowed and actively encouraged him to violate your own TOS.
Like it’s one thing to put in the TOS that talking about self-harm isn’t allowed, but when your own product’s service is actively working outside the terms you laid out, then the terms of service are pointless.
Also I guess they don’t actively monitor violations of the TOS? Otherwise they’d be restricting service in some way.