OpenAI has claimed that a teen violated terms prohibiting discussions of suicide or self-harm with the chatbot, after the AI allegedly encouraged him to take his own life.
So if a teen shoots up a school even though the gun manufacturer says not to shoot up schools, suddenly we don’t have a gun control problem but a "teens don’t read the TOS" problem.
Yeah, that sounds right for the USA.