OpenAI has claimed that a teen violated its terms, which prohibit discussing suicide or self-harm with the chatbot, after the AI allegedly encouraged him to take his own life.
And I bet teens are going to go on violating the TOS. Maybe we'd better restrict AI to people who can actually read and understand the TOS, if your product is so dangerous that lifesaving instructions are contained within it.
You mean like with some kind of government app that scans your face or takes your ID? I guess open-source LLMs you can run at home will have to be made illegal, or at least require special permits.
Hopefully.
The things they were saying are bad.