“We take action against illegal content on X, including Child Sexual Abuse Material (CSAM), by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary,” X’s “Safety” account claimed that same day.
It really sucks they can make users ultimately responsible.
I think it’s wrong that they carry no liability. At the end of the day, they know the product can be used this way and haven’t implemented any safety protocols to prevent it. While the users prompting Grok are at fault for their own actions, the platform and the LLM are being used to facilitate it, whereas other LLMs have guardrails to prevent it. In my mind, that alone should make them partially liable.
And yet they leave unfettered access to the tool that makes it possible for predators to do such vile shit.
It’s almost like the program is the source of the issue…
It’s almost like you don’t understand that the program was created by men.
And naive enough to think they didn’t know it would be used that way.
Go you, I guess.
So you agree, the man behind the program is the actual source of all of this?
Go back and read all of the earliest posts from me in this thread. See if you can figure it out on your own.
Because I’m now realizing this whole space, fuck_ai, takes everything literally, and none of you can read for comprehension because of the AI addiction.
Oh no, is this some kind of pedo-core mating ritual using AI?
I’m waaay too old for you son. Go dry hump a couch or something.