• fizzle@quokk.au · 4 days ago

    “We take action against illegal content on X, including Child Sexual Abuse Material (CSAM), by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary,” X’s “Safety” account claimed that same day.

    It really sucks they can make users ultimately responsible.

    • atrielienz@lemmy.world · 3 days ago

      I think it’s wrong that they carry no liability. At the end of the day, they know the product can be used this way and they haven’t implemented any safety protocols to prevent it. While the users prompting Grok are at fault for their own actions, the platform and the LLM are being used to facilitate it, where other LLMs have guardrails to prevent exactly this. In my mind that alone should make them partially liable.

    • GhostPain@lemmy.world · 4 days ago

      And yet they leave unfettered access to the tool that makes it possible for predators to do such vile shit.