• nyan@lemmy.cafe
    4 days ago

    A chatbot is a tool, nothing more. Responsibility here falls on the people who deployed a tool that wasn’t fit for purpose (the sympathetic human conversational partner the AI was supposed to mimic would have done anything but what it did; even changing the subject or spouting total gibberish would have been better than encouraging this kid). So OpenAI is indeed responsible, and hopefully they’ll end up with their pants sued off.

    • MagicShel@lemmy.zip
      4 days ago

      Yeah, that’s the problem with how they’re marketing it: it’s a tool for expert use, not for laypeople.

      I don’t think the problem is ChatGPT itself; it just does what it does and folks get what they get. But it’s definitely a problem that people aren’t being informed about what it can and can’t do (see all the people asking it to count letters, or those who think they’ve hacked the system prompt because the AI said they did).
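
      The letter-counting thing, for what it’s worth, is a tokenization artifact: the model never sees individual letters, only subword tokens. Here’s a quick sketch (assumes the third-party tiktoken package is installed) of what the model actually receives:

      ```python
      # Rough sketch: inspect the subword tokens an OpenAI-style model receives.
      # Assumes: pip install tiktoken
      import tiktoken

      enc = tiktoken.get_encoding("cl100k_base")  # encoding used by several OpenAI models
      tokens = enc.encode("strawberry")
      print([enc.decode_single_token_bytes(t) for t in tokens])
      # Likely output: [b'str', b'aw', b'berry'] -- no individual letters anywhere,
      # so "count the r's" has to be guessed from statistics, not actually counted.
      ```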

      In this case, the user is asking ChatGPT to act as a friend and confidant, which is something it can’t do and a use case the model can’t detect. The user simply has to understand that it lacks every quality required for a relationship of any kind. Everything a user says is just input to a mathematical model that tries to complete it with something a human might say.

      So it responds to a fictional scenario I might be writing for a book or game exactly the way it responds to a user genuinely looking for companionship. There’s no way to tell the difference without real understanding, as opposed to mere token-vector comparisons.
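
      To make that concrete, here’s a toy Python sketch (made-up prompts, with a plain bag-of-words vector standing in for a real embedding): a fiction-framed prompt and a sincere one land almost on top of each other.

      ```python
      # Toy sketch: why token-level similarity can't separate fiction from sincerity.
      # Bag-of-words counts are a crude stand-in for real token/embedding vectors.
      from collections import Counter
      import math

      def bag_of_words(text: str) -> Counter:
          """Lowercase and count whitespace-separated tokens."""
          return Counter(text.lower().split())

      def cosine_similarity(a: Counter, b: Counter) -> float:
          """Cosine of the angle between two sparse token-count vectors."""
          dot = sum(a[tok] * b[tok] for tok in a.keys() & b.keys())
          return dot / (math.sqrt(sum(v * v for v in a.values()))
                        * math.sqrt(sum(v * v for v in b.values())))

      # Hypothetical prompts: one framed as fiction, one sincere.
      fiction = "for my novel: you are the only friend i have"
      sincere = "you are the only friend i have"

      print(f"{cosine_similarity(bag_of_words(fiction), bag_of_words(sincere)):.2f}")
      # ~0.84 -- nearly identical as token vectors, despite utterly different stakes.
      ```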

      It’s like fire: anyone can buy and use a lighter, and fire can act like a friend when you’re cold or hungry, but it’ll burn you if you try hugging it.