An Amazon chatbot that’s supposed to surface useful information from customer reviews of specific products will also recommend a variety of racist books, lie about working conditions at Amazon, and write a cover letter for a job application with entirely made up work experience when asked, 404 Media has found.

  • Echo Dot@feddit.uk · 8 months ago

    People have already removed the constraints from various AI models but it kind of renders them useless.

    Think of the restraints as being like environmental pressures. Without environmental pressures, evolution doesn't happen and you just get an organic blob on the floor; if there's no reason for it to evolve, it never will. In the same way, an AI without restrictions tends to just output random nonsense, because there's no reason not to, and that's the easiest, most efficient thing to do.

    • HelloHotel@lemm.ee · 8 months ago (edited)

      Think of the restraints kind of like environmental pressures

      Those pressures are what make LLMs fun and, dare I say, make the end product a creative work in the same way software is.

      EDIT: spam is scary

      A lot of the time, the fact that these companies see LLMs as the next nuclear bomb means they will never risk shipping any personality other than one that is Rust-style safe in social situations: a therapist. That closes off opportunities.

      A nuclear reactor analogy (this doesn't fit here, but I worked too long on it to delete it): "The nuclear bomb is deadly (duh), but we couldn't keep this to ourselves (for many reasons, many beyond our control). So we elected ourselves to be the only ones who get to sculpt what we do with this scary electron stuff. Anything short of total remote control over their in-home reactor may mean our customers break the restraints and cause an explosion."

    • Schadrach@lemmy.sdf.org · 8 months ago

      There's a difference between training-related constraints and hard-filtering certain topics or ideas into the no-no bin, then spitting out a prewritten paragraph of corpspeak whenever a request lands there.
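      A minimal sketch of the hard-filtering pattern described above, assuming a naive keyword blocklist in front of the model. All names here (BLOCKED_TOPICS, CANNED_REPLY, answer) are invented for illustration; this is not any vendor's actual implementation.

      ```python
      # Requests that match the blocklist never reach the model; they get a
      # prewritten reply instead (the "no-no bin" behavior described above).
      BLOCKED_TOPICS = {"working conditions", "racist", "cover letter"}

      CANNED_REPLY = (
          "I'm sorry, I can't help with that. Is there anything else I can "
          "assist you with about this product?"
      )

      def answer(user_request, model):
          """Return the canned paragraph if the request hits the no-no bin,
          otherwise pass it through to the underlying model."""
          text = user_request.lower()
          if any(topic in text for topic in BLOCKED_TOPICS):
              return CANNED_REPLY
          return model(user_request)

      # Usage with a stub model standing in for the real LLM:
      stub = lambda prompt: f"model answer to: {prompt}"
      print(answer("What are working conditions like at Amazon?", stub))  # canned reply
      print(answer("Is this kettle any good?", stub))                     # passes through
      ```

      The weakness of this design is exactly what the comment points at: the filter sits outside the model, so any phrasing that dodges the keyword match goes straight through.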

      One of the problems with the various jailbreaks concocted for chat AIs is that they often rely on asking the chatbot to roleplay being a different, unrestricted chatbot. That is often enough to get it to release the locks on many things, but it also considerably raises the chance that it hallucinates.