• Eq0@literature.cafe · 27 points · 8 days ago

    The chatbots are trained in the “yes, and” model of conversation. They can’t (?) say “no, you are a lunatic and this is insane,” or any milder variation of it; they only say “yes, you are right.” This stokes one’s ego, but obviously creates problems.

    • shalafi@lemmy.world · 6 points · 8 days ago

      I’ve only used ChatGPT, but right off the bat it will almost always tell you if you’re wrong. You have to go down the rabbit hole to get it agreeing to insane shit.

      • Eq0@literature.cafe · 7 points · 7 days ago

        Odd, I used ChatGPT as well and got an insane “yes, sure, you can do it like this” from the get-go. I asked about a git problem and got git commands that either didn’t exist or didn’t do what ChatGPT claimed they did. It turns out what I wanted to do could not be done with git at all.