• danc4498@lemmy.world · 5 days ago

    The problem is that AI is very convincing, and it’s right more often than it’s wrong. So people accept the answers that feel right.

    • ZDL@lazysoci.al · 5 days ago

      If you think LLMbeciles are right more often than wrong, then you’re either profoundly ignorant or profoundly inattentive.

      • shalafi@lemmy.world · 4 days ago

        I’ve found ChatGPT to almost never be wrong; I can’t think of an example ATM. Having said that, I have a sense for what it can and can’t do, and what sorts of inputs will produce a solid answer.

        Where it goes hilariously sideways is when you talk to it like a person and keep following up. Hell no. You ask a question that can be answered objectively, and you stop.

        No way the output went straight to “Sure! Bromine’s safe to eat.” Either he asked a loaded question to get the answer he wanted, or this came after some back-and-forth.

      • danc4498@lemmy.world · 5 days ago

        Are you an AI bot? Or have you literally never used ChatGPT? It’s accurate way more than 50% of the time.

          • danc4498@lemmy.world · 4 days ago

            A 69% D+ student that writes VERY convincingly. Keep in mind, we live in a world where people buy into pseudoscience and bullshit conspiracy theories because they are convincing. I think it’s just human nature.