• queermunist she/her@lemmy.ml
    9 hours ago

    That’s exactly why we can’t really call them intelligent or knowledgeable. They’re pattern recognition engines; they mindlessly recognize and repeat patterns even when those patterns don’t make any sense, i.e. they “hallucinate”.

    They’re a productivity tool that can help actually intelligent and knowledgeable beings like humans do tasks, but on their own they are a parking lot covered with shredded dictionaries. If we use the Chinese room analogy, it’d be like trying to build a Chinese room with just the translation dictionary and without the human to do the translating.

    Which is why LLMs make mistakes when translating too: they need a human, a real intelligence, to check their output.

    • wischi@programming.dev
      9 hours ago

      Humans are also “pattern recognition engines”. That’s why optical illusions and similar tricks completely mess with our brains. There are patterns that we perceive as moving or rotating even though the pattern is completely stationary.

      But nobody would claim that you can’t trust your eyes in general just because optical illusions exist.

      • queermunist she/her@lemmy.ml
        9 hours ago

        We can tell optical illusions are fake specifically because we aren’t just pattern recognition engines.

        LLMs “hallucinate” because they can’t do that. To them, the optical illusion is reality.

        That’s the difference between being intelligent and knowledgeable and merely containing knowledge.