• hobovision@mander.xyz
    2 days ago

    Humans will anthropomorphize damn near anything. We’ll say shit like “hydrogen atoms want to be with oxygen so bad they get super excited and move around a lot when they get to bond”. I don’t think characterizing the language output of an LLM using terms that describe how people speak is a bad thing.

    “Hallucination,” on the other hand, doesn’t even come close to distinguishing the “incorrect” bullshit that comes out of LLMs from the “correct” bullshit. Using “hallucination” to describe the output of deep neural networks started with the early image generators. Back then, everything they output was a hallucination. But eventually these networks got so believable that they could sometimes output realistic, and even occasionally factually accurate, content. So the people who wanted these neural nets to be AI started calling only the bad, unbelievable, false outputs hallucinations. That’s not just anthropomorphizing the model; it implies the thing actually does something like thinking and has a state of mind.