• WatDabney@sopuli.xyz · 4 days ago

    I’d say that calling what they do “hallucinating” is still falling prey to the most fundamental ongoing misperceptions/misrepresentations of them.

    They cannot actually “hallucinate,” since they don’t perceive the data that’s poured into and out of them, much less possess any ability to interpret it correctly or incorrectly.

    They’re just gigantic databases programmed with a variety of ways to collate, order and regurgitate portions of that data. They have no awareness of what they’re doing - they’re just ordering data according to rules and statistical likelihoods, which rather obviously means that they can and will end up following language paths that, while likely internally coherent, have drifted away from reality. That the result resembles a “hallucination” is just happenstance, since it doesn’t even arise from the same process as an actual hallucination.
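    To make that last point concrete, here’s a minimal toy sketch in Python (purely illustrative - the tiny corpus and the `generate` helper are made up for this comment, and real LLMs are vastly more elaborate): a bigram chain that picks each next word from observed statistical likelihoods, so every individual step is locally “coherent,” yet nothing ties the output as a whole to truth.

    ```python
    # Toy sketch: each next word is chosen purely from statistical
    # likelihood (how often it followed the previous word in the data).
    # Every step is locally plausible, but the chain as a whole can
    # assert things no training sentence ever said.
    import random
    from collections import defaultdict

    corpus = (
        "the cat sat on the mat . "
        "the dog sat on the rug . "
        "the cat chased the dog ."
    ).split()

    # Count which words follow which: duplicates in the list make
    # random.choice sample proportionally to observed frequency.
    transitions = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        transitions[prev].append(nxt)

    def generate(start, length=8, seed=0):
        random.seed(seed)
        out = [start]
        for _ in range(length):
            followers = transitions.get(out[-1])
            if not followers:
                break
            # Locally coherent: this transition was seen in the data...
            out.append(random.choice(followers))
        return " ".join(out)

    # ...but the whole can be a recombination with no grounding,
    # e.g. "the dog chased the cat"-style statements.
    print(generate("the"))
    ```

    Nothing in that loop perceives or interprets anything; it just orders tokens by likelihood, which is the (much simplified) shape of the process being mislabeled as “hallucination.”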

    And broadly speaking, I grow increasingly confident that virtually all of the current (and coming - I think things are going to get much worse) problems with “AI” in and of itself (as distinct from the ways in which it’s employed) are rooted in fundamental misrepresentations, misinterpretations and misconceptions about these systems, starting with the foundational one: that they are, or can be, in any sense “intelligent.”