  • sexy_peach@feddit.org · 34↑ · 4 days ago

    Someone said that calling their misinfo “hallucinations” is actually genius, because everything they say is a hallucination, even the things we read and think are correct. The whole thing hallucinates away, and then we go and say: OK, some of this makes a lot of sense, but the rest…

    So basically that’s why it will never be 100% correct all the time, because all of the output is just more or less correct hallucination.

  • ZDL@lazysoci.al · 13↑ · 3 days ago

    “LLMs Will Always Hallucinate”

    That’s literally all they do. EVERYTHING that an LLMbecile outputs is hallucinated. It’s just sometimes the hallucinations match reality and sometimes they don’t.

  • WatDabney@sopuli.xyz · 21↑ 2↓ · edited · 4 days ago

    I’d say that calling what they do “hallucinating” is still falling prey to the most fundamental ongoing misperceptions/misrepresentations of them.

    They cannot actually “hallucinate,” since they don’t perceive the data that’s poured into and out of them, much less possess any ability to interpret it either correctly or incorrectly.

    They’re just gigantic databases programmed with a variety of ways to collate, order and regurgitate portions of that data. They have no awareness of what it is that they’re doing - they’re just ordering data based on rules and statistical likelihoods, and that rather obviously means that they can and will end up following language paths that, while likely internally coherent, have drifted away from reality. That the result ends up resembling a “hallucination” is just happenstance, since it doesn’t even arise from the same process as actual hallucinations.
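
    To make that concrete, here’s a toy sketch of “ordering data based on statistical likelihoods” - a bigram model, nothing remotely like a real transformer, but the same basic idea of following likely word paths with no notion of truth anywhere in the process:

    ```python
    import random
    from collections import defaultdict

    # Toy bigram model: count which word follows which in a tiny corpus,
    # then generate text purely by sampling those statistical likelihoods.
    corpus = "the cat sat on the mat the cat ate the fish the dog sat on the fish".split()

    follows = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev].append(nxt)

    def generate(start: str, length: int = 8) -> str:
        word, out = start, [start]
        for _ in range(length):
            candidates = follows.get(word)
            if not candidates:
                break
            word = random.choice(candidates)  # sample by observed frequency
            out.append(word)
        return " ".join(out)

    print(generate("the"))
    # May print "the dog sat on the fish": locally coherent word by word,
    # but nothing ever checked whether a dog sat on a fish. The model only
    # follows likelihoods; truth never enters the process.
    ```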

    And broadly, I grow increasingly confident that virtually all of the current (and coming - I think things are going to get much worse) problems with “AI” in and of itself (as distinct from the ways in which it’s employed) are rooted in the fundamental misrepresentations, misinterpretations and misconceptions made about these systems, starting with the foundational one: that they are, or can be, in any sense “intelligent.”

  • cronenthal@discuss.tchncs.de · 20↑ · 4 days ago

    Something that should have been clear for a while now: it won’t get better, and it can’t be solved. LLMs are quite limited in real-life applications, and the financial bubble around them is insane.

    • brucethemoose@lemmy.world · 4↑ · 4 days ago

      Well, there are some theoretical improvements laid out in papers. Not for hallucinations or the tech-bro-ish AGI dreams, but for adaptation, functional use, things like that.

      …But the incredible thing is that the AI houses with the money seem to be ignoring them.

      American firms seem to only pay attention to in-house innovations, like they have egos the size of the moon. And I’m only speaking of the ones not peddling the “scale transformers up infinitely” garbage.

      Chinese LLMs tend to be open weights and more “functionally” oriented, which is great. But (with a few exceptions) they’re still pretty conservative with architectural experimentation, and increasingly falling into traps of following/copying others now.

      Europe started out strong with Mistral (and the first good MoE!) and some other startups/initiatives, yet seems to have just… gone out to lunch? While still taking money.

      And other countries, like South Korea or Saudi Arabia, are still operating at pretty small scale.


      What I’m saying is: you’re right, but it’s largely down to an incredible amount of footgunning by all of these firms. Otherwise, the models can be quite functional tools in many fields.

      • technocrit@lemmy.dbzer0.com · 1↑ · edited · 3 days ago

        The point of “AI” is not making useful, functional software. Those technologies have existed for a long time and hopefully will continue to be developed by reasonable people.

        The new “AI” is about creating useful, functional rubes to take their money. It’s obvious just from the phony name “AI”. If these grifters are shooting themselves in the foot, it doesn’t seem to stop them from walking to the bank.

  • BradleyUffner@lemmy.world · 16↑ 1↓ · 4 days ago

    Every single output from an LLM is a hallucination. Some hallucinations are just more accurate than others.

  • technocrit@lemmy.dbzer0.com · 4↑ · edited · 3 days ago

    LLMs Will Always ~~Hallucinate~~ Make Errors, and We Don’t Need to Live With This

    These kinds of headlines are just more grifter hype… as usual.

  • aesthelete@lemmy.world · 4↑ 1↓ · edited · 4 days ago

    I think these things are maybe more useful (in very narrow cases) when people understand a little about how they work. They’re probabilistic, so they’re great at bullshit. They’re also great at bullshit-adjacent things (quarterly goal documents for your employer, for instance). Knowing they’re probabilistic makes me treat them differently. For instance, I wanted to know what questions I should ask about something, so I used Google’s AI insights or whatever from their search engine to generate a large list by simply resubmitting the same question over and over.

    It’s great at that kind of (often extremely useless) junk. You can get lots of subtle permutations out of it. It also might be interesting to just continually regen images using the same prompt over and over and look at the slight differences.
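
    If you want to try the pattern, here’s a rough sketch of that “resubmit the same prompt” loop. The endpoint URL, payload shape and response field are placeholders, not any real service’s API:

    ```python
    import requests

    # Hypothetical text-generation endpoint; swap in whatever service you use.
    ENDPOINT = "https://example.com/v1/generate"
    PROMPT = "What questions should I ask before buying a used car?"

    variants = []
    for _ in range(5):
        # Same prompt every time; sampling randomness does the rest.
        resp = requests.post(ENDPOINT, json={"prompt": PROMPT}, timeout=30)
        resp.raise_for_status()
        variants.append(resp.json()["text"])  # placeholder response field

    # Because generation is probabilistic, each run can surface items the
    # previous runs missed; the union of the runs is the "large list".
    for i, text in enumerate(variants, 1):
        print(f"--- run {i} ---\n{text}")
    ```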

    It would be more interesting to me, instead of bullshit like Sora, if they made something that just gave you the prompts in a feed and allowed you to sit there and regenerate the junk by hitting a button. People could see the same post and a slightly different video every time. Or image. Still stupid? Yes. Still not worth slurping up our lakes for? Yes. But hey, at least it’d be a little more fun.

    The prompts are also, for the most part, the only creative thing involved in this garbage.

    Instead of the current knobgobblers who want to take a single permutation and try to make it more than worthless, or want to pretend these systems are anything close to right… or intelligent… or human or whatever, it’d be much better if we started thinking about them for what they are.

  • Denjin@feddit.uk · 5↑ 2↓ · 4 days ago

    Our analysis draws on computational theory and Gödel’s First Incompleteness Theorem

    You don’t need any fancy analysis to understand the basic principles of LLMs as sorting and prediction algorithms that work on large datasets, or why they will always produce incorrect results to some queries.

    They are not intelligent; they don’t understand or interpret any of the information in their datasets, they just guess what you might want to see based on what appears similar.
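
    A deliberately crude sketch of that “appears similar” point - this is not how an LLM works internally, it just shows how surface similarity alone produces confident, fluent, wrong answers:

    ```python
    import math
    from collections import Counter

    # Match a query against remembered text by character-trigram overlap
    # and return the stored continuation. No understanding, no fact-checking,
    # only surface similarity.
    memory = {
        "the capital of france is": "paris",
        "the capital of australia is": "canberra",
    }

    def trigrams(s: str) -> Counter:
        return Counter(s[i:i + 3] for i in range(len(s) - 2))

    def cosine(a: Counter, b: Counter) -> float:
        dot = sum(a[t] * b[t] for t in a)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    def guess(query: str) -> str:
        q = trigrams(query)
        best = max(memory, key=lambda key: cosine(q, trigrams(key)))
        return memory[best]

    print(guess("the capital of austria is"))
    # Prints "canberra": "austria" looks a lot like "australia" at the
    # surface level, so the answer comes out confident, fluent and wrong.
    ```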

    • technocrit@lemmy.dbzer0.com · 1↑ · edited · 3 days ago

      You don’t need any fancy analysis to understand the basic principles of LLMs

      That’s true. But the problem is that grifters are pushing the complete opposite of a basic understanding. It’s nonstop disinformation and people are literally buying the hype. In these situations I think that formal analysis and proof can be necessary as a solid foundation for critiques of the grifting.

  • zeca@lemmy.ml · 2↑ 9↓ · 4 days ago

    Our machines will never be both useful and absolutely truthful. One excludes the other.

      • zeca@lemmy.ml · 1↑ 1↓ · 19 hours ago

        By “truthful”, I meant generating truthful new knowledge, not just performing calculations that we implemented and know well. I agree that I could have phrased this better…

        • aesthelete@lemmy.world · 1↑ · edited · 5 hours ago

          It’s amazing that you’re trying to pretend this thing can do what it cannot.

          AI in general? Sure, maybe at some point.

          LLMs? Nope. Sorry. They’re basically an echo of sorts.

          (As, you know, the study you’re posting under is showing.)