- cross-posted to:
 - [email protected]
 
LLMs Will Always Hallucinate, and We Need to Live With This
arxiv.org

As Large Language Models become more ubiquitous across domains, it becomes important to examine their inherent limitations critically. This work argues that hallucinations in language models are not just occasional errors but an inevitable feature of these systems. We demonstrate that hallucinations stem from the fundamental mathematical and logical structure of LLMs. It is, therefore, impossible to eliminate them through architectural improvements, dataset enhancements, or fact-checking mechanisms. Our analysis draws on computational theory and Gödel's First Incompleteness Theorem, which references the undecidability of problems like the Halting, Emptiness, and Acceptance Problems. We demonstrate that every stage of the LLM process, from training data compilation to fact retrieval, intent classification, and text generation, will have a non-zero probability of producing hallucinations. This work introduces the concept of Structural Hallucination as an intrinsic nature of these systems. By establishing the mathematical certainty of hallucinations, we challenge the prevailing notion that they can be fully mitigated.
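
The probabilistic part of the argument can be sketched in one line (my paraphrase, not the paper's exact notation): if each stage $i$ of the pipeline (data compilation, retrieval, intent classification, generation) carries some hallucination probability $p_i > 0$, then

$$P(\text{fully correct output}) \le \prod_i (1 - p_i) < 1 \quad \text{whenever any } p_i > 0,$$

so patching any individual stage never drives the overall chance of a hallucination to zero.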


Someone said that calling their misinfo “hallucinations” is actually genius, because everything they say is a hallucination, even the things we read and think are correct. The whole thing hallucinates away, and then we go and say: ok, some of this makes a lot of sense, but the rest…
So basically that’s why it will never be 100% correct all the time: all of the output is just more-or-less-correct hallucination.
The problem with “hallucinations” is that computers don’t hallucinate. It’s just more anthropomorphic grifter hype. So, while it sounds like a criticism of “AI”, it’s just reinforcing false narratives.
Human pattern recognition making the insane machine seem like it’s making sense. Astrology but with venture capital backing. I like it.
This is completely correct: it does the exact same thing when it works the way people expect as it does when it’s “hallucinating”.
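If it helps to see that mechanically, here's a toy decoding step (hypothetical logits and plain softmax sampling, not any particular model's code): the same sampling procedure produces every token, whether the continuation turns out to be true or a “hallucination”.

```python
import math
import random

def softmax(logits):
    # Convert raw scores into a probability distribution.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sample_next_token(logits, temperature=1.0):
    # The identical sampling step runs for every token; there is no
    # separate "factual mode" versus "hallucination mode".
    probs = softmax([l / temperature for l in logits])
    r = random.random()
    cumulative = 0.0
    for token_id, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return token_id
    return len(probs) - 1

# Hypothetical logits, for illustration only.
print(sample_next_token([2.0, 1.0, 0.5, -1.0]))
```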
It’s been a while since I posted the LLMentalist.
That’s such a great article. It’s been one of the most effective things to share with people who are intrigued by LLMs but are otherwise sensible.
Thanks, that was a cool read. I pretty much fully agree.