The distinction is important because an intelligent being can actually be trusted to do the things they are trained to do. LLMs can’t. The “hallucination” problem comes from these things just being probability engines: they don’t actually know what they’re saying. B follows A; they don’t know why, they don’t care, and they can’t care. That’s why LLMs aren’t actually able to replace workers. At best they’re productivity software that can (maybe, I’m not convinced) make human workers more productive.
One distinction is that it takes work to actually get real, useful knowledge out of these things. You can’t just prompt it and always expect the answer to be correct or useful; you have to double-check everything because it might just make shit up.
The knowledge, the intelligence, still comes from the human user.
To be fair, all of what you’ve said applies to humans too. Look how many flat earthers there are, and even more people who believe in homeopathy, think that vaccines cause autism, or think that aliens built the pyramids.
But nobody calls that “hallucinations” in humans. Are LLMs perfect? Definitely not. Are they useful? Somewhat; but definitely extremely far from the PhD-level intelligence some claim.
But there are things LLMs are already way better at than any single human (not humanity collectively). For example: giving you a hint (it doesn’t have to be 100% accurate) about what topics to look up when you can only describe something vaguely and don’t know what you would even search for in a traditional search engine.
Of course you cannot trust it blindly, but you shouldn’t trust humans blindly either; that’s why we have the scientific method, because humans are unreliable too.
My boss trusts me to be able to accurately check the quality of parts I weld. That’s literally my job. It’s not blind trust, I’d lose my job if I couldn’t consistently produce good results, but once I was trained to do my job I can be left to my own devices. You can’t do that with an LLM because you’d still need a human to double-check to make sure it didn’t hallucinate that the part was correctly welded.
That’s the difference - intelligent beings, once they understand something, can be trusted. Obviously if they don’t understand something, like the fact that the Earth is round, then you can’t trust them. Intelligence still requires education and training, but the difference is that educating and training intelligent beings actually produces consistent results that can be relied on.
Notably, “hallucinate” is a term popularized by the companies behind LLMs. It’s not really accurate, because hallucination still implies intelligence. They’re just pattern recognition engines; they don’t “hallucinate”, they simply have no idea what the patterns mean or why they happen. B follows A, that’s all the model knows. If a sequence occurs where C follows A, it makes a mistake and we call that “hallucination”, even though it’s really just a mindless machine thoughtlessly repeating the patterns it was trained on.
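(To make the “B follows A” point concrete, here’s a toy sketch in Python. It’s just a bigram counter, nothing remotely like a real transformer, but it shows the basic idea of continuing text purely from frequency statistics, with no notion of what the words mean.)

    from collections import defaultdict, Counter
    import random

    # Toy "B follows A" model: count which word tends to follow which,
    # then predict purely from those counts. No meaning, just statistics.
    # (A real LLM is vastly more sophisticated, but the principle of
    # predicting the next token from observed patterns is the same.)
    training_text = "the part was welded the part was inspected the part was welded".split()

    follow_counts = defaultdict(Counter)
    for a, b in zip(training_text, training_text[1:]):
        follow_counts[a][b] += 1

    def next_word(a):
        # Pick the next word weighted by how often it followed "a" in training.
        counts = follow_counts[a]
        words, weights = zip(*counts.items())
        return random.choices(words, weights=weights)[0]

    # The model happily continues the sequence without any idea what welding is.
    print(next_word("was"))  # prints "welded" or "inspected", by frequency alone

It will cheerfully output “welded” whether or not the part in front of you is any good; there is nothing in there that could know the difference.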
Your boss expects you to weld with good quality, but they don’t expect you to answer every question there is without any mistakes. The problem with LLMs is that they are trained purely on text found on the internet. They have no “life experience”, and thus their world model is very different from ours. There are overlaps (that’s why they can produce any coherent output at all), but there are situations that make perfect sense in that world model yet are complete bogus in the real world.
It’s a bit like the shadows in Plato’s cave allegory. LLMs are practically trained only on the shadows, so their output is entirely based on that shadow world. An LLM can describe pain (because descriptions of pain were in the training data), but it has never been smacked in the face.
That’s exactly why we can’t really call them intelligent or knowledgeable. They’re pattern recognition engines; they mindlessly recognize and repeat patterns even when the patterns don’t make any sense, i.e. they “hallucinate”.
They’re a productivity tool that can help actually intelligent and knowledgeable beings like humans do tasks, but on their own they are a parking lot covered with shredded dictionaries. If we use the Chinese room analogy, it’d be like trying to build a Chinese room with just the translation dictionary and without the human to do the translating.
Which is why LLMs make mistakes when translating too - they need a human, a real intelligence, to check.
Humans are also “pattern recognition engines”. That’s why optical illusions and similar tricks completely mess with our brains. There are patterns that we perceive as moving or rotating even though the image is completely stationary.
But nobody would claim that you can’t trust your eyes in general just because optical illusions exist.
We can tell optical illusions are fake specifically because we aren’t just pattern recognition engines.
LLMs “hallucinate” because they can’t do that. To them, the optical illusion is reality.
That’s the difference between actually being intelligent and knowledgeable, and merely containing knowledge.