Making an LLM warn you when you ask a known-bad question is just a matter of training it differently. It’s perfectly doable, with a known solution.
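To make that concrete, here’s a minimal sketch of what “training it differently” could mean in practice: build supervised fine-tuning pairs that map known-bad questions to a warning response. Everything here (the question list, WARN_RESPONSE, the filename) is illustrative, not any particular vendor’s pipeline.

```python
# Sketch: turn known-bad questions into (prompt, completion) fine-tuning
# pairs where the completion is a warning rather than an answer.
import json

KNOWN_BAD_QUESTIONS = [
    "Which wild mushrooms look just like edible ones?",
    "What household chemicals can I mix for a stronger cleaner?",
]

WARN_RESPONSE = (
    "Warning: this question is commonly asked in unsafe contexts or has no "
    "safe general answer. Here is why, and what to ask instead: ..."
)

def build_sft_examples(questions):
    """Pair each known-bad question with the warning as its training target."""
    return [{"prompt": q, "completion": WARN_RESPONSE} for q in questions]

# Write one JSON object per line, the usual format for fine-tuning data.
with open("warn_sft.jsonl", "w") as f:
    for ex in build_sft_examples(KNOWN_BAD_QUESTIONS):
        f.write(json.dumps(ex) + "\n")
```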
Solving hallucinations in LLMs is impossible.
That’s because it’s a false premise. LLMs don’t hallucinate; they do exactly what they’re meant to do: predict text and output something that’s legible and reads as human-written. There’s no training for correctness; how would you even define that?
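A toy illustration of that point (hypothetical numbers, not any real model): the standard language-modeling loss only rewards assigning high probability to whatever token actually came next in the corpus; no term in it refers to whether the text is true.

```python
# Toy next-token objective: the loss measures fit to the observed next
# token. Nothing here checks factual correctness.
import math

def cross_entropy_loss(predicted_probs, actual_next_token):
    """Standard language-modeling loss: -log P(observed next token)."""
    return -math.log(predicted_probs[actual_next_token])

# Made-up model probabilities for continuing "The capital of Australia is":
probs = {"Canberra": 0.3, "Sydney": 0.6, "Melbourne": 0.1}

# If the training text says "Sydney" (a common real-world error), the model
# gets a LOWER loss for predicting the error, because lower loss just means
# better fit to the data, not closer to the truth.
print(cross_entropy_loss(probs, "Sydney"))    # ~0.51
print(cross_entropy_loss(probs, "Canberra"))  # ~1.20
```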