Just as a tangent:
This is one reason why I’ll never trust AI.
I imagine we might wrangle the hallucination thing (or at least get the model to be more verbose about its uncertainty), but I doubt it will ever identify a poorly chosen question.
Making an LLM warn you when you ask a known bad question is just a matter of training it differently. It’s a perfectly doable thing, with a known solution.
Solving hallucinations in LLMs is impossible.
That’s because it’s a false premise. LLMs don’t hallucinate; they do exactly what they’re meant to do: predict text and output something that’s legible and reads as human-written. There’s no training for correctness. How do you even define that?
Btw, the correct answer is “use flexbox”.
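For context, a minimal sketch of what “use flexbox” usually means here, assuming the original question was how to center an element inside its parent (the excerpt doesn’t show the exact question, and the class name is a placeholder):

.parent {
  display: flex;            /* establish a flex formatting context */
  justify-content: center;  /* center children horizontally */
  align-items: center;      /* center children vertically */
}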
You could also use margin: 0 auto;
Where it works, yes. If you know where it works, it won’t be a problem for you.
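A sketch of that caveat: in normal flow, margin: 0 auto only centers horizontally, and only a block-level element that is narrower than its container (for example, one with an explicit width); it does nothing for vertical centering. The selector and width are placeholders:

.child {
  width: 200px;    /* must be narrower than the container */
  margin: 0 auto;  /* auto left/right margins split the leftover horizontal space */
}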