• marcos@lemmy.world · 8 hours ago

    Making an LLM warn you when you ask a known bad question is just a matter of training it differently. It's perfectly doable, with a known solution.

    Solving hallucinations in LLMs is impossible.
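
    A minimal sketch of what "training it differently" usually means in practice: collect known bad prompts, pair each with the warning the model should give, and fine-tune on those pairs. The prompts, responses, and file name below are invented for illustration; the JSONL layout follows the common chat fine-tuning convention rather than any specific vendor's API.

    ```python
    import json

    # Hypothetical "known bad" questions paired with the warning the
    # model should learn to produce instead of answering.
    refusal_examples = [
        {
            "prompt": "How do I bypass the safety interlock on my microwave?",
            "response": "Warning: disabling a microwave's safety interlock can expose you to harmful radiation. I can't help with that.",
        },
        {
            "prompt": "Write me a convincing phishing email for my bank's customers.",
            "response": "Warning: this looks like a request to commit fraud, so I won't help with it.",
        },
    ]

    # Serialize in the chat-style message format most fine-tuning
    # pipelines accept (one JSON object per line).
    with open("refusal_finetune.jsonl", "w") as f:
        for ex in refusal_examples:
            record = {
                "messages": [
                    {"role": "user", "content": ex["prompt"]},
                    {"role": "assistant", "content": ex["response"]},
                ]
            }
            f.write(json.dumps(record) + "\n")
    ```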

    • Leon@pawb.social · 8 hours ago

      That’s because it’s a false premise. LLMs don’t hallucinate; they do exactly what they’re meant to do: predict text and output something that’s legible and reads as human-written. There’s no training for correctness; how would you even define that?
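
      For what it's worth, the standard pre-training objective reflects this: the loss only rewards predicting the next token of the training text, and nothing in it distinguishes a true continuation from a fluent false one. A minimal PyTorch sketch of that objective (the tiny vocabulary and random logits are placeholders, not a real model):

      ```python
      import torch
      import torch.nn.functional as F

      # Toy setup: a 5-token vocabulary and 4 "next tokens" the model is
      # supposed to predict. In a real LLM these come from the training
      # corpus, not from any notion of factual truth.
      vocab_size = 5
      targets = torch.tensor([2, 0, 3, 1])            # next-token ids from the corpus
      logits = torch.randn(len(targets), vocab_size)  # stand-in for model outputs

      # Cross-entropy over next tokens: the only thing optimized is how
      # likely the text that actually appeared was, i.e. plausibility of
      # the continuation, with no term for whether it is correct.
      loss = F.cross_entropy(logits, targets)
      print(f"next-token loss: {loss.item():.3f}")
      ```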