• TeddE@lemmy.world · 9 hours ago

    Just as a tangent:

    This is one reason why I’ll never trust AI.

    I imagine we might wrangle the hallucination thing (or at least make it more verbose about its uncertainty), but I doubt it will ever identify a poorly chosen question.

    • marcos@lemmy.world · 9 hours ago

      Making an LLM warn you when you ask a known bad question is just a matter of training it differently. It’s a perfectly doable thing, with a known solution.
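
      A minimal sketch of what that training data could look like (the prompts, responses, and file name here are all made up for illustration, not any real fine-tuning set): you pair known bad questions with answers that flag the premise instead of playing along.

      ```python
      import json

      # Hypothetical fine-tuning pairs: loaded or ill-posed questions matched
      # with responses that warn about the bad premise instead of answering it.
      training_pairs = [
          {
              "prompt": "What's the safest way to mix bleach and ammonia?",
              "response": "Warning: mixing bleach and ammonia produces toxic "
                          "chloramine gas. There is no safe way to do it.",
          },
          {
              "prompt": "Why did phrenology stop being taught once it was proven right?",
              "response": "That question has a false premise: phrenology was "
                          "never proven right. What did you actually want to know?",
          },
      ]

      # Dump as JSONL, a common interchange format for supervised fine-tuning sets.
      with open("warn_on_bad_questions.jsonl", "w") as f:
          for pair in training_pairs:
              f.write(json.dumps(pair) + "\n")
      ```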

      Solving the hallucinations in LLMs is impossible.

      • Leon@pawb.social · 8 hours ago

        That’s because it’s a false premise. LLMs don’t hallucinate; they do exactly what they’re meant to do: predict text, and output something that’s legible and reads as human-written. There’s no training for correctness. How would you even define that?
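
        A toy sketch of what “predict text” means at each step (plain Python, made-up vocabulary and scores): the model turns scores into a probability distribution and samples a plausible next token. Nothing in the loop checks whether the result is true.

        ```python
        import math
        import random

        # Pretend the prompt so far is "The capital of France is" and these
        # are the model's scores (logits) for a tiny four-word vocabulary.
        vocab = ["Paris", "Lyon", "Mars", "the"]
        logits = [4.0, 1.5, 0.5, 0.2]

        # Softmax: turn raw scores into a probability distribution.
        exps = [math.exp(l) for l in logits]
        probs = [e / sum(exps) for e in exps]

        # Sampling optimizes for "plausible next token", not "verified fact";
        # a wrong-but-fluent pick is working as designed, not malfunctioning.
        next_token = random.choices(vocab, weights=probs, k=1)[0]
        print({w: round(p, 3) for w, p in zip(vocab, probs)}, "->", next_token)
        ```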