• Leon@pawb.social

    That’s because it’s a false premise. LLMs don’t hallucinate; they do exactly what they’re meant to do: predict text and output something that’s legible and reads as human-written. There’s no training for correctness; how do you even define that?
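
    To make the "no training for correctness" point concrete, here's a minimal sketch (not any particular model's code) of the standard next-token cross-entropy objective, written in PyTorch with made-up tensor names. The loss only rewards putting high probability on whatever token actually came next in the training text; nothing in it measures whether the resulting statement is true.

    ```python
    # Minimal sketch of next-token language-model training.
    # `logits` and `targets` are illustrative placeholders, not real model outputs.
    import torch
    import torch.nn.functional as F

    vocab_size = 50_000
    batch, seq_len = 2, 8

    # Pretend model output: one score per vocabulary entry at each position.
    logits = torch.randn(batch, seq_len, vocab_size)

    # The "ground truth" is simply whichever token followed in the corpus.
    targets = torch.randint(0, vocab_size, (batch, seq_len))

    # Cross-entropy over next tokens: this scores plausibility given the
    # corpus, never factual correctness of the generated text.
    loss = F.cross_entropy(logits.view(-1, vocab_size), targets.view(-1))
    print(loss.item())
    ```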