• theunknownmuncher@lemmy.world
    2 hours ago

    Current AI models have been trained to give a response to the prompt regardless of confidence, which causes the vast majority of hallucinations. By incorporating confidence into training and letting the model respond with “I don’t know”, similar to how refusals are trained, you can mitigate hallucinations without negatively impacting the model.
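
    To make the incentive argument concrete, here’s a minimal sketch (my own illustration, not anything from the article; the reward values are assumptions) of why grading that penalizes confident wrong answers more than abstentions pushes a model toward saying “I don’t know” when it isn’t sure:

    ```python
    # Sketch: under binary grading, always answering is rewarded for lucky guesses.
    # Adding a penalty for wrong answers and a neutral score for abstaining flips
    # the incentive below a confidence threshold. Numbers are illustrative.

    def expected_score(p_correct: float, abstain: bool,
                       reward_correct: float = 1.0,
                       reward_abstain: float = 0.0,
                       penalty_wrong: float = -2.0) -> float:
        """Expected training score for answering vs. abstaining."""
        if abstain:
            return reward_abstain
        return p_correct * reward_correct + (1 - p_correct) * penalty_wrong

    def best_action(p_correct: float) -> str:
        """Pick whichever action maximizes expected score."""
        answer = expected_score(p_correct, abstain=False)
        abstain = expected_score(p_correct, abstain=True)
        return "answer" if answer > abstain else "say 'I don't know'"

    if __name__ == "__main__":
        for p in (0.9, 0.6, 0.3):
            print(f"confidence {p:.0%}: {best_action(p)}")
        # With these numbers the break-even point is p = 2/3, so the model
        # only answers when it is at least ~67% sure it is right.
    ```

    A model trained against a score like this learns that abstaining beats guessing below the threshold, which is the mitigation described above.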

    If you read the article, you’ll find the “destruction of ChatGPT” claim is actually nothing more than the “expert” assuming that users will simply stop using AI if it starts occasionally telling them “I don’t know”, not any kind of technical limitation that prevents hallucinations from being solved. In fact, the “expert” agrees that hallucinations can be solved.

    You’ve done a lot of typing and speak very confidently, but ironically, you seem to have only a basic understanding of how LLMs work and how they are trained, and are just parroting talking points that aren’t really correct.