• ebc@lemmy.ca · 17 hours ago

    “Do not hallucinate”, lol… The best way to get a model to not hallucinate is to include the factual data in the prompt. But for that, you have to know the data in question…
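
    For a sense of what "include the factual data in the prompt" looks like in practice, here's a minimal sketch; `ask_llm` and the example facts are placeholders, not any particular API:

    ```python
    # Minimal sketch of grounding a prompt with known facts.
    # ask_llm() is a placeholder for whatever model client you actually use.
    def ask_llm(prompt: str) -> str:
        raise NotImplementedError("swap in a real LLM call here")

    # Ungrounded: the model has nothing to check itself against,
    # no matter how firmly you tell it not to hallucinate.
    ungrounded = "When is the budget meeting? Do not hallucinate."

    # Grounded: the factual data travels with the question, so the model
    # only has to restate it instead of guessing.
    facts = "The budget meeting is on Tuesday at 15:00 in room B204."
    grounded = (
        "Answer using only the facts below. "
        "If the answer isn't in them, say so.\n"
        f"Facts: {facts}\n"
        "Question: When is the budget meeting?"
    )
    ```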

      • flying_sheep@lemmy.ml · 7 hours ago (edited)

        That’s incorrect: in order to lie, one must know that what one is saying isn’t true.

        LLMs don’t lie, they bullshit.