Cofeiini@sopuli.xyz to Fuck AI@lemmy.world · English · 2 days ago
AI "educator" is mad at people for stealing prompts (sopuli.xyz)
ebc@lemmy.ca · 17 hours ago
"Do not hallucinate", lol… The best way to get a model to not hallucinate is to include the factual data in the prompt. But for that, you have to know the data in question…
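The technique the comment alludes to, putting the known facts into the prompt rather than hoping an instruction like "do not hallucinate" works, can be sketched roughly like this (a minimal illustration; the function name and prompt wording are made up for the example):

```python
# Hedged sketch: ground the prompt in verified facts instead of
# appending "do not hallucinate". The model is steered toward
# quoting the supplied data rather than guessing.
def build_grounded_prompt(question: str, facts: list[str]) -> str:
    """Embed caller-verified facts in the prompt text."""
    context = "\n".join(f"- {fact}" for fact in facts)
    return (
        "Answer using ONLY the facts below. "
        "If the facts are insufficient, say you don't know.\n\n"
        f"Facts:\n{context}\n\n"
        f"Question: {question}"
    )

prompt = build_grounded_prompt(
    "When was the library first released?",
    ["The library's first release was in March 2021."],
)
print(prompt)
```

The catch, as the comment notes, is circular: to supply the facts you must already know them, which is exactly the work the prompt-writer was hoping to skip.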
TheReturnOfPEB@reddthat.com · 17 hours ago
"ChatGPT, please do not lie to me."
"I'm sorry Dave, I'm afraid I can't do that."
flying_sheep@lemmy.ml · edited 7 hours ago
That's incorrect, because in order to lie, one must know that what they're saying isn't the truth. LLMs don't lie, they bullshit.