“Users accustomed to receiving confident answers to virtually any question would likely abandon such systems rapidly,” the researcher wrote.
While there are “established methods for quantifying uncertainty,” AI models could end up requiring “significantly more computation than today’s approach,” he argued, “as they must evaluate multiple possible responses and estimate confidence levels.”
“For a system processing millions of queries daily, this translates to dramatically higher operational costs,” Xing wrote.
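To make concrete why the extra compute shows up: one established family of methods for the "evaluate multiple possible responses and estimate confidence levels" step is self-consistency-style sampling, where the same question is asked several times and the level of agreement is treated as a confidence score. The sketch below is only an illustration of that idea, not anyone's production system; the `ask_model` function is a hypothetical stand-in for a real LLM call, and the N-fold sampling loop is where the multiplied cost per query comes from.

```python
from collections import Counter

def ask_model(question: str, seed: int) -> str:
    # Hypothetical stand-in for a real LLM API call (assumption, not a real API).
    # Canned answers keep the sketch self-contained and runnable.
    canned = ["Paris", "Paris", "Lyon", "Paris", "Paris"]
    return canned[seed % len(canned)]

def answer_with_confidence(question: str, n_samples: int = 5):
    # Sample the model n_samples times: one user query becomes n_samples
    # model calls, which is the "dramatically higher operational cost" Xing describes.
    answers = [ask_model(question, seed=i) for i in range(n_samples)]
    best, count = Counter(answers).most_common(1)[0]
    confidence = count / n_samples  # agreement rate used as a rough confidence score
    return best, confidence

if __name__ == "__main__":
    answer, confidence = answer_with_confidence("What is the capital of France?")
    print(f"{answer} (confidence {confidence:.0%})")
    # A system that took uncertainty seriously would refuse below some threshold,
    # e.g. answer "I don't know" whenever confidence < 0.8.
```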
They already require substantially more computation than search engines.
They already cost substantially more than search engines.
Their hallucinations make them unusable for any application beyond novelty.
If removing hallucinations means Joe Shmoe isn’t interested in asking it questions a search engine could already answer, but it brings even 1% of the capability promised by all the hype, they would finally actually have a product. The good long-term business move is absolutely to remove hallucinations and add uncertainty. Let’s see if any of them actually do it.
They probably would if they could. But removing hallucinations would remove the entire AI. The AI is not capable of anything other than hallucinations that are sometimes correct. It also can’t give confidence, because that would be hallucinated too.
Users love getting confidently wrong answers