LLMs are cool and all, but you can’t use them for anything requiring real precision without allocating human work time to validate the output, unless you want to end up on the national news for producing something fraudulent.
And making it so their image generator can generate porn isn’t going to change that.
I had to correct my boss this morning because they didn’t read the AI output before sending it, and it told our client our services were worthless.
I bitched out Baidu’s LLMbecile because Baidu has lost all capacity for searching in favour of the slop. It literally told me that Baidu was useless for search and recommended several of its competitors instead.
Oopsie!
So sad that I totally believe that.
Yes, currently AI isn’t reliable enough to use instead of a human. All the big AI businesses are betting that this will change, either through training on more data or through some technological breakthrough.
Could be they’re right.
That was the bet with Theranos, too, because Elizabeth Holmes’ machine could correctly identify four viruses.
Presumably LLMs have already trained on the entirety of human knowledge and communication and still produce buggy information, so I’m skeptical that it’ll work out the way the VCs expect, but we’ll see.
This thread is not about AI, it’s about pattern-prediction snake oil.