It’s an AI problem. We know people are stupid. However, people selling AI garbage tell them it’s intelligent, when it really isn’t. It is trained to speak confidently and people believe it. It’s why con(fidence) men work.
The people pushing these products know some people won’t understand it, and they know those people will take what it says at face value — and they fight to push that idea too. They are creating this situation on purpose. If they were responsible, they’d be upfront about the limitations and try to ensure even the most gullible people are skeptical of what it writes. They don’t even try to do this, though. They create a situation where this happens to pad their own pockets.
So if there was no AI, this idiot would not be an idiot to others? It’s not like that.
That’s not remotely close to what I said. I said the companies have a responsibility to inform the idiot that it’s not always accurate. They’ll still be idiots, but they’ll be made aware not to trust the LLM’s response by default. They might complain about this map, but hopefully, once they’re told the LLM was wrong, they’d recognize it was the LLM’s fault, since it warns them that it might make things up.
It doesn’t cure idiots, but it does make it harder for idiots to make the mistake of trusting your software. Instead, these companies push the image that their software is intelligent (“AI”) and constantly send the message that it is to be trusted.
You underestimate the ingenuity of idiots
No, I don’t. I just think companies should be responsible for the output of their products.