Everything about the LLM industry is suspicious and predatory. They’re doing something that is not only unpopular, it’s not just going to put people out of work. People are going to die. And not in the vague economic “40k extra deaths per point drop in the stock market” sense. I mean in the direct “AI directly caused these deaths and there’s no other way to look at it” sense. I’m not sure how, when, or why. But I am sure that the people pushing LLMs the way they are do know. And they’re trying to establish legal protections for when that happens.
It’s already driven multiple people to panic attacks, driven one woman to commit herself out of fear of a psychotic break, and driven several people to suicide and at least one person to murder.
Friend, please get offline and go outside, talk with other humans face to face, touch some grass. Jesus Christ
Listen. I know. I know how it sounds.
I promise I’m not chronically online. LLMs are just the latest end result of a predatory VC industry that has never let ethics slow it down before. Why would ethics slow them down now?
Bro they’re just pocketing investor cash by hyping the latest tech snake oil on a level never before seen.
Were NFTs going to kill people too? 🙄
I don’t think the OP is saying we’ll have LLM Terminators walking around saying, “I’ll be back!” while mowing people down.
But deaths attributed to LLMs have already started.
Sure, so far this seems to just be preying on people who are already mentally ill, so let’s ignore them for now.
What about “AI” pointing the gun and having someone else pull the trigger?
What happens when a trigger-happy cop gets a dangerous-sounding alert and decides to go in guns blazing?
I don’t believe for a minute that the military isn’t looking at weaponizing this kind of detection by also having AI “decide” whether it should attack.
Then there’s the AI in robotaxis - surely those are 100% accurate. Well, they had better be - car crashes are among the leading causes of accidental death in the US.
I could go on about how AI hallucinating in medical notes taken by Copilot could lead to improper prescriptions, or how AI hallucinations could do things like tell people to put glue on pizza.
I mean, I guess some outcomes are just injury, not death, so…
https://lemmy.world/post/40041447
Every member of staff at the Department of War has been told to start relying on AI. A system that consistently hallucinates. Saying it’s going to get people killed is not hyperbole, as much as I wish it was. :/