Idk, there’s always that argument that technology is neutral. But is it? Tech isn’t separate from the world, it’s embedded in a context, and people use it. So I’d like to make the argument that dystopian surveillance tech, all the stuff that fuels the attention economy, and industrialized warfare machinery are something akin to evil. And AI, well, that’s designed in a way that reproduces stereotypes and bias. It’s almost entirely controlled by tech bros and built to their specifications, so it’s kinda laid out to do what they want; there’s intent baked in, in a form. And then there’s the environmental footprint, so it’d really need to perform a lot better or it’s a net-negative outcome, no matter what philosophical arguments we have.
It’s a category error. LLMs are text prediction engines (see the sketch below for what that actually means): there is nothing behind the curtain. They can’t be evil, because that would imply understanding and intent.
LLMs are evil in the way that earthquakes are evil. It’s pure anthropomorphism, and it takes the focus away from where the real issues are.
Don’t get sucked into blaming the hammer when the one swinging it is right there.
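To make the “text prediction engine” point concrete, here’s a minimal sketch of what an LLM actually does under the hood: turn a prompt into a probability distribution over possible next tokens. This assumes the Hugging Face transformers library and the small public gpt2 checkpoint; it’s an illustration, not anyone’s production setup.

```python
# Minimal sketch: an LLM as a next-token predictor.
# Assumes the Hugging Face "transformers" library and the public gpt2 model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The hammer is just a", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # one score per vocabulary token, per position

# All the model "knows": a probability for each candidate next token.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, k=5)
for p, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id)!r}: {p:.3f}")
```

No goals, no beliefs, no malice anywhere in there; generating text is just this step repeated in a loop.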
Yeah, I’m not entirely convinced yet. Sure, you’re right and all. But even with the hammer analogy… there’s the one the carpenter uses, and then there’s the warhammer, specially designed to crush skulls. You’re bound to have a bad time renovating your roof with that thing. So the designer already left some intent in the technology. And on the other side we have what things get used for. I think I’d be willing to attribute evilness to technology that exists solely for evil purposes, like certain kinds of land mines or devices built to torture people.
Other than that, you’re certainly right. Most tech is at least dual-use, or neutral and usable for arbitrary good and evil tasks. I bet this is the case with AI, like other computer tech and automation. And with that, it comes down to the humans who use it as a tool.
The question remains whether that’s a useful argument in practice. When talking about dystopian science fiction, I’m always a bit unsure about the interplay between the people in power, the technology, and society. Is it the people who use technology to oppress other people? Or did the existence of the technology put them into the position of power in the first place, enabling the dystopian society?
And I’m sure we’ll give AI the power to make decisions. We already let algorithms shape our information, lives, and society, and oftentimes not for the better. In my eyes it doesn’t tell us much to say the computer code has no understanding or intent. It’s still going to affect our lives, and it can have agency or autonomy of a sort if it’s given the power to act. It doesn’t need intent in the sense of a conscious human being for that. (And that’s not exclusive to AI. A more traditional automated business process might also decline someone’s loan or medical treatment and ruin their life, or approve the military bombing someone.)
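That parenthetical is easy to make concrete. Here’s a hypothetical sketch of a plain, pre-AI business rule; every field name and threshold is made up, but systems like this decline real applications every day without any understanding or intent behind them:

```python
# Hypothetical sketch: an intent-free business rule with real-world effects.
# All field names and thresholds are invented for illustration.
from dataclasses import dataclass

@dataclass
class LoanApplication:
    credit_score: int
    annual_income: float
    requested_amount: float

def decide(app: LoanApplication) -> str:
    # No understanding, no malice: just threshold checks. If this runs
    # unreviewed, it effectively "acts" on the applicant's life anyway.
    if app.credit_score < 620:
        return "DECLINED"
    if app.requested_amount > 0.5 * app.annual_income:
        return "DECLINED"
    return "APPROVED"

print(decide(LoanApplication(credit_score=600, annual_income=40_000, requested_amount=10_000)))
# -> DECLINED, with no conscious intent anywhere in the loop
```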
Yeah, this is a sore point. Whenever management says, “the company decided…”, I really want to stop them and scream, “Who?! Who in the company decided?!”
I can’t agree. LLMs don’t have the capacity to be evil. They may be called AI, but there is no “I” anywhere in there.
The companies, however… well, that is a different story.
Those logos are just as closely related to the companies as to the code.
They’re actually corporate logos, so … yeah. By definition.
I don’t disagree, but my point is still the one above: calling the LLMs themselves evil is a category error, and it takes the focus off the people responsible.
These 9 companies have made billions by convincing thousands of other companies that their fun text generator can replace skilled workers.