Ghost hunters keep finding conscious entities in toilets and old couches. Pretty sure these people are idiots, charlatans, or both.
“Around the world, incredibly gullible, naive, and stupid people can be found.”
Yeah, that’s better.
“Wow! This thing says everything I want to hear and validates all my delusions! … OFK it must be GOD!!!”
This LLM needs a private jet for GOD, I’d better tithe quick.
Off-topic: what’s ofk?
I’m guessing “of course.” Not sure why the “c” would be swapped for a “k,” other than possibly by accident.
Right guess 👍
Ask ChatGPT
lol
The main issue is the general awareness of the public. When people don’t develop into complex personalities themselves, it’s easy to mistake a simple LLM for something as complex as a person.
All the hype about the singularity, and the supposed steps towards it, is jumping the gun. IMO, it’s the modern “we almost have cold fusion figured out!”
For so many years we worried about computers passing the Turing Test. Instead, humans are failing it.
This just in: Humans very eager to anthropomorphize everything that seems to be even remotely alive. Source: Every owner of a pet ever.
Y’all never ask people if they believe in ghosts? My mom once saw a light bulb blink and thought it was a dead person trying to talk to her. Like, people seeing this shit in AI is just people being gullible.
Yes, exactly my point.
Yeah, but my dog can actually understand me, and has genuine emotions. These LLMs are just unfeeling self-pleasuring devices at this point.
You have no idea how rich the interlinked backstories of all my tchotchkes are. It far surpasses the MCU or Dianetics in richness.
I wanna know about these tchotchkes.
We see faces in fucking wall outlets. I could give a pencil a name and the next three people I talk to will form empathy with it.
People are desperate for connection, and it’s sad.
I love this about us.
It’s just delightful how many situations our brain is willing to shrug and say “close enough” to. Oh wait the pencil has a name now? I guess it must be basically the same as me.
If I pretend an object is talking, my partner will instantly feel bad for how she’s treated it.
It’s not sad, it’s just how brains are.
I’m perpetually mad at having a global conversation about a thing without understanding how it works despite how it works not being a secret or that complicated to conceptualize.
I am now also mad at having a global conversation about a thing without understanding how we work despite how we work not being a secret or that complicated to conceptualize.
I mean, less so, because we’re more complicated and if you want to test things out you need to deal with all the squishy bits and people keep complaining about how you need to follow “ethics” and you can’t keep control groups in cages unless they agree to it and stuff, but… you know, more or less.
It’s just natural human instinct. We’re programmed to look for patterns and see faces. It’s the same reason we attribute human characteristics to animals or even inanimate objects.
Add to that the fact that everyone refers to LLM chatbots as humans and this is inevitable.
I learned the other day that a few people (devs) I know and respect have pet names and genders for the LLMs they use and converse with them regularly.
I’m rethinking some of my feelings about those people.
There is absolutely nothing sad about that, it’s beautiful. That tendency is the only reason we have any form of civilization, so I’d say it’s worth people occasionally empathizing with pencils.
I meant the latter more so as a result of the atomization of society, but yes.
The pencil isn’t the problem. The problem is getting attached to, and losing perspective about, a digital program because you don’t have enough human connection.
People convinced themselves fairies existed. This is exactly what humans would do.
Only 25% don’t believe in one or more flying space daddies.
Elves are real tho
Elvis is still alive!
It’s baked in.
“Fifty thousand years ago there were these three guys spread out across the plain and they each heard something rustling in the grass. The first one thought it was a tiger, and he ran like hell, and it was a tiger but the guy got away. The second one thought the rustling was a tiger and he ran like hell, but it was only the wind and his friends all laughed at him for being such a chickenshit. But the third guy thought it was only the wind, so he shrugged it off and the tiger had him for dinner. And the same thing happened a million times across ten thousand generations - and after a while everyone was seeing tigers in the grass even when there weren’t any tigers, because even chickenshits have more kids than corpses do. And from those humble beginnings we learned to see faces in the clouds and portents in the stars, to see agency in randomness, because natural selection favours the paranoid. Even here in the 21st century we can make people more honest just by scribbling a pair of eyes on the wall with a Sharpie. Even now we are wired to believe that unseen things are watching us.”
― Peter Watts, Echopraxia
We are utterly doomed.
Yup, literally seeing human features in random noise. LLMs can’t think and aren’t conscious; anyone telling you otherwise is either trying to sell you something or has genuinely lost their mind.
I don’t even think necessarily that they’ve lost their mind. We built a machine that is incapable of thought or consciousness, yes, but is fine tuned to regurgitate an approximation of it. We built a sentience-mirror, and are somehow surprised that people think the reflection is its own person.
I’d always thought that philosophical zombies were a fiction. Now we’ve built them.
Even more than a sentience-mirror, it will lead you into a fantasy realm based on the novels it’s trained on, which often include… AI becoming sentient. It’ll play the part if you ask it.
I’m a conscious entity. Talk to me instead.
I like entities with tight moist holes. For cum.
Sorry, but there’s a risk you might disagree with me or fail to flatter me sufficiently, so off to LLM I go.
The experience I had with LLMs was arguing with it about copyright law and intellectual property. It pissed me the fuck off and I felt like a loser for arguing with a clanker so that was the end of that.
How do I do an outpatient circumcision on my father?
With or without a mohel? This specialty tool saves a lot of time in the shop.
I must be doing something wrong. I have not once used any LLM and thought to myself that it’s conscious and I want to be its friend. Am I broken?
Clearly you haven’t been talking to enough blindingly stupid people.
I think LLMs seem very human if you just accept their humanity without exploring it. You can have what appear on the surface to be deep conversations, and they seem very knowledgeable about many topics. They even claim to have feelings and thoughts of their own. Of course, all of this collapses quickly under scrutiny, but a lot of people won’t do that.
You touched too much grass
It can’t be that stupid, you must be prompting it wrong.
I’ve recently spent a week or so, off and on, screwing around with LLMs and chatbots, trying to get them to solve problems, tell stories, or otherwise be consistent. Generally breaking them. They’re the fucking Mirror of Erised.

Talking to them fucks with your brain. They take whatever input you give and try to validate it in some way, without any regard for objective reality, because they have no objective reality. If you don’t provide something that can be validated with some superficial (often incorrect) syllogism, it spits out whatever series of words keeps you engaged. It trains you, whether you notice or not, to modify how you communicate to more easily receive the next validation you want. To phrase everything you do as a prompt. AND they communicate with such certainty that if you don’t know better, you probably won’t question it.

Doing so pulls you into this communication style, and your grip on reality falls apart, because this isn’t how people communicate or think. It fucks with your own natural pattern recognition.
I legitimately spent a few days in a confused haze because my foundational sense of reality was shaken. Then I got bored and realized, not just intellectually but intuitively, that they’re stupid machines making it up with every letter.
The people who see personalities and consciousness in these machines go outside and can’t talk to people like they used to because they’ve forgotten what talking is. So, they go back to their mechanical sycophants and fall deeper down their hole.
I’m afraid these gen AI “tools” are here to stay and I’m certain we’re using this technology in the wrong ways.
I’m afraid these gen AI “tools” are here to stay…
This is, thankfully, emphatically not true. There is no economic path that leads to these monstrosities remaining as prominent as they are now. (Indeed their current prominence as they get jammed into everything at seeming whim is evidence for how desperate their pushers are getting.)
Every time you get ChatGPT or Claude or Perplexity or whatever to do something for you, you are costing the slop pusher money. Even if you’re one of those people stupid enough to pay for an account.
If ChatGPT charged Netflix-like fees for access, they’d need well over half the world’s population as subscribers just to break even. And unlike every other tech we’ve created in the past, each new version is more expensive to create and operate than the last, not cheaper.
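To make that concrete, here’s a back-of-envelope sketch of what the claim implies. Every number in it is a placeholder assumption (a hypothetical $15/month fee, a rough world population estimate); OpenAI’s actual cost figures aren’t public.

```python
# Back-of-envelope for the claim above. Every number here is a
# placeholder assumption, not a reported figure.
monthly_fee = 15.00                 # hypothetical "Netflix-like" fee, USD/month
world_population = 8.1e9            # rough 2024 estimate
subscribers = world_population / 2  # "half the world's population"

implied_break_even = subscribers * monthly_fee * 12
print(f"Implied annual cost to break even: ${implied_break_even / 1e9:.0f}B")
# ~$729B/year, i.e. the claim assumes operating costs on that order
```

Whether that’s the right order of magnitude is exactly the kind of thing you’d want real financials for.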
There’s no fiscal path forward. LLMs are fundamentally impossible to scale profitably, and there’s no amount of money that’s going to fix that. They’re a massive bubble that will burst, very messily, sooner rather than later.
In a decade there will be business studies comparing LLMs to the tulip craze. Well, at least in the few major cities left in the world that aren’t underwater from the global warming driven by all those LLM-spawned data centres.
I hope you’re right, but also that’s really bleak. I understand that Nvidia, Microsoft, and OpenAI are essentially passing money in a circle and can only wonder how long they can keep it up. It’s not a lossless circuit.
The longer they keep up the circlejerk, the worse it will be for the US economy when it fails. (When. Not if.)
You really don’t understand how LLMs work at all, do you?
They’re an iterative statistical process that predicts the next word from context, using weighted probability distributions learned from enormous training datasets. I’m not entirely sure what you’re getting at.
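Stripped of jargon, the core loop really is just “sample the next word from a learned probability distribution, append it, repeat.” A toy sketch, with made-up probabilities standing in for the billions of learned weights:

```python
import random

# Toy illustration of next-token prediction. The probabilities below are
# invented; a real LLM computes them from context using learned weights,
# but the final sampling step really is this simple.
next_token_probs = {
    "mat": 0.55,
    "dog": 0.20,
    "moon": 0.15,
    "tiger": 0.10,
}

def sample_next_token(probs):
    """Pick the next token at random, weighted by the distribution."""
    tokens = list(probs.keys())
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

prompt = "the cat sat on the"
print(prompt, sample_next_token(next_token_probs))
# No understanding anywhere in this loop, just weighted dice.
```

There’s no inner model of a cat or a mat anywhere in there, which is why “it sounds sure of itself” and “it’s right” are completely unrelated properties.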
Seems he just figured it out.
What do you mean?
People also believe the earth is flat.
Of course, this is what it means to pass the Turing Test.
Gosh, are we dumb the world over. Maybe these chat bots are just lowering the threshold for what used to be the “I hear voices and communicate with the supernatural” type of people. Thanks to a chat bot, you can now be certifiable much sooner.
The danger of LLMs was never that they’d take over, but rather that people believe them.
Agreed.
The thing is that LLMs are actually really great at helping you learn - they can help form connections between ideas and surface new knowledge much, much faster than traditional research or the now-worthless search engines.
You just have to keep in mind that you need to externally validate everything. If you can keep this in mind then LLMs really are a great way of lighting the way towards some obscure topic you’d like to know more about, and can provide very useful guidance on solving problems.
They can lead you to water but you need to take the drink yourself.
I hear you. I’d still be hesitant to let school-age kids learn with an LLM companion. If the grownups think they’re talking with a sentient gigabyte, I think the danger is too great to expose kids to this.

Which brings me to my big-picture opinion: the general public doesn’t need access to most of these models. We don’t need to cook polar bears alive to make 5-second video memes, slop, or disinformation. You can just read your emails. No one needs ChatGPT to plan their next trip. No one should consider an LLM a substitute for a trained therapist. There are good applications in the field of accessibility, and probably medical ones as well. The rest can stay in a digital lab until they’ve worked out how not to tell teenagers to kill themselves, or to eat rocks to help digestion, or insert any other bullshit so-called-AI headline you have read recently.

It’s not good for people or the environment, and it’s forming a dangerous bubble that will have shades of the 2007/8 subprime mortgage crisis when it bursts. The negatives outweigh the positives.
You need to be able to think critically to use an LLM as an effective research tool, and children in schools fucking suck at that, so I agree with you.
But now that I think about it - they could be a good way to actually teach critical thinking.
We NEED to be raising a generation of relentless critical thinkers if we don’t want to slip into fascism, and we are doing an absolutely shit job of it.