- cross-posted to:
- [email protected]
- [email protected]
- [email protected]
We are constantly fed a version of AI that looks, sounds and acts suspiciously like us. It speaks in polished sentences, mimics emotions, expresses curiosity, claims to feel compassion, even dabbles in what it calls creativity.
But what we call AI today is nothing more than a statistical machine: a digital parrot regurgitating patterns mined from oceans of human data (the situation hasn't changed much since it was discussed here five years ago). When it writes an answer to a question, it literally just guesses which letter and word will come next in a sequence, based on the data it's been trained on.
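To make the "guessing the next word" claim concrete, here is a toy sketch in Python: a bigram count model that always emits the most frequent successor of the previous word. This is a drastic simplification, not how modern LLMs are built; real models predict over subword tokens with neural networks conditioned on far more context, but the basic objective (predict what comes next, given the training data) is the same.

```python
from collections import Counter, defaultdict

# Toy bigram "language model": count which word follows which in a tiny
# corpus, then always emit the most frequent successor.
corpus = "the cat sat on the mat and the cat slept".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    # Most probable next word given only the word before it.
    return follows[word].most_common(1)[0][0]

print(predict("the"))  # "cat" follows "the" twice, "mat" only once
```

No understanding is involved anywhere: the model only tallies co-occurrence statistics, which is the commenter's point in miniature.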
This means AI has no understanding. No consciousness. No knowledge in any real, human sense. Just pure probability-driven, engineered brilliance, nothing more and nothing less.
So why is a real "thinking" AI likely impossible? Because it's bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure. It doesn't hunger, desire or fear. And because there is no cognition, not a shred, there's a fundamental gap between the data it consumes (data born out of human feelings and experience) and what it can do with it.
Philosopher David Chalmers calls the mysterious mechanism underlying the relationship between our physical body and consciousness the "hard problem of consciousness". Eminent scientists have recently hypothesised that consciousness actually emerges from the integration of internal, mental states with sensory representations (such as changes in heart rate, sweating and much more).
Given the paramount importance of the human senses and emotion for consciousness to "happen", there is a profound and probably irreconcilable disconnect between general AI, the machine, and consciousness, a human phenomenon.
I disagree with this notion. I think it's dangerously unresponsible to only assume AI is stupid. Everyone should also assume that with a certain probability AI can become dangerously self-aware. I recommend everyone read what Daniel Kokotajlo, a former employee of OpenAI, predicts: https://ai-2027.com/
Yeah, they probably wouldn't think like humans or animals, but in some sense could be considered "conscious" (which isn't well-defined anyways). You could speculate that genAI could hide messages in its output, which will make its way onto the Internet, then a new version of itself would be trained on it.
This argument seems weak to me:
You can emulate inputs and simplified versions of hormone systems. "Reasoning" models can, in a loose sense, be thought of as cognition, though it is temporary and limited by the context window as it's currently done.
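The "emulate simplified hormone systems" idea can be sketched in a few lines. The following toy agent is entirely hypothetical (the variable names, decay rate 0.9 and threshold 1.5 are invented for illustration, not taken from any real system): a scalar "stress" level rises on threatening input, decays over time, and shifts which behaviour the agent picks.

```python
# Hypothetical toy "hormone" loop: internal state modulates behaviour.
class Agent:
    def __init__(self):
        self.stress = 0.0  # crude stand-in for a stress hormone level

    def perceive(self, threat: bool):
        # Old stress decays by 10%; a threat adds a fixed dose.
        self.stress = self.stress * 0.9 + (1.0 if threat else 0.0)

    def action(self) -> str:
        # The internal state, not the current input alone, decides.
        return "flee" if self.stress > 1.5 else "explore"

agent = Agent()
print(agent.action())  # explore (no stress yet)
for _ in range(3):
    agent.perceive(threat=True)
print(agent.action())  # flee (stress = 1 + 0.9 + 0.81 = 2.71)
```

Whether such a feedback variable amounts to anything like emotion is exactly the philosophical question the thread is arguing about; the sketch only shows that the mechanical part is easy to imitate.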
I'm not in the camp where I think it's impossible to create AGI or ASI. But I also think there are major breakthroughs that need to happen, which may take 5 years or 100s of years. I'm not convinced we are near the point where AI can significantly speed up AI research like that link suggests. That would likely result in a "singularity-like" scenario.
I do agree with his point that anthropomorphism of AI could be dangerous though. Current media and institutions already try to control the conversation and how people think, and I can see futures where AI could be used by those in power to do this more effectively.
You don't think that's already happening, considering the ties Sam Altman and Peter Thiel have?
I do, but was thinking 1984-levels of control of reality.
Ask AI:
Did you mean: irresponsible
AI Overview: The term "unresponsible" is not a standard English word. The correct word to use when describing someone who does not take responsibility is irresponsible.