The thing on the right is also a glorified prediction engine. I suppose whoever made this is steeped in religious dogma, but humans aren’t that advanced either. We just predict things.
Inb4 the advanced fat-based brains brigade me using their advanced fat-based prediction engines 🙄
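If “prediction engine” sounds vague, here’s the idea at toy scale. A minimal bigram sketch, nowhere near a real LLM’s transformer, but the same predict-the-next-token loop (corpus and everything else here is made up for illustration):

```python
import random
from collections import defaultdict

# Toy "prediction engine": a bigram model. Real LLMs do the same job at
# vastly larger scale, predicting a distribution over the next token.
corpus = "we just predict things and then we predict more things".split()

counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(word):
    """Sample the next word in proportion to how often it followed `word`."""
    followers = counts[word]
    if not followers:
        return random.choice(corpus)
    words, weights = zip(*followers.items())
    return random.choices(words, weights=weights)[0]

# Generate a short continuation, one prediction at a time.
word = "we"
out = [word]
for _ in range(6):
    word = predict_next(word)
    out.append(word)
print(" ".join(out))
```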
It’s still leagues ahead of LLMs. I’m not saying it’s entirely impossible to build a computer that surpasses the human brain in actual thinking. But LLMs ain’t it.
The feature set of the human brain is different, in a way you can’t compensate for just by increasing scale. So you get something that almost works, but not quite, while using several orders of magnitude more power.
We optimize and learn constantly. We have chunking, whereby a complex idea becomes simpler for our brain once it’s been processed a few times, and this allows us to progressively work on more and more complex ideas without an increase in our working memory. And a lot of other stuff.
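If you want a concrete toy version of chunking (purely illustrative, not a cognitive model; the chunk table and the slot count are invented for the example), the point is that a learned chunk occupies one working-memory slot no matter how many primitives it contains:

```python
# Toy model of chunking. Working memory holds a fixed number of slots,
# but a learned chunk takes one slot regardless of its internal size.
WORKING_MEMORY_SLOTS = 4

# Chunks learned through repetition: a familiar phone prefix, a word, etc.
learned_chunks = {
    ("1", "8", "0", "0"): "1-800",
    ("c", "a", "t"): "cat",
}

def chunk(items):
    """Greedily replace known multi-item patterns with single chunks."""
    result, i = [], 0
    while i < len(items):
        for pattern, name in learned_chunks.items():
            if tuple(items[i:i + len(pattern)]) == pattern:
                result.append(name)   # many primitives -> one slot
                i += len(pattern)
                break
        else:
            result.append(items[i])
            i += 1
    return result

raw = ["1", "8", "0", "0", "c", "a", "t", "s"]
chunked = chunk(raw)
print(chunked)                                # ['1-800', 'cat', 's']
print(len(raw) > WORKING_MEMORY_SLOTS)        # True: raw overflows memory
print(len(chunked) <= WORKING_MEMORY_SLOTS)   # True: chunked version fits
```

Same content, fewer slots: that’s how you keep climbing toward more complex ideas without more working memory.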
If you spend enough time using LLMs, you can’t help noticing how different their way of working is from your own.
I think the moat is that when a human is born and their world model starts “training”, it’s already pre-trained by millions of years of evolution. Instead of starting from random weights like any artificial neural network, it starts with usable stuff, lessons from scenarios it may never encounter but will nevertheless gain wisdom from.
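In ML terms that’s basically the difference between random initialization and loading a pretrained checkpoint. A rough sketch of the analogy, assuming PyTorch; the architecture is arbitrary and the “evolution” phase is faked for illustration:

```python
import torch
import torch.nn as nn

def make_brain():
    # Stand-in architecture; the sizes are arbitrary for illustration.
    return nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10))

# An artificial network starts from random weights.
newborn_ann = make_brain()

# The claim: a newborn brain starts from weights already shaped by
# evolution. Here we fake that by copying weights from an "evolved"
# model, like loading a pretrained checkpoint before fine-tuning.
evolved = make_brain()          # imagine millions of years of training here
newborn_human = make_brain()
newborn_human.load_state_dict(evolved.state_dict())

# Both then learn from experience, but one starts with usable priors.
optimizer = torch.optim.SGD(newborn_human.parameters(), lr=1e-3)
```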
I don’t spend time working with LLMs. I’d agree we have additional features. For example, I think that while the computers currently can only guess, we can guess and check in a meaningful way. But that’s not what the meme was about; I’d argue the meme was barely about anything other than “ai bad, me smort”. Ironic, since the LLM could probably make a better one even if it “doesn’t understand”, whatever “understanding” is.
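On “guess and check”: the difference is whether feedback from each check can steer the next guess. A toy sketch, where number guessing stands in for any problem with a verifier:

```python
import random

def verify(candidate, secret):
    """A meaningful check: tells you not just wrong, but which way."""
    if candidate == secret:
        return "correct"
    return "too low" if candidate < secret else "too high"

def guess_only(low, high, secret, budget=20):
    """Blind guessing: no feedback ever informs the next guess."""
    return any(random.randint(low, high) == secret for _ in range(budget))

def guess_and_check(low, high, secret):
    """Each checked guess prunes the space, so the search converges."""
    attempts = 0
    while low <= high:
        attempts += 1
        mid = (low + high) // 2
        result = verify(mid, secret)
        if result == "correct":
            return attempts
        low, high = (mid + 1, high) if result == "too low" else (low, mid - 1)

secret = 271_828
print(guess_only(1, 1_000_000, secret))       # almost certainly False
print(guess_and_check(1, 1_000_000, secret))  # ~20 checked guesses
```

Twenty checked guesses cover a million candidates; twenty blind guesses cover twenty.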
Do you not have an internal experience?
Of course I do; that doesn’t mean we understand it.
I don’t need to understand consciousness to be confident an LLM is not conscious.
Dogs are glorified barking machines. Does a tape player playing a recording of a dog barking have the consciousness or intelligence of a dog?
Are dogs conscious? What about mites?
Probably; their interactions with humans and other dogs suggest they have a “theory of mind”.
Mites? No.
You can’t prove that I do, and I can’t prove that you do. Those metaphysical arguments don’t have much punch in a scientific conversation.
Sorry, but I’m not a prediction engine. I am capable of abstract thought and of actually understanding the meaning of words.
I can also process all kinds of different data and make connections between them, including emotional connections.
Another cool trick: I also have this thing called consciousness, which is something I can’t explain or put into words, but I know it exists. All under 20W.
Maybe you’d be able to if you dial it to 25W
So you have something you don’t understand and can’t prove exists. Like a hallucination?
Tbh the rest isn’t worth responding to. Emotional connections? Come on, you’re a horny bag of chemical soup. None of this is real. Humans mostly guess what reality is anyway.
Nonetheless, the human brain is a better prediction engine.
Yes, we have better glorified prediction engines in general.