Anyone who thinks intelligence means guessing the right word next probably loves AI
Current LLMs are definitely not intelligent, but predicting the future is a big part (if not the most important part) of intelligence.
Your comment is a bit like saying that humans can’t be intelligent, because the biochemistry in our brains is just laws of physics in motion, and the laws of physics are not intelligent.
Intelligence is an emergent property. You can definitely be intelligent even if every component is not.
But with LLMs we found a weird new “dimension”: something can be very knowledgeable without being intelligent. Even current LLMs have more general knowledge than all humans, but they lack actual intelligence.
it doesn’t predict, it follows a weighted graph or the equivalent. it doesn’t guess, /dev/urandom input makes the path unpredictable. any case where it looks like it predicts or guesses is purely accidental, and all in the eye of the observer.
further, it only possesses knowledge to the degree that an encyclopedia does. the prompt is just the equivalent of a hash key pulling a bucket out of a map.
it is literally just a huge database of key-value pairs stored so as to minimize the description length of the values.
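(To make that analogy concrete, here is a minimal toy sketch in Python of the “prompt as a hash key pulling a bucket out of a map” picture. The knowledge table and the lookup helper are invented purely for illustration; this shows the lookup-table model being described, not how an LLM is actually implemented.)

    # Toy sketch of the lookup-table picture: the prompt is normalized into a key
    # and the stored value (the "bucket") is returned verbatim. Purely illustrative.
    knowledge = {
        "capital of france": "Paris",
        "boiling point of water": "100 degrees C at sea level",
    }

    def lookup(prompt: str) -> str:
        # Treat the prompt as a hash key and pull out the matching bucket, if any.
        key = prompt.strip().lower().rstrip("?")
        return knowledge.get(key, "<no bucket for that key>")

    print(lookup("Capital of France?"))  # -> Paris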
The training process evolves models to make predictions. The actual underlying mechanisms are not too relevant because the prediction function is an emergent property.
Your brain is just biochemistry and biochemistry isn’t intelligent and yet you are. Think of the number three and all you know about it. There is not a single neuron in your brain that has any idea what the concept of three even means. It’s an emergent behavior.
there’s no emergent behavior in llms. your perception that there is, is an anthropomorphism, same as with the idea of prediction. statistically “predicting” the next word based on the frequency of input data isn’t an emergent property, it exists as a static feature of the algorithm from the start. at a certain level of complexity, llms appear to produce comprehensible text, provided you stop them in time. that’s merely because of the rules of the algorithm. the illusion of intelligence comes merely from being able to select “merged buckets” from the map, which are put together mathematically.
it is a one trick pony that will never become anything else.
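(As a concrete illustration of “statistically predicting the next word based on the frequency of input data,” here is a minimal toy sketch in Python: a bigram-style frequency table plus a weighted random draw standing in for the /dev/urandom input mentioned above. The counts table and the next_word helper are invented for illustration; a real LLM computes its next-word distribution with a neural network rather than a frequency table, so treat this only as a cartoon of the sampling step.)

    import random

    # Invented frequency table: how often each word followed a two-word context
    # in some imaginary training text. Purely illustrative numbers.
    counts = {
        ("the", "cat"): {"sat": 3, "ran": 1},
        ("cat", "sat"): {"on": 4},
        ("sat", "on"): {"the": 4},
        ("on", "the"): {"mat": 2, "cat": 2},
    }

    def next_word(context):
        # Pick the next word in proportion to how often it followed this context;
        # the random draw is what makes each generated path unpredictable.
        options = counts.get(context)
        if not options:
            return None
        words = list(options)
        weights = [options[w] for w in words]
        return random.choices(words, weights=weights, k=1)[0]

    out = ["the", "cat"]
    for _ in range(6):
        w = next_word((out[-2], out[-1]))
        if w is None:
            break
        out.append(w)
    print(" ".join(out))  # e.g. "the cat sat on the mat"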
Excuse me but what do you think memory is other than a huge database of key-value pairs?
not sure what that is supposed to mean. memory != intelligence, or a book would have it.
Isn’t this like saying the dictionary is knowledgeable because it knows more words than you?
The dictionary is certainly more knowledgeable about words than you.
Let’s abstract it further. Rip every page out of the dictionary and put it through a shredder. All the knowledge is still there, the paper hasn’t been destroyed, and the knowledge can be accessed by someone patient; it’s just not in a form that can be easily read.
But is that pile of shredded paper knowledgeable?
I don’t get your analogy. Put your brain through a shredder. Is it still intelligent? All the atoms are still there.
Exactly? Both intelligence and knowledgeability are emergent; you can’t just have all the knowledge in one place and then call it knowledgeable (or intelligent, for that matter). A book (or a chatbot) isn’t knowledgeable, it merely contains knowledge.
I’m not a native speaker, but that sounds like semantics to me. How would you, when chatting, differentiate if the other end is “knowledgeable” or if it “merely contains knowledge”?
The distinction is important because an intelligent being can actually be trusted to do the things they are trained to do. LLMs can’t. The “hallucination” problem comes from these things just being probability engines: they don’t actually know what they’re saying. B follows A; they don’t know why, they don’t care, and they can’t care. That’s why LLMs are not actually able to really replace workers; at best they’re productivity software that can (maybe, I’m not convinced) make human workers more productive.
One distinction is that it requires work to actually get real, useful knowledge out of these things. You can’t just prompt it and always expect the answer to be correct or useful; you have to double-check everything because it might just make shit up.
The knowledgeability, the intelligence, still comes from the human user.
Are you trying to say that the word ‘knowledgeable’ has some implication of intelligence? Because, depending on context, yes it can. Or are you trying to say that LLMs take a lot of time and/or energy to reassemble their shredded data? To answer your question, yes, the pile of shredded paper contains knowledge, and its accessibility is irrelevant to the conversation.
I’m saying both - a parking lot covered with shredded dictionaries isn’t knowledgeable. It doesn’t know anything.
Your exchange makes me think about the Chinese room thought experiment.
The person inside the room has instructions and a dictionary they use to translate Chinese symbols into English words. They never leave the room and never interact with anyone. They just translate single words.
They don’t understand Chinese, but the output of the system (the room) gives the impression that there is thinking behind the process. If I remember correctly, it was an argument against the Turing test. The claim was that computers could be extremely efficient at constructing answers that seem to be backed by human consciousness/thinking.
Right, so the parking lot covered with shredded dictionaries needs a human mind or else it’s just a bunch of trash.
The human inside the Chinese room, or in the parking lot picking up and organizing the trash, or in a discussion with a chatbot, is still critical to the overall intelligence/knowledgeability of the system. The human is still needed for that spark and, without it, it’s just trash.
I think you are right. IMHO the room actually does speak/understand Chinese, even if the robot/human in the room does not.
There are no neurons in your brain that “understand” English, yet you do. Intelligence is an emergent property. If you “zoom-in” enough everything is just laws of physics and those laws don’t understand English or Chinese.