there’s no emergent behavior in llms. the perception that there is comes from anthropomorphizing, same as with the idea of “prediction”. statistically “predicting” the next word based on the frequency of the input data isn’t an emergent property; it’s a static feature of the algorithm from the start. at a certain level of complexity, llms appear to produce comprehensible text, provided you stop them in time, but that’s just the rules of the algorithm playing out. the illusion of intelligence comes merely from selecting “merged buckets” from the map and stitching them together mathematically.
it is a one-trick pony that will never become anything else.
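here’s a rough sketch of the kind of frequency-based “prediction” described above, as a toy bigram model (the corpus and names are made up for illustration; a real llm uses learned weights over tokens rather than raw counts). the point it illustrates: the table is fixed before generation starts, and generation is just repeated lookup plus a stopping rule.

```python
# toy sketch only: frequency-based next-word selection from a static table.
# the corpus below is invented; nothing here is how a production llm is built.
from collections import Counter, defaultdict
import random

corpus = "the cat sat on the mat the cat ate the fish".split()

# build a static table once: for each word, count which words follow it.
follow_counts = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follow_counts[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """pick the next word weighted by how often it followed `word` in the corpus."""
    counts = follow_counts[word]
    if not counts:
        # word never appeared with a successor; fall back to a random corpus word
        return random.choice(corpus)
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights, k=1)[0]

# generate a short run of text, stopping after a fixed number of words
word = "the"
output = [word]
for _ in range(8):
    word = predict_next(word)
    output.append(word)
print(" ".join(output))
```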