Based on what?
Math, physics, the fundamental programming limitations of LLMs in general. If we’re ever gonna actually develop an AGI, it’ll come about along a completely different pathway than LLMs and algorithmic generative “AI”.
Based on what LLMs are. They predict the probability of the next token (usually a word or piece of a word). They can’t think, they can’t understand, they can’t question things. If you ask one for a seahorse emoji, it has a seizure instead of just telling you that no such emoji exists.
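To make the “predicting token probability” point concrete, here’s a minimal toy sketch of that one mechanism: the model assigns a score (logit) to every token in its vocabulary, a softmax turns those scores into probabilities, and the most probable token gets emitted. The vocabulary, the logit values, and the pick_next_token helper here are all made up for illustration; a real LLM does the same step over tens of thousands of tokens, with the logits coming from a neural network instead of a hard-coded list.

```python
import math

# Toy vocabulary and made-up logits for some context -- purely illustrative
# numbers, not taken from any real model.
vocab = ["emoji", "horse", "fish", ".", "unicorn"]
logits = [2.1, 0.3, 0.7, 1.5, -0.4]

def softmax(scores):
    """Turn raw scores into a probability distribution that sums to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def pick_next_token(vocab, logits):
    """Greedy decoding: return the single most probable next token."""
    probs = softmax(logits)
    best = max(range(len(vocab)), key=lambda i: probs[i])
    return vocab[best], probs[best]

token, prob = pick_next_token(vocab, logits)
print(f"next token: {token!r} with probability {prob:.2f}")
```

Everything an LLM produces (answers, “reasoning”, even refusals) is built by repeating that single predict-one-token step over and over, which is the crux of the argument above.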