That’s exactly it. LLMs and their image counterparts have no innate or emerging knowledge, whatever people tend to assume. Image generators literally start from random noise, like the static you used to see on TV, and text models sample each word from a probability distribution using a random number generator. So the response to the same question changes because the random draw changed, not because the algorithm learned or reconsidered anything. Run it again with the same noise (the same random seed) and the answer is identical. No knowledgeable or self-sufficient AI will ever evolve from that.
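A toy sketch of the point, nothing like a real model: the "model" here is just a made-up, fixed probability table over three words, and the only thing that varies between runs is the seed fed to the random number generator. Different seeds give different replies; the same seed reproduces the reply exactly.

```python
import random

# Made-up next-word distribution standing in for a trained model's output.
vocab = ["yes", "no", "maybe"]
probs = [0.5, 0.3, 0.2]

def sample_reply(seed, length=5):
    rng = random.Random(seed)  # the "noise source" for this run
    return [rng.choices(vocab, weights=probs)[0] for _ in range(length)]

print(sample_reply(1))   # one run of "noise"
print(sample_reply(2))   # fresh noise, likely a different reply
# Same seed, same noise -> the reply is bit-for-bit identical.
assert sample_reply(42) == sample_reply(42)
```

The fixed table never changes between calls, which is the point: all the variation you see comes from the seed, none from anything the "model" learned in between.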