• 11 Posts
  • 514 Comments
Joined 4 years ago
Cake day: December 20th, 2021

  • The problem is that an LLM is a language model, not a model of objective reality, so the best it can do is estimate the probability of a particular sentence appearing in the language, not the probability that the sentence is a true statement about objective reality.

    They seem to think that they can use these confidence measures to filter out output the model is not confident is correct, but there are infinitely many highly probable sentences in a language that are false in reality. An LLM has no way of distinguishing unlikely from false, or likely from true.
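    The point can be sketched with a toy add-one-smoothed bigram model (the corpus and example sentences here are made up for illustration): a false sentence assembled from frequent word pairs scores higher than a true sentence assembled from pairs the model never saw, because the model only measures fluency, not truth.

    ```python
    # Toy sketch: a language model scores how probable a word sequence is,
    # not whether it is true. Corpus and sentences are invented for the demo.
    import math
    from collections import Counter

    corpus = (
        "the sun rises in the east . "
        "the sun is a star . "
        "the moon orbits the earth . "
        "the sun rises in the morning ."
    ).split()

    bigrams = Counter(zip(corpus, corpus[1:]))
    unigrams = Counter(corpus)
    vocab_size = len(unigrams)

    def log_prob(sentence):
        """Add-one-smoothed bigram log-probability of a sentence."""
        words = sentence.split()
        return sum(
            math.log((bigrams[(prev, cur)] + 1) / (unigrams[prev] + vocab_size))
            for prev, cur in zip(words, words[1:])
        )

    # False in reality, but built from bigrams that are frequent in the corpus:
    fluent_but_false = "the sun rises in the west"
    # True in reality, but built from bigrams the corpus never contained:
    true_but_unusual = "the earth orbits a hot star"

    # The model prefers the fluent falsehood over the unusual truth.
    assert log_prob(fluent_but_false) > log_prob(true_but_unusual)
    ```

    Real LLMs are vastly more sophisticated, but the scoring target is the same kind of quantity: probability of text, not probability of truth.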



  • This is completely missing the point, which is that qualia are only experienced by the conscious mind. They cannot be measured by anything other than the mind of the person experiencing them.

    Measuring that the brain activity is the same is not sufficient to settle this unanswerable philosophical question. You would also have to prove that different minds have the same experience while exhibiting the same neural activity - a problem which reduces to the same question: is my experience of blue the same as yours?