The logic is implicit in the statistical model of relationships between words, built by ingesting training material. Essentially, the logic comes from source material written by real human beings, which is why we even talk about hallucinations: most of what is output is actually correct. If it was mostly hallucinations, nobody would use it for anything.
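A toy illustration of what "a statistical model of relationships between words" means: even a crude bigram model, built from nothing but counts over human-written text, picks up regularities without any explicit rules. The corpus and tokenization below are hypothetical stand-ins, not how a real language model works, just the counting idea at its simplest.

```python
from collections import Counter, defaultdict

# Hypothetical toy corpus; real models ingest vastly more text.
corpus = "the cat sat on the mat . the cat ate the fish .".split()

# Count which word follows which across the training material.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_likely_next(word):
    """Return the continuation seen most often after `word`."""
    return follows[word].most_common(1)[0][0]

print(most_likely_next("the"))  # "cat" follows "the" more often than "mat" or "fish"
```

The point of the sketch is that the "knowledge" is nothing but statistics over what humans wrote, which is exactly why output is usually correct when the training material is.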
Most of what you want to know is old information, some of it thousands of years old and still valid. If you need judgement based on current information, you inject current data.
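"Injecting current data" can be sketched very simply: fresh facts are pasted into the prompt alongside the question, so the model reasons over them instead of relying on stale training knowledge. `fetch_current_facts` and the prompt layout below are hypothetical; a real system would pull from a database, API, or search index.

```python
from datetime import date

def fetch_current_facts():
    # Hypothetical stand-in for a live data source.
    return ["EUR/USD closed at 1.09 yesterday."]

def build_prompt(question: str) -> str:
    """Prepend today's date and current facts to the user's question."""
    facts = "\n".join(f"- {f}" for f in fetch_current_facts())
    return (
        f"Today is {date.today().isoformat()}.\n"
        f"Current facts:\n{facts}\n\n"
        f"Question: {question}"
    )

print(build_prompt("Is the euro above parity with the dollar?"))
```

The model's general reasoning comes from old training data; only the facts it reasons about are refreshed at query time.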
No, you can’t use logic based on old information…
If the relationship between variables changes, a language model can’t grasp that, because it doesn’t understand anything.
If your reasoning relies on x being true, then when x isn’t true the AI will still say it’s fine, because it doesn’t understand the context.
Just like it doesn’t understand instructions not to do something.
Well, people use it and don’t care about hallucinations.