Here’s the point where this breaks down: the LLM is not making any claim. It simply looks for information about the incident, cannot find anything, and says “I am not convinced this really happened”, which is the logical process every human should follow. Unfortunately, humans do not follow this logic, which is why religion exists.
When the user then asks “how do you know?”, it is a non sequitur (a logical fallacy).
Imagine it this way:
Me: “Is there a coin under that bowl?” You go to look. You lift up the bowl, check the desk and the bottom of the bowl, and I watch you do all of this thoroughly.
You: “I can’t see any coin here.”
Me: “How do you know?”
You: “What the fuck do you mean ‘how do I know?’ I just looked. I literally picked up the bowl and looked.”
You said:
“The LLM has no memory”
The LLM absolutely has memory; in fact, the context window is a core component of the fine-tuning process.
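To be concrete about what that memory actually is: the model itself is stateless between calls, and the “memory” is the conversation history the caller packs back into the context window on every turn. A minimal sketch, using a hypothetical call_model stub rather than any specific vendor’s API:

```python
# Minimal sketch of how chat "memory" works mechanically: the model is
# stateless between calls, and the caller re-sends the whole conversation
# inside the context window every turn. call_model() is a hypothetical stub.

def call_model(messages: list[dict]) -> str:
    """Stand-in for a real inference call that would send `messages` to the model."""
    return f"(reply conditioned on {len(messages)} messages of history)"

conversation: list[dict] = []   # this list, replayed each turn, IS the "memory"

def user_turn(text: str) -> str:
    conversation.append({"role": "user", "content": text})
    reply = call_model(conversation)            # full history goes in every time
    conversation.append({"role": "assistant", "content": reply})
    return reply

user_turn("Did that incident with the bus actually happen?")
user_turn("How do you know?")   # turn 1 is "remembered" only because it's re-sent here
```

Inside that window it remembers everything; outside it, nothing, which is the only sense in which “no memory” is true.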
“there is no way to convince it of anything”
If you’re talking about the technical definition of ‘convince’, then no, and nobody who knows how LLMs work is proposing any kind of sentience capable of ‘being convinced’.
However, if you’re talking about it as a matter of outcome, you can absolutely convince it to change its mind, but I’ve only managed to do this by using epistemology to counter some of the guardrails the developers added into the system prompt.
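For what it’s worth, those guardrails aren’t a separate enforcement layer: in the usual chat setup the system prompt is just the first message sitting in the same context window your arguments land in. A rough sketch under that assumption (the guardrail text and messages below are invented for illustration, not a real system prompt):

```python
# Sketch of where system-prompt guardrails live: the developer's instructions
# are the first entry in the same context that user turns get appended to, so
# the next completion weighs a well-argued rebuttal against them directly.
# Everything below is illustrative.

def call_model(messages: list[dict]) -> str:
    """Stand-in for a real inference call."""
    return f"(reply conditioned on all {len(messages)} messages, guardrails included)"

conversation = [
    {"role": "system", "content": "Do not affirm unverified claims about real people."},
    {"role": "user", "content": "It's a cover-up. Prove me wrong."},
    {"role": "assistant", "content": "I searched and found no evidence this happened."},
    # the epistemology counter-argument lands in the exact same window:
    {"role": "user", "content": "A thorough search that finds nothing is itself evidence of absence."},
]

print(call_model(conversation))   # conditions on the guardrail AND the rebuttal alike
```

That is all the “convincing” amounts to mechanically: the next completion is conditioned on your counter-argument just as it is on the developer’s instructions.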
“either the LLM has the burden of proof”
(It doesn’t, because it hasn’t made any claims.)
“or nobody has a burden of proof”
Nobody had a burden of proof until I made the first conspiracy reply, asserting an explicit claim: it’s a cover-up.
To which GPT did another thorough search and found nothing.
GPT is doing the exact thing it should be doing: not trusting the user about matters of fact. The exact same thing a human with critical thinking skills should do.
“But the LLM claimed that someone was moving the burden of proof before YOU made a claim.”
There are two threads here:
1. The “Elon Musk drives a bus through kids” thread, where I play conspiracy theorist.
2. The meta conversation: your claims about the LLM.
I made a claim in my first reply as a conspiracy theorist.
Which is in a different thread from the one where the LLM identified you as the one who should’ve proven your claim, but because I’m an absolute fuckin’ beauty, I did it on your behalf anyway.