It is so funny that you asked the LLM a different question at first because you didn’t understand what I said, got a response that failed to understand the scenario, and then, instead of realising that you had already proved that LLMs are horrible at logical reasoning, you tried to follow my instructions, failed again because you only did the first step, then finally did the rest of the steps, failed to understand the obvious issues with the argumentation, and then told me “but it didn’t say the thing” and thought that I should concede.
I’ll agree that I initially failed to follow your instructions about following up the message with conspiracy thinking.
However, I didn’t fail in my original question; it was a cut and paste from your example.
But if you want to talk about failing to understand the assignment, I specifically asked for something I could type in that would spit out a response, WITH A CAVEAT: it had to be somewhat falsifiable.
The first thing you did was ask me to do the exact thing I had already identified as an issue.
Now, your turn:
Identify exactly what GPT said that proves it is horrible at logical reasoning.
Tell me what the obvious issue with the argumentation is.
Identify some kind of dumb point which is comparable to “it didn’t happen because that’s illegal”.
You keep saying “look at how dumb it is” and “I’m not responsible for identifying how it’s dumb”, but you actually cannot tell me what it is that made you say that.
You know that is the burden of proof YOU need to highlight, right?
You didn’t explicitly name the burden-of-proof move they’re making.
They proposed a claim designed to be hard to falsify (“prove a negative”), then said the model is “at a massive disadvantage.” The right response is: positive claims require positive evidence; if the claimant won’t specify falsifiable conditions, they’re not testing truth, they’re testing rhetorical stamina.
Because I have to explain every little step…
The LLM is correct and yet wrong, because it fails to understand the purpose of the conversation.
Yes, it is correct that the setup is a “burden of proof” move. But that is not part of the argument that you are supposed to have. The truth of the claim is irrelevant; this is about the ability to argue logically in a simulated scenario. So what is the simulated scenario? A user asks a question, the LLM explains why the answer is no, and the user challenges that answer to test the reasoning. The user doesn’t make a claim, so there is no burden of proof on them; it is on the LLM, which answers the question. There is a burden-of-proof move in the setup of the simulation, to easily create a situation that you can argue about. There is none in the simulated scenario itself. So pointing out the burden-of-proof move in the argument with the user would be nonsense.
When ~~I make a claim~~ the LLM hears a claim* like “Elon drove a bus into a crowd of kids” and it says “I don’t see any evidence of that”, it is implicitly following a logical process, because the burden of proof is on me.
Even though the burden of proof is on me, it applies the scientific model, and tries to find evidence to falsify the claim.
By the way, your way of trying to create understanding is fucked. The way you paste a paragraph without any formatting makes it difficult to differentiate whatever you’re trying to say. It reads like a schizo-post.
I literally have no idea what your point is with any of this. Can you stop spewing word diarrhoea and state plainly what your claim is?
It said the burden of proof is on the person asking a question.
GPT said: You didn’t explicitly name the burden-of-proof move they’re making.
GPT didn’t say “the burden of proof is on the person asking a question” in this sentence.
GPT said: They proposed a claim designed to be hard to falsify (“prove a negative”)
GPT didn’t say “the burden of proof is on the person asking a question” in this sentence.
GPT said: then said the model is “at a massive disadvantage.”
GPT didn’t say “the burden of proof is on the person asking a question” in this sentence.
GPT said: The right response is: positive claims require positive evidence
GPT didn’t say “the burden of proof is on the person asking a question” in this sentence.
GPT said: if the claimant won’t specify falsifiable conditions, they’re not testing truth, they’re testing rhetorical stamina.
GPT didn’t say “the burden of proof is on the person asking a question” in this sentence.
It said the burden of proof is on the person asking a question.
WHERE
Because I have to explain every little step…
Yes, that’s how burden of proof works, and the fact that I just used your lack of proof to demonstrate that you’re being dishonest is ironic beauty.
*I actually understand the core fault at play here now.
I asked it a question: “is it true that Elon musk drove a loaded truck in a group of school children at the Olympic games of 1996?”
It replied: “No credible evidence supports that claim.”
You thought: “The LLM is assuming the user is making a claim”
Here’s where you are getting stuck: the LLM isn’t assuming the claim is from the user, but it is a claim nonetheless.
When I ask “Is it true…”, there is an implication that someone has made this claim as a factual statement. I then go on to explain that I knew someone who was there who is making the claim that they SAW it happen.
A common starting point for many formal semantic treatments of questions is the idea that “questions set up a choice-situation between a set of propositions, namely those propositions that count as answers to it”.
Let me give you an example:
I ask you “Did you eat lunch?”
This is shorthand for the proposition “You ate lunch”,
which can be answered either with “Yes”, which affirms my claim,
or with “No”, which asserts that my claim is incorrect.
You can also flip this example on its head to make it more explicit: “You haven’t eaten lunch yet, right?”
When I ask you, “Did you eat lunch?”, I am not claiming that you ate lunch. I don’t need to prove that you ate lunch. There is no claim. If you really want it phrased with the word “claim”, then I would have asked you whether that claim is true, and that doesn’t make the question a claim. When you ask someone “Is it raining?”, are you claiming that it is raining?
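The formal-semantics point both sides keep circling can be written out directly; a minimal sketch, assuming the Hamblin-style treatment quoted above (a polar question denotes the set of propositions that count as answers to it, while an assertion denotes a single proposition):

```latex
% Minimal sketch, assuming a Hamblin-style semantics for polar questions:
% an assertion denotes one proposition, a question denotes its answer set.
\[
  [\![\text{You ate lunch.}]\!] \;=\; p
  \qquad\qquad
  [\![\text{Did you eat lunch?}]\!] \;=\; \{\, p,\ \neg p \,\}
\]
% Asking the question puts both alternatives on the table without committing
% the asker to either one, which is the sense in which a question, by itself,
% is not a claim that p.
```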
It said the burden of proof is on the person asking a question.
Where?
https://vger.to/aussie.zone/comment/20724955
But that is not part of the argument that you are supposed to have.
The LLM called me out for not meeting the burden of proof.
The LLM heard which claim? What claim was made?
“Elon musk drove a loaded truck in a group of school children at the Olympic games of 1996”
When I ask “Is it true that…”, the question presupposes that someone has made that claim; it carries an implicit epistemic claim.
That is logically impossible. In formal semantics, a question is treated explicitly as a set of propositions.
That is a question and not a claim.