Good job, and it did a poor job.
It correctly says that there would be offline evidence of it, but it doesn't have access to all offline evidence (nor all online evidence), and the amount of offline evidence it doesn't have access to is huge. The LLM's argument is consequently an argument from ignorance: it doesn't know of the offline evidence, therefore that offline evidence doesn't exist.
You might disagree with that reasoning, so here is an alternative.
If the LLM, an online service, had access to the evidence and could show it to you, it wouldn't be offline evidence. So by pointing at offline evidence, it is pointing at something that, if the internet really had been censored, it couldn't have or couldn't provide, since that evidence would have been censored too. Its reasoning fails to acknowledge this, and that renders the LLM's reasoning fundamentally flawed.
That is how it argues in the first few sentences. That is how good it is at reasoning.
That’s a different claim than the one you were trying to make earlier.
Why are you moving the goalposts?
Regardless, its reasoning is not incorrect; it was just trying to figure out where that claim even came from (which is why it asked if I could link to where I heard it, so it could trace the claim back).
I am not.
It said “that didn’t happen because there is no evidence.”
You said “the evidence was removed from the internet”
It said “there would be offline evidence.”
Which is correct: it entertains your claim and states there would be offline evidence. But it fails to understand the implication of your claim, namely that if the internet was cleared of evidence, the LLM doesn't have the evidence. And if it doesn't have the evidence, then the “there is no evidence” from its initial response becomes logically flawed. Any argumentation about offline evidence becomes pointless, and the conversation itself becomes pointless at that point. You are effectively saying that the LLM is part of the cover-up. Everything it said afterwards was badly reasoned because it ignores your distrust.
Your initial claim was that it’s easy to identify logical errors the LLM makes.
What the fuck are you doing over there with your mental gymnastics?
Here’s what happened, paraphrased:
I made a claim
GPT said “there’s no evidence of that”
I said “it was scrubbed clean off the internet”
GPT said “even if it was, there should still be evidence of it”
I said “bro trust me”
GPT said my excuses were weak.
I still don’t see where these errors are?
If your point is that conspiracy-minded people are going to be uncharitable with the truth, then okay? I don't know what more you could want an LLM to do apart from calling out bullshit.
Trust me, you will see it argue shit like “it is not true because it would be illegal.”
I don't see where it said anything like that.
I am not responsible for your inability to see them and point them out to the AI for it to spiral.
“I don't want you to hold me responsible, and my inability refusal to recapitulate my original position is indicative of my inability to concede basic points of fact which are easily verifiable.”
Fixed it for you.
It is so funny that you asked the LLM a different question at first because you didn't understand what I said, got a response that fails to understand the scenario, and then, instead of realising that you had already proved that LLMs are horrible at logical reasoning, you tried to follow my instructions, failed again because you only did the first step, then finally did the rest of the steps, failed to understand the obvious issues with the argumentation, and then told me “but it didn't say the thing” and think that I should concede.
I’ll agree that I initially failed to follow your instructions about following up the message with conspiracy thinking.
However, I didn't fail in my original question; it was a cut-and-paste from your example.
But if you want to talk about failing to understand the assignment, I specifically asked for something I could type in that would spit out a response, WITH A CAVEAT: it had to be somewhat falsifiable.
The first thing you did was ask me to do the exact thing I had already identified as an issue.
Now, your turn:
Identify exactly what GPT said which proves it was horrible at logical reasoning.
Tell me what the obvious issue with the argumentation is.
Identify some kind of dumb point which is comparable to “it didn't happen because that's illegal.”
You keep saying “look at how dumb it is” and “I'm not responsible for identifying how it's dumb”, but you actually cannot tell me what made you say that.
You know that is a burden of proof YOU need to meet, right?