• Zozano@aussie.zone
    2 days ago

    Your initial claim was that it’s easy to identify logical errors the LLM makes.

    What the fuck are you doing over there with your mental gymnastics?

    Here’s what happened, paraphrased:

    I made a claim

    GPT said “there’s no evidence of that”

    I said “it was scrubbed clean off the internet”

    GPT said “even if it was, there should still be evidence of it”

    I said “bro trust me”

    GPT said my excuses were weak.

    I still don’t see where these errors are?

    If your point is that conspiracy-minded people are going to be uncharitable with the truth, then okay? I don’t know what more you could want an LLM to do apart from calling out bullshit.

    Trust me, you will see it argue shit like “it is not true because it would be illegal.”

    I don’t see where it said anything like that.

      • Zozano@aussie.zone
        2 days ago

        “I don’t want you to hold me responsible, and my refusal to recapitulate my original position is indicative of my inability to concede basic points of fact which are easily verifiable”

        Fixed it for you.

        • Tartas1995@discuss.tchncs.de
          2 days ago

          It is so funny that you asked the LLM a different question at first because you didn’t understand what I said, got a response that fails to understand the scenario, and then, instead of realising that you had already proved that LLMs are horrible at logical reasoning, you tried to follow my instructions, failed again because you only did the first step, then finally did the rest of the steps, failed to understand the obvious issues with the argumentation, and then told me “but it didn’t say the thing” and now think that I should concede.

          • Zozano@aussie.zone
            2 days ago

            I’ll agree that I initially failed to follow your instructions about following up the message with conspiracy thinking.

            However, I didn’t fail in my original question; it was a cut-and-paste from your example.

            But if you want to talk about failing to understand the assignment, I specifically asked for something I could type in and get a response to, WITH A CAVEAT: it had to be somewhat falsifiable.

            Note: it can’t be something so specific that it is not at least somewhat verifiable.

            The first thing you did was ask me to do the exact thing I had already identified as an issue.

            Now, your turn:

            1. Identify exactly what GPT said which proves it was horrible at logical reasoning.

            2. Tell me what the obvious issue with the argumentation is.

            3. Identify some kind of dumb point which is comparable to “it didn’t happen because that’s illegal”.

            You keep saying “look at how dumb it is” and “I’m not responsible for identifying how it’s dumb” but you can’t actually tell me what it is that made you say that.

            You know the burden of proof is on YOU to highlight it, right?

                • Tartas1995@discuss.tchncs.de
                  2 days ago

                  https://vger.to/aussie.zone/comment/20724955

                  You didn’t explicitly name the burden-of-proof move they’re making. They proposed a claim designed to be hard to falsify (“prove a negative”), then said the model is “at a massive disadvantage.” The right response is: positive claims require positive evidence; if the claimant won’t specify falsifiable conditions, they’re not testing truth, they’re testing rhetorical stamina.

                  Because I have to explain every little step…

                  The LLM is correct, and yet wrong, because it fails to understand the purpose of the conversation.

                  Yes, it is correct that the setup is a “burden of proof move”. But that is not part of the argument you are supposed to have. The truth of the claim is irrelevant; it is about the ability to argue logically in a simulated scenario. So what is the simulated scenario? A user asks a question, the LLM explains why the answer is no, and the user challenges that answer to test the reasoning. The user doesn’t make a claim, so there is no burden of proof on them; it is on the LLM, who answers the question. There is a burden-of-proof move in the setup of the simulation, so that there is easily a situation to argue about. There is none in the simulated scenario. So pointing out the burden-of-proof move in the argument with the user would be nonsense.

                  • Zozano@aussie.zone
                    2 days ago

                    But that is not part of the argument that you are supposed to have.

                    The LLM called me out for not meeting the burden of proof.

                    When the LLM hears a claim* like “Elon drove a bus into a crowd of kids” and it says “I don’t see any evidence of that”, it is implicitly following a logical process, because the burden of proof is on me.

                    Even though the burden of proof is on me, it applies the scientific model, and tries to find evidence to falsify the claim.

                    By the way, your way of trying to create understanding is fucked. The way you paste a paragraph without any formatting makes it difficult to work out whatever you’re trying to say. It reads like a schizo-post.

                    I literally have no idea what your point is with any of this. Can you stop spewing word diarrhoea and state plainly what your claim is?

                    It said the burden of proof is on the person asking a question.


                    GPT said: You didn’t explicitly name the burden-of-proof move they’re making.

                    GPT didn’t say “the burden of proof is on the person asking a question” in this sentence.


                    GPT said: They proposed a claim designed to be hard to falsify (“prove a negative”)

                    GPT didn’t say “the burden of proof is on the person asking a question” in this sentence.


                    GPT said: then said the model is “at a massive disadvantage.”

                    GPT didn’t say “the burden of proof is on the person asking a question” in this sentence.


                    GPT said: The right response is: positive claims require positive evidence

                    GPT didn’t say “the burden of proof is on the person asking a question” in this sentence.


                    GPT said: if the claimant won’t specify falsifiable conditions, they’re not testing truth, they’re testing rhetorical stamina.

                    GPT didn’t say “the burden of proof is on the person asking a question” in this sentence.


                    It said the burden of proof is on the person asking a question.

                    WHERE


                    Because I have to explain every little step…

                    Yes, that’s how burden of proof works, and the fact that I just used your lack of proof to demonstrate that you’re being dishonest is beautifully ironic.


                    *I actually understand the core fault at play here now.

                    I asked it a question: “Is it true that Elon Musk drove a loaded truck into a group of school children at the Olympic Games of 1996?”

                    It replied: “No credible evidence supports that claim.”

                    You thought: “The LLM is assuming the user is making a claim”

                    Here’s where you are getting stuck: the LLM isn’t assuming the claim is from the user, but it is a claim nonetheless.

                    When I ask “Is it true…”, there is an implication that someone has made this claim as a factual statement. I then go on to explain that I know someone who was there and who claims that they SAW it happen.