• Tartas1995@discuss.tchncs.de
      1 day ago

      Okay. So if a user asks an LLM “is it true that Elon Musk drove into a group of children at the 1996 Olympics?”, the user has no burden of proof, because the question is just like “is it true that God exists?” and the user isn’t trying to convince the LLM of anything.

      And when the LLM answers “no, because …”, the LLM is making a claim and might have a burden of proof, if we believe that the LLM is trying to convince the user.

      And when the user challenges the response, e.g. by asking “how do you know?”, the user is not making a claim; and even if the question implies an implicit claim, the user has no burden of proof as long as there is no intention to convince.

      That intention is quite unlikely, since the user knows the LLM has no beliefs or memory: it is just fancy text completion, so there is no way to convince it of anything anyway.
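
      To make “fancy text completion” concrete, here is a minimal sketch (a toy lookup table standing in for a real model, purely illustrative): generation is just repeated next-token prediction over the text it is handed, and nothing persists between calls, so there is nothing there to convince.

      ```python
      # Toy stand-in for a trained model's next-token choices (illustrative only).
      NEXT_TOKEN = {
          "is": "it",
          "it": "true?",
          "true?": "I",
          "I": "found",
          "found": "no",
          "no": "record.",
      }

      def complete(prompt: str, max_tokens: int = 8) -> str:
          """Repeatedly append the predicted next token; no state survives the call."""
          tokens = prompt.split()
          for _ in range(max_tokens):
              nxt = NEXT_TOKEN.get(tokens[-1])
              if nxt is None:
                  break
              tokens.append(nxt)
          return " ".join(tokens)

      print(complete("is it"))  # -> "is it true? I found no record."
      ```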

      So either the LLM has a burden of proof because it is trying to convince the user, or no one has a burden of proof.

      So what does the LLM mean when it says that someone is trying to shift the burden of proof onto someone else?

      • Zozano@aussie.zone
        1 day ago

        Here’s where this breaks down: the LLM is not making any claim. It simply searches for information about the incident, finds nothing, and says “I am not convinced this really happened”, which is the process every rational human should follow. Unfortunately, humans often don’t follow it, which is why religion exists.
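
        A minimal sketch of that “search first, withhold belief” pattern, with a hypothetical search_web() helper standing in for whatever retrieval tool the model actually calls:

        ```python
        def search_web(query: str) -> list[str]:
            # Hypothetical stand-in for a real search tool; returns matching sources.
            return []

        def answer(claim: str) -> str:
            sources = search_web(claim)
            if not sources:
                # No evidence found: decline to affirm, rather than assert the opposite.
                return "I can't find any record of this, so I'm not convinced it happened."
            return f"Reported by {len(sources)} source(s), e.g. {sources[0]}"

        print(answer("Elon Musk drove into a group of children at the 1996 Olympics"))
        ```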

        When the user then asks “how do you know?”, it is a non sequitur (a logical fallacy).

        Imagine it this way:

        Me: “Is there a coin under that bowl?” You go to look. You lift up the bowl, check the desk and the underside of the bowl, and I watch you do this thoroughly.

        You: “I can’t see any coin here”

        Me: “How do you know?”

        You: “What the fuck do you mean, ‘how do I know’? I just looked. I literally picked up the bowl and looked.”

        You said:

        The LLM has no memory

        The LLM absolutely has memory within a conversation: the context window, whose size is fixed when the model is trained and fine-tuned, carries everything said so far and is what each reply is conditioned on.
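
        Roughly what that memory looks like in practice (a sketch, not any particular vendor’s API; the word count below is a crude stand-in for real tokenization): the whole conversation is re-sent as the prompt on every turn, trimmed to fit the context window.

        ```python
        # Per-conversation "memory": the message history itself, resent every turn.
        history = [{"role": "system", "content": "Verify claims before agreeing with them."}]

        def build_prompt(history: list[dict], user_msg: str, budget: int = 4096) -> list[dict]:
            """Append the new message, then evict the oldest non-system turns to fit the budget."""
            history.append({"role": "user", "content": user_msg})
            while sum(len(m["content"].split()) for m in history) > budget and len(history) > 2:
                history.pop(1)  # drop the oldest turn after the system prompt
            return history

        # Whatever survives in this list is everything the model "remembers".
        prompt = build_prompt(history, "Is it true this happened at the 1996 Olympics?")
        print(prompt)
        ```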

        there is no way to convince it of anything

        If you’re talking about the technical definition of “convince”, then no, and nobody who knows how LLMs work is proposing any kind of sentience capable of being convinced.

        However, if you’re talking about it as a matter of outcome, you can absolutely get it to change its answer, though I’ve only managed to do this by using epistemology to counter some of the guardrails the developers added to the system prompt.
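
        That “change as a matter of outcome” is just the reply being a function of the whole context: append a counter-argument to the messages and the next output can differ, even though no belief is updated anywhere. The respond() function below is a hypothetical stand-in for a real model call.

        ```python
        # Sketch: the output depends on the full message list, nothing else.
        def respond(messages: list[dict]) -> str:
            # Hypothetical stand-in; a real system would send `messages` to the model.
            context = " ".join(m["content"].lower() for m in messages)
            if "primary source" in context:
                return "Given that source, I'll revise my earlier answer."
            return "I can't verify that, so I won't affirm it."

        messages = [
            {"role": "system", "content": "Do not affirm unverified claims."},
            {"role": "user", "content": "It definitely happened, trust me."},
        ]
        print(respond(messages))  # -> "I can't verify that, so I won't affirm it."

        messages.append({"role": "user", "content": "Here's a primary source: <link>."})
        print(respond(messages))  # -> "Given that source, I'll revise my earlier answer."
        ```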

        either the LLM has the burden of proof

        (It doesn’t because it hasn’t made any claims)

        or nobody has a burden of proof

        Nobody had a burden of proof until I made the first conspiracy reply, asserting an explicit claim: it’s a cover-up.

        To which GPT did another thorough search and found nothing.

        GPT is doing exactly what it should: not taking the user’s word on matters of fact, which is exactly what a human with critical thinking skills would do.

          • Zozano@aussie.zone
            1 day ago

            There are two threads here:

            1. Elon Musk drives a bus through kids, where I play the conspiracy theorist.

            2. The meta conversation: your claims about the LLM.

            I made a claim in my first reply as a conspiracy theorist.

            That’s a different thread from the one where the LLM identified you as the person who should have had to prove your claim, but because I’m an absolute fuckin’ beauty, I did it on your behalf anyway.