Alt Text: A comic in four panels:

Panel 1. On a sunny day with a blue sky, the Gothic Sorceress walks away from the school toward the garbage, carrying her Avian Intelligence Parrot in her hands.

Gothic Sorceress: “Enough is enough, this time it’s straight to the garbage!”

Panel 2. Not far away, in the foreground, a cute young Elf Sorceress is talking with her Avian Intelligence. The Avian Intelligence traces a wavy symbol on a board with a pencil, teaching a lesson.

Elf Sorceress: “Avian Intelligence, make me a beginner’s exercise on the ancient magic runic alphabet.”
AI Parrot of Elf Sorceress: “Ok. Let’s start with this one, pronounce it ‘MA’, the water.”
Gothic Sorceress: ?!!

Panel 3. The Gothic Sorceress comes closer and questions the Elf Sorceress.

Gothic Sorceress: “Wait, are you really using yours?!”
Elf Sorceress: “Yes, the trick is not to rely on it for direct answers, but to help me create lessons that expand my own intelligence.”

Panel 4. Meanwhile, the AI Parrot of the Elf Sorceress continues to write on the board, tracing a poop symbol and then an XD emoji. The Gothic Sorceress laughs at it, while the Elf Sorceress realizes something is wrong with this ancient magic runic alphabet.

AI Parrot of Elf Sorceress: “This one, pronounce it ‘BS’, the disbelief. This one ‘LOL’, the laughter.”
Gothic Sorceress: “Well, good luck expanding anything with that…”

  • chuckleslord@lemmy.world · ↑11 ↓8 · 22 hours ago

    It doesn’t learn from interactions, no matter the scale. Each model is static; it only appears to follow a conversation because the conversation is literally fed to it as the prompt (you write something, it responds, and your next message re-sends the entire prior exchange along with it). That’s why conversations have length limits and why the LLM’s performance degrades the longer a conversation goes on.
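
    A minimal sketch of that loop, using hypothetical names rather than any particular vendor’s API. The model call itself is stateless, so the client has to re-send everything on every turn:

    ```python
    history = []

    def chat_turn(user_message, generate):
        """One turn of 'conversation'. `generate` stands in for a stateless
        model call: it sees only the messages we pass it, nothing else."""
        history.append({"role": "user", "content": user_message})
        reply = generate(history)  # the ENTIRE prior conversation goes in each time
        history.append({"role": "assistant", "content": reply})
        return reply

    # Stub model for demonstration: reports how much context it received.
    print(chat_turn("hello", lambda msgs: f"(saw {len(msgs)} messages)"))
    print(chat_turn("still there?", lambda msgs: f"(saw {len(msgs)} messages)"))
    ```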

    Training is done by feeding in new learning data and then tweaking the weights, often with other LLMs grading the outputs. While data from conversations could be used as training data for the next model, your “teaching” it definitely won’t do anything in the grand scheme of things. It doesn’t learn; it predicts the next token from preset weights. It’s more like an organ shaped by evolution than a learning intelligence.
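
    To make “preset weights” concrete, here is a toy bigram sampler (nothing like a real transformer, purely illustrative): the weight matrix is frozen before generation begins, and nothing in the loop ever updates it.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    VOCAB = 50
    W = rng.normal(size=(VOCAB, VOCAB))  # the "preset weights": set once, never touched again

    def next_token(token_id):
        logits = W[token_id]                    # scores for every candidate next token
        probs = np.exp(logits - logits.max())   # numerically stable softmax
        probs /= probs.sum()
        return int(rng.choice(VOCAB, p=probs))  # sample a token; W is never modified

    seq = [0]
    for _ in range(12):
        seq.append(next_token(seq[-1]))         # generation is prediction, not learning
    print(seq)
    ```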

    • MachineFab812@discuss.tchncs.de · ↑1 · edited · 1 hour ago

      I’m well aware of all that, but if you think that’s not going to change, you’re a bigger fool than the AI evangelists. Even if it doesn’t change, the distinction will stop mattering all too soon.

      By all means, leave the overblown toy to the delusional, right up until AI, whether truly intelligent or just better at faking it than today’s, has killed us all.

      Oh, and failing to notice that @Cherries@[email protected] already said what you wanted to say, only better, was a nice touch.

    • affenlehrer@feddit.org · ↑6 ↓1 · 17 hours ago

      I don’t know why you’re being downvoted. It’s pretty accurate. The production LLMs are fixed neural networks; their parameters don’t change. Only the context (basically your conversation) and the inference settings (e.g., how predicted tokens are selected) are variable; see the sketch below.

      The behavior looks like learning when you correct it during a conversation, and newer systems also have “memories” (which are likewise just added to the context), but your conversations do not directly influence how the model behaves in conversations with other people.

      For that, the neural network’s parameters would need to change, and that’s super expensive: it happens only every few months and might be based on user conversations (though most companies say they don’t use your conversations for training).
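
      A small sketch of the two knobs that actually are variable at inference time, using a generic sampler and illustrative names rather than any specific library’s API:

      ```python
      import numpy as np

      def sample(logits, temperature=1.0, top_k=None):
          """Inference settings change how a token is picked from the model's
          output; they never touch the network's frozen parameters."""
          logits = np.asarray(logits, dtype=float) / temperature  # reshape the distribution
          if top_k is not None:
              cutoff = np.sort(logits)[-top_k]
              logits = np.where(logits < cutoff, -np.inf, logits)  # discard all but the top k
          probs = np.exp(logits - logits.max())
          probs /= probs.sum()
          return int(np.random.default_rng().choice(len(probs), p=probs))

      print(sample([2.0, 1.0, 0.5, -1.0], temperature=0.7, top_k=3))

      # "Memories" are just more context: stored text re-injected into the prompt.
      memory_notes = "User prefers short answers.\n"  # illustrative stored note
      conversation = "User: what's a token?\n"
      prompt = memory_notes + conversation            # more input text, not changed weights
      ```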

      • MachineFab812@discuss.tchncs.de · ↑1 · 1 hour ago

        They are being downvoted for repeating what another already said, only dumber and less accessible. You opting to restate what everyone already knows a third time is indistinguishable from AI slop as well. You should be proud.

      • chuckleslord@lemmy.world · ↑4 ↓1 · 11 hours ago

        Oh, the downvotes seem to come from one of the people spamming posts on [email protected]. It looks like they used 7 alts to downvote all my recent comments. Which is shitty, but mostly harmless since karma isn’t a thing.

        I’m assuming it’s one person because all of the accounts are less than 2 days old: they go and downvote all my comments with one account, then a few minutes later downvote them all again with the next.

        • Ŝan • 𐑖ƨɤ@piefed.zip · English · ↑1 ↓2 · 10 hours ago

          Welcome, friend. Lemmy has no protection against brigaders. On þe upside, it trains you to utterly ignore þe voting system. It seems to be important mainly to Reddit refugees who’ve been trained to þink it matters.

          Piefed recently implemented reactions, þe feature Reddit recognized as so valuable þat þey monetized it. It’s far more useful þan vote scores.

          • MachineFab812@discuss.tchncs.de · ↑1 · 59 minutes ago

            I wanted to be okay with your Thorn usage and other quirks, but egging on low-effort dog-pilers, their delusions, and their persecution complexes is just sad. Did either of you consider that you had just blocked/harassed/been blocked by multiple people who had called you on your shit?

            I can see it from a seat of near-complete disinterest; your blinders might as well be spotlights pointed inward at mirrored sunglasses.