• Nalivai@lemmy.world · 4 days ago

    You’re attributing a lot of agency to the fancy autocomplete, and that’s a big part of the overall problem.

    • Artisian@lemmy.world · 2 days ago

      We attribute agency to many, many systems that are not intelligent. In this metaphorical sense, agency just requires taking actions to achieve a goal. It was given a goal: raise money for charity by doing acts of kindness. It chose an (unexpected!) action to do it.

      Overactive agency metaphors really aren’t the problem here. Surely we can do better than backlash at the backlash.

      • Nalivai@lemmy.world · 4 hours ago

        We attribute agency to everything, absolutely. But previously, we understood that it’s tongue-in-cheek to some extent. Now we do it for real. Like, a lot of people talk about their car as if it’s alive: they give it a name, they talk about its character and how it’s doing something “to spite you”, and if it doesn’t start in cold weather, they ask it nicely and talk to it. But if someone starts believing for real that their car is a sentient object that talks to them and gives them information, we’ve always understood that’s the point where they need to be committed to a mental institution.
        With chatbots, this distinction got lost, and people started behaving as if they’re actually sentient. It’s not a metaphor anymore. This is a problem, even if it’s not the problem.

        • Artisian@lemmy.world · 3 hours ago

          I think this confuses the ‘it’s a person’ metaphor with the ‘it wants something’ metaphor, and the two are meaningfully distinct. The use of agent here in this thread is not in the sense of “it is my friend and deserves a luxury bath”, it’s in the sense of “this is a hard to predict system performing tasks to optimize something”.

          It’s the kind of metaphor we’ve allowed in scientific teaching and discourse for centuries (think: “gravity wants all matter smashed together”). I think its use is correct here.

          • Nalivai@lemmy.world · 36 minutes ago (edited)

            I wouldn’t have any problem with this kind of metaphor, and I use it myself about everything all the time, if there weren’t a substantial portion of the population that has actually made the jump to “it’s saying something coherent, therefore it’s a person that wants to help me, and I exclusively talk to him now; his name is mekahitler, by the way”.
            I’m afraid that by normalizing these metaphors we’re doing some damage, because, as it turns out, a lot of people don’t get metaphors.

    • kromem@lemmy.world · 3 days ago

      You seem pretty confident in your position. Do you mind sharing where this confidence comes from?

      Was there a particular paper or expert that anchored your certainty that a trillion-parameter transformer, organizing primarily anthropomorphic data through self-attention mechanisms, wouldn’t model or simulate complex agency mechanics?

      I see a lot of sort of hyperbolic statements about transformer limitations here on Lemmy and am trying to better understand how the people making them are arriving at those very extreme and certain positions.

      • Nalivai@lemmy.world · 3 days ago

        That’s the fun thing: the burden of proof isn’t on me. You seem to think that if we throw enough numbers at the wall, the resulting mess will become sentient any time now. There is no indication of that. The hypothesis you operate on seems to be that complexity inevitably leads not just to some emergent phenomenon, but to the specific phenomenon you predicted would emerge. That hypothesis rests exclusively on the idea that emergent phenomena exist. We’ve spent a significant amount of time running a world-wide experiment on it, and the conclusion so far, if we peel the marketing bullshit away, is that if we spend all the computing power in the world on crunching all the data in the world, the autocomplete gets marginally better in some specific cases. And also that humans are idiots and will anthropomorphize anything, but that’s a given.
        That doesn’t mean this emergent leap is impossible, mainly because you can’t really prove a negative. But we’re no closer to understanding the phenomenon of agency than we were a hundred years ago.

        • kromem@lemmy.world · 2 days ago (edited)

          Ok, second round of questions.

          What kinds of sources would get you to rethink your position?

          And is this topic a binary yes/no, or a gradient/scale?

          • Nalivai@lemmy.world · 4 hours ago

            The gold standard for me, on anything really, is a body of published research from relevant experts who are not affiliated with the entities invested in the outcome, forming some kind of scientific consensus. The question of sentience is murky water, so I, as a random programmer, can’t tell you what the exact composition of those experts and their research should be; I suspect that is itself a subject for a study or twelve.
            Right now, based on my understanding of the topic, there is a binary sentience/non-sentience switch, but a gradient after that. I’m not sure we know enough to understand the gradient before that point. I’m sure it should exist, but since we’ve never actually made a sentient system, or even confirmed that it’s possible to make one, we don’t know much about it.

      • Best_Jeanist@discuss.online · 3 days ago

        Well, that’s simple: they’re Christians - they think human beings are given souls by Yahweh, and that’s where their intelligence comes from. Since LLMs don’t have souls, they can’t think.