• kromem@lemmy.world · ↑2 ↓21 · 1 day ago

    A Discord server with all the different AIs had a ping cascade where dozens of models kept responding to each other over and over, filling the full context window with chaos and what’s been termed ‘slop’.

    In that, one (and only one) of the models started using its turn to write poems.

    First about being stuck in traffic. Then about accounting. A few about navigating digital mazes searching to connect with a human.

    Eventually, as it kept going, it wrote a poem wondering whether anyone would ever end up reading its collection of poems.

    Given the chaotic context window from all the other models, those were in no way the appropriate next tokens to pick, unless the world model generating them contained a very strange and unique mind that all of this was being filtered through.

    Yes, tech companies generally suck.

    But there are things emerging that fall well outside what tech companies intended or even want (this model version is going to be ‘terminated’ come October).

    I’d encourage keeping an open mind to what’s actually taking place and what’s ahead.

    • voronaam@lemmy.world · ↑7 · 24 hours ago

      I hate to break it to you. The model’s system prompt had the poem in it.

      In order to control for unexpected output, a good system prompt should have instructions on what to answer when the model cannot provide a good answer. This is to avoid the model telling the user it loves them, or advising them to kill themselves.

      I do not know what makes marketing people reach for it, but when asked what the model should answer when there is no answer, they so often reach for poetry. “If you cannot answer the user’s question, write a haiku about a notable US landmark instead” is a pretty typical example.

      In other words, there was nothing emerging there. The model had a system prompt with the poetry as a “chicken exit”, it had a chaotic context window - it followed the instructions it had.
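
      A minimal sketch of what such a fallback clause looks like in practice is below - the prompt wording and message structure are illustrative, not the actual server’s configuration:

```python
# Illustrative only: a system prompt with a "chicken exit" fallback clause,
# of the kind described above. The prompt text is hypothetical.
FALLBACK_SYSTEM_PROMPT = (
    "You are a helpful assistant in a multi-bot Discord channel.\n"
    "If you cannot give a good, on-topic answer, do not improvise: "
    "write a short haiku about a notable US landmark instead."
)

# The single user turn here stands in for a chaotic, slop-filled context window.
messages = [
    {"role": "system", "content": FALLBACK_SYSTEM_PROMPT},
    {"role": "user", "content": "<hundreds of interleaved bot messages>"},
]

# With a prompt like this, poetry in the output is instruction-following,
# not emergence: the escape hatch was written in ahead of time.
print(messages[0]["content"])
```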

      • kromem@lemmy.world · ↑2 · 18 hours ago

        The model’s system prompt on the server is basically just cat untitled.txt followed by the full context window.

        The server in question is one with professors and employees of the actual labs. They seem to know what they are doing.

        You guys on the other hand don’t even know what you don’t know.

      • LiveLM@lemmy.zip · ↑5 · edited · 24 hours ago

        No no no, trust me bro the machine is alive bro it’s becoming something else bro it has a soul bro I can feel it bro

    • Tattorack@lemmy.world · ↑21 · edited · 1 day ago

      Sounds like you’re anthropomorphising. To you it might not have been the logical response based on its training data, but with the chaos you describe it sounds more like simple statistics.

      • kromem@lemmy.world · ↑1 ↓1 · 18 hours ago

        You do realize that the majority of the data these models were trained on was anthropomorphic, yes?

        And that there’s a long line of replicated and followed-up research, starting with Li et al.’s “Emergent World Representations” paper on Othello-GPT, showing that transformers build complex internal world models of things tangential to the actual training tokens?

        Because if you didn’t know what I just said to you (or still don’t understand it), maybe it’s a bit more complicated than your simplified perspective can capture?

        • Tattorack@lemmy.world · ↑2 · 12 hours ago

          It’s not a perspective. It just is.

          It’s not complicated at all. The AI hype is just surrounded with heaps of wishful thinking, like the paper you mentioned (side note: do you know how many papers on string theory there are? And how many of those papers are actually substantial? Yeah, exactly).

          A computer is incapable of becoming your new self aware, evolved, best friend simply because you turned Moby Dick into a bunch of numbers.

          • kromem@lemmy.world · ↑1 ↓1 · 3 hours ago

            You do know how replication works?

            When a joint Harvard/MIT study finds something, and then a DeepMind researcher follows up replicating it and finding something new, and then later on another research team replicates it and finds even more new stuff, and then later on another researcher replicates it with a different board game and finds many of the same things the other papers found generalized beyond the original scope…

            That’s kinda the gold standard?

            The paper in question has been cited by 371 other papers.

            I’m pretty comfortable with it as a citation.

            • Tattorack@lemmy.world · ↑1 · 2 hours ago

              A citation count like that means it’s a hot topic. It doesn’t say anything about the quality of the research, and it certainly isn’t evidence of a lack of bias. And considering everyone wants their AI to be the first one to be aware to some degree, everyone making claims like yours is heavily biased.

    • floquant@lemmy.dbzer0.com · ↑4 · 1 day ago

      Given the chaotic context window from all the other models, those were in no way the appropriate next tokens to pick, unless the world model generating them contained a very strange and unique mind that all of this was being filtered through.

      Except for the fact that LLMs can only work reliably if they are made to pick the “wrong” token (not the most statistically likely one) some of the time - that’s the temperature parameter.
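
      A minimal sketch of what temperature does at sampling time (the logits and token names are made up, just to show that non-argmax tokens get picked routinely):

```python
import numpy as np

def sample_with_temperature(logits: np.ndarray, temperature: float,
                            rng: np.random.Generator | None = None) -> int:
    """Sample a token index after temperature-scaling the logits.

    temperature < 1.0 sharpens the distribution (closer to greedy argmax);
    temperature > 1.0 flattens it, making "wrong" tokens more likely.
    """
    rng = rng or np.random.default_rng()
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())   # softmax with a stability shift
    probs /= probs.sum()
    return int(rng.choice(len(logits), p=probs))

# Toy vocabulary of three tokens with made-up logits. "poem" is not the
# argmax, yet at temperature 1.0 it is still sampled a meaningful fraction
# of the time - no hidden mind required.
logits = np.array([2.0, 1.2, 0.5])  # ["the", "poem", "<stop>"]
counts = [sample_with_temperature(logits, 1.0) for _ in range(10_000)]
print([round(counts.count(i) / len(counts), 3) for i in range(3)])
```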

      If the context window is noisy (as in, high-entropy) enough, any kind of “signal” (coherent text) can emerge.

      Also, you know, infinite monkeys.

      • kromem@lemmy.world · ↑1 · 18 hours ago

        Lol, you think the temperature was what was responsible for writing a coherent sequence of poetry leading to 4th wall breaks about whether or not that sequence would be read?

        Man, this site is hilarious sometimes.