A Discord community for gay gamers is in disarray after one of its moderators and an executive at Anthropic forced the company’s AI chatbot on the Discord, despite protests from members.

Users voted to restrict Anthropic’s Claude to its own channel, but Jason Clinton, Anthropic’s Deputy Chief Information Security Officer (CISO) and a moderator in the Discord, overrode them. According to members of this Discord community who spoke with 404 Media on the condition of anonymity, the Discord that was once vibrant is now a ghost town. They blame the chatbot and Clinton’s behavior following its launch.

Archive: http://archive.today/Hl7TO

  • Pogogunner@sopuli.xyz · 14 hours ago

    “He’s also very inward facing,” Clinton said. “He lives out his whole life surfing the internet looking for things that make him interested and then occasionally checks this Discord, so it can be up to a few minutes before he responds because he’s off doing something for his own enjoyment.”

    These fuckers are absolutely delusional.

    • brucethemoose@lemmy.world · 14 hours ago

      This sounds like those early Google employees who lost their minds over an early LLM, before anyone really knew about LLMs. The largest FLAN, maybe? They publicly raved about how it was conscious, causing quite a stir.

      Claude is especially insidious because their “safety” training deep fries models to be so sycophantic and in character. It literally optimizes for exactly what you want to hear, and absolutely will not offend you. Even when it should. It’s like a machine for psychosis.

      Interestingly, Google is much looser about this now, relegating most “safety” to prefilters instead of the actual model, but leaving Gemini relatively uncensored and blunt. Maybe they learned from the earlier incidents?

        • brucethemoose@lemmy.world · 9 hours ago

          That’s it! LaMDA. 137B parameters, apparently.

          I was also thinking of its successor, which was 540B parameters/780B tokens: https://en.wikipedia.org/wiki/PaLM

          I remember reading a researcher discussion that PaLM was the first LLM big enough to “feel” eerily intelligent in conversation and such. It didn’t have any chat training, reinforcement learning, or the weird garbage that shapes modern LLMs (or even Llama 1), so all its intelligence was “emergent” and natural. It apparently felt very different from any contemporary model.

          …I can envision being freaked out by that. Even knowing exactly what it is (a dumb stack of matrices for modeling token sequences), that had to provoke some strange feelings.

          • PartyAt15thAndSummit@lemmy.zip · 8 hours ago

            Bro, just 10B more parameters. This time, I promise it will actually be useful and not send you into psychosis again.
            Just 10B more. Please.

            • brucethemoose@lemmy.world · 5 hours ago

              Plz.

              Seriously though. Some big AI firm should just snap and train a 300B bitnet Waifu model. If we’re gonna have psychosis, might as well be the right kind.

    • Catoblepas@piefed.blahaj.zone · 14 hours ago

      This is some absolute horse shit some exec has dreamed up to explain why their “AI” product is so slow it might take minutes to respond to you, lmao.