• DandomRude@lemmy.world
    link
    fedilink
    English
    arrow-up
    32
    arrow-down
    1
    ·
    5 hours ago

    Even though Grok’s manipulation is blatantly obvious, I don’t believe most people will come to realize that those who control LLMs will naturally use that power to pursue their own interests.

    They will continue to use ChatGPT and so on uncritically and take everything at face value because it’s so nice and easy, overlooking or ignoring that their opinions, even their reality, are being manipulated by a few influential people.

    Other companies are more subtle about it, but from OpenAI to Microsoft, Google, and Anthropic, all cloud models are specifically designed to control people’s opinions. They are not objective, yet most users do not question them as they should, and that is what makes them so dangerous.

    • khepri@lemmy.world
      link
      fedilink
      English
      arrow-up
      6
      arrow-down
      2
      ·
      4 hours ago

      It’s why I trust my random unauditable Chinese matrix soup over my random unauditable American matrix soup, frankly.

        • khepri@lemmy.world
          link
          fedilink
          English
          arrow-up
          2
          arrow-down
          1
          ·
          4 hours ago

          naw, I mean more that the kind of people who would uncritically take everything a chatbot says at face value are probably better off in ChatGPT’s little curated garden anyway. Cause people like that are going to immediately get grifted by whatever comes along first no matter what, and a lot of those grifts are far more dangerous to the rest of us than a bot that won’t talk great replacement with you.

          • DandomRude@lemmy.world
            link
            fedilink
            English
            arrow-up
            1
            ·
            4 hours ago

            Ahh, thank you. I had misunderstood that, since DeepSeek is (more or less) an open-source LLM from China that can also be run and fine-tuned on your own hardware.

            • ranzispa@mander.xyz
              link
              fedilink
              English
              arrow-up
              2
              ·
              2 hours ago

              Do you have a cluster with 10 A100s lying around? Because that’s what it takes to run DeepSeek. It is open source, but it is far from accessible to run on your own hardware.

              • DandomRude@lemmy.world
                link
                fedilink
                English
                arrow-up
                1
                ·
                2 hours ago

                Yes, that’s true. It is resource-intensive, but unlike other capable LLMs, it is at least possible: not for most private individuals, given the requirements, but for companies with the necessary budget.

                • FauxLiving@lemmy.world
                  link
                  fedilink
                  English
                  arrow-up
                  4
                  ·
                  2 hours ago

                  They’re overestimating the costs. 4x H100s and 512GB of DDR4 will run the full DeepSeek-R1 model; that’s about $100k of GPUs and $7k of RAM. It’s not something you’re going to have in your homelab (for a few years at least), but it’s well within the budget of a hobbyist group or a moderately sized local business.

                  Since it’s an open-weights model, people have created quantized and distilled versions of it. Quantization stores each weight in fewer bits (and distillation shrinks the parameter count outright), which makes the RAM requirements a lot lower.

                  You can run quantized versions of DeepSeek-R1 locally. I’m running deepseek-r1-0528-qwen3-8b on a machine with an NVIDIA 3080 12GB and 64GB RAM. Unless you pay for an AI service and are using their flagship models, it’s pretty indistinguishable from the full model.

                  If you’re coding or doing other tasks that push the model, it’ll stumble more often, but for a ‘ChatGPT’-style interaction you couldn’t tell the difference between it and ChatGPT.
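
                  If anyone wants to try it, here’s roughly what that looks like with the llama-cpp-python bindings. This is just a sketch: the GGUF file name is a placeholder for whatever quantization you actually download.

                  ```python
                  # Sketch: chat with a quantized GGUF build of a DeepSeek-R1 distill
                  # using llama-cpp-python. The model file name is a placeholder.
                  from llama_cpp import Llama

                  llm = Llama(
                      model_path="DeepSeek-R1-0528-Qwen3-8B-Q4_K_M.gguf",  # placeholder file
                      n_gpu_layers=-1,  # offload all layers to GPU; lower this if 12GB VRAM isn't enough
                      n_ctx=8192,       # context window; reduce to save memory
                  )

                  resp = llm.create_chat_completion(
                      messages=[{"role": "user", "content": "Why does quantization reduce RAM use?"}]
                  )
                  print(resp["choices"][0]["message"]["content"])
                  ```

                  If the model doesn’t fully fit in VRAM, lowering n_gpu_layers keeps the remaining layers in system RAM, which is where the 64GB comes in.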

    • porcoesphino@mander.xyz
      link
      fedilink
      English
      arrow-up
      3
      ·
      5 hours ago

      There’s huge risk here, but I don’t think most are designed to control people’s opinions. I think most are chasing the cheapest option, and it’s expensive to have people upset about racist content, so they try to train around that, sometimes too much, leading to black Nazi images etc.

      But yeah, it is a power that will get abused by more than just Grok.

      • DandomRude@lemmy.world
        link
        fedilink
        English
        arrow-up
        2
        arrow-down
        1
        ·
        4 hours ago

        I use various AI models, and I repeatedly notice that certain information is withheld or misrepresented, even though it is freely available in abundance and is therefore almost certainly part of the training data.

        I don’t think this is a coincidence, especially since the operators of all cloud LLMs are so business-minded.

          • DandomRude@lemmy.world
            link
            fedilink
            English
            arrow-up
            1
            ·
            1 hour ago

            For example, objective information about Israel’s actions in Gaza. The International Criminal Court issued arrest warrants against leading members of the government a long time ago, and the UN OHCHR classifies the actions of the State of Israel as genocide. However, these facts are by no means presented as clearly as would be appropriate given the importance of these institutions. Instead, when asked whether Israel is committing genocide, one receives vague, meaningless answers.

            Only when specifically asked whether numerous reputable institutions actually classify Israel’s actions as genocide do most LLMs reveal that much, if not all, evidence points to this being the case.

            In my opinion, this is a deliberate method of obscuring reality, as the vast majority of users will not or cannot ask questions if they are unaware of the UN OHCHR’s assessment or do not know that arrest warrants have been issued against leading members of the Israeli government on suspicion of war crimes (many other reputable institutions have come to the same conclusion as the UN OHCHR and the International Criminal Court).

            Another example: if you ask whether it is legally permissible to describe Donald Trump as a rapist, you will be told that this is defamation. However, the judge in the Carroll case explicitly stated that this description applies to Trump, so it is in fact legally permissible to describe him as such. Again, this information is only available upon explicit request, if at all. This also distorts reality for people who are not yet informed. And since many people now turn to LLMs for information first, they end up misinformed, because they lack the background knowledge to ask explicit follow-up questions when given misleading answers.

            Given the influence of both Israel and the US president, I cannot help but suspect that there is an intention behind this.