In this video, I debunk the recent SciShow episode hosted by Hank Green about artificial intelligence. I break down why the comparison between AI development and the Manhattan Project (atomic power) is factually incorrect, investigate the sponsor, Control AI, and show how industry propaganda is shifting focus toward hypothetical extinction risks to distract from real-world issues like disinformation and regulatory accountability. I also fact-check OpenAI’s claims about the International Math Olympiad and Anthropic’s AI alignment bioweapon tests.

00:00 I wish this wasn’t happening

00:32 SciShow’s Lie Overview

01:58 Intro

02:15 Biggest Lie on the SciShow Video

04:44 Biggest Omission in the SciShow Video

05:56 The “Statement on AI” that SciShow Omits

08:57 Summary of Most Important Points

09:23 Claim about International Math Olympiad Medal

09:50 Misleading Example about AI Alignment

11:20 Downplaying “practical and visible” problems

11:53 Essay I debunked from Anthropic CEO

12:06 Video on Hank’s Personal Channel

12:31 A Plea for SciShow and others to do better

13:02 Wrap-up

  • James R Kirk@startrek.website · 4 days ago

    LLMs ≠ AI. I wish more people in the media would realize that even the most advanced LLM possible cannot achieve “AGI”. That is just not how they work. It’s like saying that if you make a car that can spin its wheels fast enough, it can go to space. That’s not what wheels do.

    • undeffeined@lemmy.ml · 3 days ago

      Cannot upvote this enough. These tools are not intelligent!! Sure, they can be useful to specialists who check the outputs and select what is correct. For the masses, the way it’s being pushed? Hell no!

    • Frezik@lemmy.blahaj.zone · edited · 3 days ago

      LLMs are AI. No, they’re not going to get to “AGI”, but this idea that they aren’t connected doesn’t match how the field has evolved.

      If you’re unaware of how the MIT model railroading club is one of the most important groups in the history of AI, then do some reading.

        • Frezik@lemmy.blahaj.zone · edited · 3 days ago

          Not how it works.

          The field of AI has been about making computers do things they couldn’t before. Even if they’re just “predicting the next token”, LLMs are a significant leap over Markov Chains (which also predict the next token, but produce output that’s more funny than useful).
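          [For context on the comparison above: a Markov chain text model really is just a lookup table of observed successors, sampled at random. A minimal bigram sketch in Python — the corpus and function names here are illustrative, not from any particular library:]

```python
import random
from collections import defaultdict

def build_bigram_model(tokens):
    """Map each token to the list of tokens observed immediately after it."""
    model = defaultdict(list)
    for current, nxt in zip(tokens, tokens[1:]):
        model[current].append(nxt)
    return model

def predict_next(model, token, rng=random):
    """Sample a successor, weighted by how often it followed `token` in the corpus."""
    successors = model.get(token)
    return rng.choice(successors) if successors else None

corpus = "the cat sat on the mat the cat ran".split()
model = build_bigram_model(corpus)
print(predict_next(model, "cat"))  # prints "sat" or "ran" — the only tokens seen after "cat"
```

          [Unlike an LLM, the model has no notion of context beyond the single previous token, which is why its output tends toward the “more funny than useful” end.]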

          Again, if you’re unaware of the history of MIT CSAIL, then you really shouldn’t be opining on what is and isn’t AI.

          • Cethin@lemmy.zip · 3 days ago

            There’s a difference between the field developing more advanced technology towards AI and calling every piece of that AI. Yes, this is part of a larger field that has worked on this for decades. The previous stuff wasn’t called AI, and this shouldn’t be either. It’s only the companies selling a product who started that.

            • Frezik@lemmy.blahaj.zone · 3 days ago

              Would you consider Conway’s Game of Life to be AI? Because the field certainly did back in the day, and it’s less impressive than LLMs.

              • Cethin@lemmy.zip · 3 days ago

                No they fucking didn’t. That’s absurd. They may have talked philosophically about whether it was alive. No one thought it was intelligent. You can look at the code and know that. They called it AI the way video games do, maybe, not the way the academic field does.

                • Frezik@lemmy.blahaj.zone · 3 days ago

                  It was developed by academics in the first place. It’s AI because it was developed by AI researchers.

                  That’s how it works. You build knowledge by making these little pieces. LLMs are one of those pieces. It won’t get to full human intelligence on its own, but it might be part of what gets there.

                  • Cethin@lemmy.zip · 3 days ago

                    Not everything AI researchers develop is suddenly AI. That’s my point, and they know that. What you’re implying is that as soon as the field developed AI existed, and not before. It being made by AI researchers is not the definition of AI.

                    It’s also not an issue of it not being full human intelligence. It isn’t intelligent at all. It doesn’t think about what it outputs. It’s just a statistical model. It’s a very advanced statistical model that creates the appearance of intelligence, but it isn’t intelligent.