In this video, I debunk the recent SciShow episode hosted by Hank Green about artificial intelligence. I break down why the comparison between AI development and the Manhattan Project (atomic power) is factually incorrect. We also investigate the episode’s sponsor, Control AI, and expose how industry propaganda is shifting focus toward hypothetical extinction risks to distract from real-world issues like disinformation and regulatory accountability. Finally, we fact-check OpenAI’s claims about the International Math Olympiad and Anthropic’s AI-alignment bioweapon tests.

00:00 I wish this wasn’t happening

00:32 SciShow’s Lie Overview

01:58 Intro

02:15 Biggest Lie on the SciShow Video

04:44 Biggest Omission in the SciShow Video

05:56 The “Statement on AI” that SciShow Omits

08:57 Summary of Most Important Points

09:23 Claim about International Math Olympiad Medal

09:50 Misleading Example about AI Alignment

11:20 Downplaying “practical and visible” problems

11:53 Essay I debunked from Anthropic CEO

12:06 Video on Hank’s Personal Channel

12:31 A Plea for SciShow and others to do better

13:02 Wrap-up

  • Valmond@lemmy.world
    3 days ago

    What a cute thought!

    No one knows how “everything” works in old monolithic software. You just have to try and see what happens, and often you just don’t touch certain codebases because nobody really knows the ramifications of changing something in them. Windows 11 is probably way worse than any LLM. Try to share a simple folder on a simple home network and you’ll see some of the cruft.

    Source: I have worked on 30–40 year old monolithic software. In not one of those projects was there a single “engineer” who knew it all.

    Neural networks have their fuzzy parts, of course, but software stopped being fully understandable a long time ago. IMO.

    • magic_lobster_party@fedia.io
      3 days ago

      Of course, no single person fully understands the entirety of Windows. But I hope the people working on Windows understand at least parts of it.

      The thing with LLMs is that no one really understands the purpose of a single neuron, how it relates to all the other neurons, or how together they seem to be able to generalize high-level concepts like the Golden Gate Bridge. It’s just too much to map out.

      • Valmond@lemmy.world
        2 days ago

        We do know how a single “neuron” relates to other neurons; it’s all in the model. What gets complicated is the vast number of them, of course.

        So yes, we don’t intrinsically get to understand it all, but I think we can understand what it does, a bit like Windows 😁/j.
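
        To make that concrete, here’s a minimal NumPy sketch (the sizes and names are just my own illustration, not any real model) of the point that every neuron-to-neuron connection is an explicit, inspectable number stored in the model:

        ```python
        import numpy as np

        # Toy 2-layer network: every connection between "neurons" is an
        # explicit entry in a weight matrix; nothing is hidden from us.
        rng = np.random.default_rng(0)
        W1 = rng.normal(size=(4, 3))  # 3 inputs -> 4 hidden neurons
        W2 = rng.normal(size=(2, 4))  # 4 hidden neurons -> 2 outputs

        x = np.array([1.0, 0.5, -0.2])
        hidden = np.maximum(0, W1 @ x)  # ReLU activations
        out = W2 @ hidden

        # The "relation" between hidden neuron 2 and output neuron 0
        # is just one number we can read off directly:
        print(W2[0, 2])
        ```

        Reading off one weight is trivial; the hard part is that real LLMs have billions of these, so interpreting what any neuron collectively *means* doesn’t scale the same way.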

        Fascinating subject, and we’re just scratching the surface, IMO.