In this video, I debunk the recent SciShow episode about Artificial Intelligence hosted by Hank Green. I break down why the comparison between AI development and the Manhattan Project (the atomic bomb) is factually incorrect, investigate the episode’s sponsor, Control AI, and expose how industry propaganda is shifting focus toward hypothetical extinction risks to distract from real-world issues like disinformation and regulatory accountability. I also fact-check OpenAI’s claims about the International Math Olympiad and Anthropic’s AI-alignment bioweapon tests.

00:00 I wish this wasn’t happening
00:32 SciShow’s Lie Overview
01:58 Intro
02:15 Biggest Lie in the SciShow Video
04:44 Biggest Omission in the SciShow Video
05:56 The “Statement on AI” that SciShow Omits
08:57 Summary of Most Important Points
09:23 Claim about International Math Olympiad Medal
09:50 Misleading Example about AI Alignment
11:20 Downplaying “practical and visible” problems
11:53 Essay I debunked from the Anthropic CEO
12:06 Video on Hank’s Personal Channel
12:31 A Plea for SciShow and others to do better
13:02 Wrap-up

  • very_well_lost@lemmy.world · 4 days ago

    “We have no idea how it works”

    I’m so sick of seeing this bullshit.

    You may not know how it works, and the AI industry probably wants you to think that no one knows how it works, but it’s just not true.

    Generative pre-trained transformers are well understood, well documented, and there’s no shortage of resources freely available online to teach you how they work. Ditto for other advanced AI systems.

    They are complex, sure, but they’re not inscrutable. Saying that no one knows how AI works is like saying no one knows how the weather works — which again, is simply not true. Weather is complicated and its behavior is hard to predict because of the massive number of variables involved, but we know how it works at a fundamental level. It’s not magic, it’s not angels bowling or whatever.

    AI is just software, and we know how it fucking works.
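
    Don’t take my word for it, either: the core computation of a GPT fits in a few lines. A minimal sketch (single attention head, no learned Q/K/V projections, purely illustrative; real models stack dozens of layers of this plus MLP blocks):

    ```python
    # Scaled dot-product self-attention, the central op of a transformer.
    import numpy as np

    def softmax(z, axis=-1):
        z = z - z.max(axis=axis, keepdims=True)  # numerical stability
        e = np.exp(z)
        return e / e.sum(axis=axis, keepdims=True)

    def self_attention(X):
        """X: (seq_len, d_model) token vectors -> same shape."""
        d = X.shape[-1]
        scores = X @ X.T / np.sqrt(d)  # how much each token attends to each other
        weights = softmax(scores)      # rows sum to 1
        return weights @ X             # weighted mix of token representations

    X = np.random.default_rng(0).normal(size=(5, 16))  # 5 tokens, 16 dims
    print(self_attention(X).shape)  # (5, 16)
    ```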

    • arnitbier@sh.itjust.works · edited · 2 days ago

      The general understanding seems to be that LLMs, which are almost comprehensively understood, are not the same thing as artificial intelligence, which for the most part is only conceptually understood. It’s still too new and not fully tested, like chemistry back when it was still being worked out. We definitely know some of how it might work, but MOST of it will be developed and debated over a long period still to come. So it’s entirely fair to say we don’t know how long that will take or how fast it will develop, because we don’t yet have enough information to establish that. That includes not having the minutiae of how the DEVELOPED systems truly operate, which is what most people are taught about these days and (I think) what they were pointing out.

      On that note, it seems worth naming the bigger problem that’s pissing people off so much: the TERMS used to describe the issue, and the lack of a concrete, agreed-upon understanding of the subject we’re even discussing, make this tough to get through without everybody being wrong in some capacity or another.

      So you are definitely kinda wrong, and they might be too, though I don’t think they really are. And to some degree, people in the damn field of AI right fucking now will be wrong too.

      So, grace, brother: remember the learning process, and if your goal is to educate, there are more effective ways than that. But please keep participating, remembering that most people here are simply trying to add their relevant experience and should be treated as such.

      Edit: Y’all are… AGI doesn’t exist, we don’t know how to make it, and we can’t predict even slightly how fast ANI is going to develop. Then there’s the fact that machine-learning, so-called AI, is NOT generally considered AI by the actual developed standards in the field, and just because researchers UNDERSTAND how machine learning works really well and have some blueprints for what he calls “advanced AI systems” (which are still just machine-learning systems) doesn’t suddenly change any of this.

      We don’t know how intelligence works, so we can’t yet know how AI works. We know the system does this if we do that. It’s reminiscent of chemistry in its early days.

      So the problem HERE becomes that “AI”, “AI”, and “AI” all mean different things to different people, even in the field of AI, where it also means something special. To quote OP: “Im so fucking sick of seeing that bullshit” 💀

      Stop bullshitting that you know what people are talking about in a field as brand spanking new (read: underdeveloped) as this. But people do get your frustration with this one thing, even if they’re totally wrong about it.

    • magic_lobster_party@fedia.io · 4 days ago

      We know how each individual part works. That’s just basic math.

      We don’t know for sure how all trillion parts together produce the results they do. You can’t debug the model step by step to see how the prompt “generate image of a penguin” produces an image of a penguin and not a polar bear. That’s what people mean by “we don’t know how AI works”.
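
      For a sense of what “basic math” means here, this is one such part in isolation, a single artificial neuron (illustrative sketch only):

      ```python
      # One "part" of a neural network: a weighted sum plus a nonlinearity.
      # Each part is transparent in isolation; the opacity comes from
      # composing billions of them with jointly trained weights.
      import numpy as np

      def neuron(x, w, b):
          return max(0.0, float(np.dot(w, x) + b))  # ReLU(w·x + b)

      x = np.array([0.5, -1.2, 3.0])   # inputs from the previous layer
      w = np.array([0.8, 0.1, -0.4])   # learned weights
      print(neuron(x, w, b=0.2))       # one fully explainable number
      ```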

      • very_well_lost@lemmy.world · 4 days ago

        Okay, but who cares? “Complex systems are difficult to predict” is a mathematical insight that’s like 2 centuries old at this point… and it hasn’t hindered us at all from gaining deep insights into how both individual complex systems work and how complex systems as a general class of phenomena work. I can’t keep track of all the masses and velocities of every individual air molecule in the room I’m sitting in, but I still know how the interactions of those particles give rise to the temperature and air pressure and general behavior of the atmosphere in the room.

        People know how this shit works, and anyone telling you otherwise is either willfully ignorant or intentionally lying to you to feed a hype cycle with an end goal of making your life worse. People can’t afford to remain uneducated about this stuff anymore.

        • magic_lobster_party@fedia.io · 4 days ago

          What’s interesting is how these complex models produce anything useful at all. We could very well have complex models that don’t produce anything other than random noise.

          • Prunebutt@slrpnk.net · 3 days ago

            The reason why “we” have these models is that they were deliberately trained not to output random noise. That part is well understood.

            The only reason we don’t know exactly what makes the model output an image of Garfield with boobs is the sheer amount of data to sift through, not a lack of understanding of the underlying processes.
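
            As a toy illustration of “deliberately trained” (a made-up one-weight example, nothing like a real training run): gradient descent keeps nudging the weights toward outputs that score well on the loss, which is exactly why the result isn’t noise.

            ```python
            # Fit y = 2x with one weight and mean-squared error.
            w = 0.0                      # start out producing garbage
            xs = [1.0, 2.0, 3.0, 4.0]
            ys = [2.0, 4.0, 6.0, 8.0]    # targets generated by y = 2x

            for step in range(100):
                grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
                w -= 0.01 * grad         # nudge w downhill on the loss surface

            print(w)  # ~2.0: the training objective, not chance, shaped this
            ```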

            • magic_lobster_party@fedia.io · 3 days ago

              Generalization is not a given. It’s possible to build complex models that perfectly memorize 100% of the training data but produce garbage the moment the input diverges ever so slightly from it.

              How this generalization happens is not fully understood. Earlier architectures struggled to reach this level of generalization, but transformers seem to handle it well.
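
              A classic toy version of the memorization failure mode (polynomial fitting, not a transformer; purely illustrative):

              ```python
              # A degree-9 polynomial "memorizes" 10 training points exactly,
              # then falls apart just outside the training distribution.
              import numpy as np

              rng = np.random.default_rng(0)
              x_train = np.linspace(0, 1, 10)
              y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.1, 10)

              coeffs = np.polyfit(x_train, y_train, deg=9)  # enough capacity to memorize
              print(np.polyval(coeffs, x_train) - y_train)  # ~0 everywhere: perfect recall
              print(np.polyval(coeffs, 1.05))               # just off-distribution: garbage
              ```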

              • Prunebutt@slrpnk.net · 3 days ago

                Avoiding overfitting is hard, yes. But it’s not “we have no idea how/why this works” hard.

        • magic_lobster_party@fedia.io · 3 days ago

          Windows 11 is programmed by Microsoft engineers. I’m sure they have a good idea how it works. When you click a button, you get predictable results.

          Neural networks are a different story. It’s difficult to predict what’s going to happen for a given prompt, or how adjustments to the weights will affect the results.

          There’s an article from last year where Anthropic found a “Golden Gate Bridge” feature in Claude (strictly a feature extracted with a sparse autoencoder, not one literal neuron). Clamping it to be always active caused the model to mention the Golden Gate Bridge in nearly every response. How and why this works is AFAIK not fully understood. For some reason the model managed to condense the concept of the Golden Gate Bridge into a single feature.
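
          A toy sketch of the clamping idea (hypothetical two-layer network; Anthropic’s actual experiment clamped a sparse-autoencoder feature inside Claude, which is far more involved):

          ```python
          # Clamp one hidden feature during the forward pass and watch the
          # output shift toward that feature's direction.
          import numpy as np

          rng = np.random.default_rng(0)
          W1 = rng.normal(size=(8, 4))   # input -> hidden
          W2 = rng.normal(size=(4, 3))   # hidden -> output

          def forward(x, clamp_feature=None, clamp_value=10.0):
              h = np.maximum(0, x @ W1)          # hidden activations (ReLU)
              if clamp_feature is not None:
                  h[clamp_feature] = clamp_value # force one feature "always on"
              return h @ W2

          x = rng.normal(size=8)
          print(forward(x))                   # normal output
          print(forward(x, clamp_feature=2))  # dominated by feature 2's direction
          ```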

          • Valmond@lemmy.world · 3 days ago

            What a cute thought!

            No one knows how “everything” works in old monolithic software either. You just have to try things and see what happens, and often you simply don’t touch certain codebases because nobody really knows the ramifications of changing them. Windows 11 is probably way worse than any LLM; try to share a simple folder on a simple home network and you’ll see some of the cruft.

            Source: I’ve worked on 30-to-40-year-old monolithic software. In not one of those projects was there a single “engineer” who knew it all.

            Neural networks have their fuzzy parts, of course, but software stopped being fully understandable a long time ago. IMO.

            • magic_lobster_party@fedia.io · 3 days ago

              Of course, no single person fully understands the entirety of Windows. But I’d hope the people working on Windows understand at least their part of it.

              The thing with LLMs is that no one really understands the purpose of any single neuron, how it relates to all the other neurons, or how together they manage to generalize high-level concepts like the Golden Gate Bridge. It’s just too much to map out.

              • Valmond@lemmy.world · 2 days ago

                We do know how a single “neuron” relates to the other neurons; it’s right there in the model. What gets complicated is the vast number of them, of course.

                So yes, we don’t get to understand it all intrinsically, but I think we can understand what it does, a bit like Windows 😁/j.

                Fascinating subject, and we’re just scratching the surface, IMO.