• Thorry@feddit.org
    2 days ago

    Good video, one bit of criticism though.

    They state that AI summarizes websites instead of sending those websites traffic, which is true. This is obviously a bad thing, since those websites can’t exist without that traffic (on top of being bombarded with requests from bots collecting data for AI training). They also state AI plagiarizes without giving credit, also a true and bad thing. But then in the part where they explain how they are going to use AI, they say they will use it to write little scripts for their animations and such, and as a quick Google alternative.

    Have to call out the hypocrisy here. Those things you said were bad, that contribute to the end of the web and the end of your channel, you are going to simply use? OK, it’s a good thing you aren’t going to use the AI in the research and writing stage of the video, but elsewhere is just fine?

    • Chozo@fedia.io
      2 days ago

      It’s not hypocrisy. “AI” isn’t just one single thing, and the way it’s used and implemented are more important than whether or not to use it at all. Any tool can be used for right or for wrong, it just depends on who is using it and what they’re doing with it.

      • Thorry@feddit.org
        2 days ago

        The ways they say they are going to use AI is exactly what they said was causing harm. If that isn’t hypocrisy, what is?

        They call out the issues, only to completely ignore those issues in their own use.

        • riot@fedia.io
          23 hours ago

          The ways they say they are going to use AI is exactly what they said was causing harm.

          I disagree. The examples of traffic not going to websites and the stealing of content to train models are two things among several that they state in the very beginning of the video, as an introduction to how AI slop is invading many different sectors of the internet.

          Starting at 01:43:

          While this is sad and frustrating, what’s even worse is that generative AI truly has the potential to break the internet irreversibly. By making it harder and harder to tell what is true.

          The beginning part stands out to me as things that they think are also bad, but not really what they consider the worst thing about AI. To me, their main concern is the fact that AI hallucinates and makes things up, so as you say, they won’t use it for research and writing. But they will let their animators use AI programming tools to, for example, speed up writing expressions for use in After Effects.
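          To make that use case concrete: an After Effects “expression” is a small JavaScript snippet that drives a layer property, and these are the kinds of throwaway helpers an AI coding tool might draft for an animator. The sketch below is purely illustrative (none of it is from the video): a deterministic stand-in for After Effects’ built-in `wiggle(freq, amp)` written in plain JavaScript, with a seeded generator so the jitter is repeatable.

          ```javascript
          // Illustrative only: a plain-JavaScript approximation of the kind of
          // small expression helper an AI tool might draft for an animator.
          // All names here are hypothetical, not taken from the video.

          // Seeded pseudo-random generator (linear congruential), so the
          // "wiggle" is deterministic and repeatable across renders.
          function seededRandom(seed) {
            let state = seed >>> 0;
            return function () {
              state = (state * 1664525 + 1013904223) % 4294967296;
              return state / 4294967296; // uniform in [0, 1)
            };
          }

          // Offset a base value by a bounded pseudo-random jitter, roughly what
          // wiggle(freq, amp) does to a property over time in After Effects.
          function wiggle(baseValue, amplitude, time, freq, seed = 1) {
            const rand = seededRandom(seed + Math.floor(time * freq));
            return baseValue + (rand() * 2 - 1) * amplitude;
          }

          // Example: jitter a layer's x position of 100px by up to ±5px.
          const x = wiggle(100, 5, 0.5, 2);
          console.log(x); // always within [95, 105], same value every run
          ```

          The point of the determinism here is that an animator can re-render a shot and get identical motion, which the real `wiggle()` also guarantees per layer via its internal seed.
          
          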

          However, their line added at the end about using AI as a “faster Google alternative” is very open-ended and gives me pause. I’m very curious what exactly they mean by that, because at first listen it could sound like a slippery slope into not fact-checking things. So I checked out the sources link that they always have in the video description, emphasis theirs:

          One key driver in the development of “AI Slop” is a lack of oversight. Whether intentionally (to save money, or to mislead) or unintentionally, if generative AI is put on a task and the results are not checked for quality and factuality, low-quality content is the typical result. But the good news is that we can oversee it, and check/change/edit the results before we share them with the world. And then the output quality can be much improved, turning an AI-slop generator into an amazing tool for humans.

          TL;DR - But this is still pretty long: What I take away from their video is that they think misinformation is what will contribute to the end of the web and the end of their channel: people using AI to pump out misleading and untrue content at a pace and scale that no human content creator or educator can outpace or even keep up with. Essentially, I believe they see it this way: the tools are already out there and won’t be going away, so they are going to try to use them responsibly to help with mundane tasks. I don’t know if I would consider it ethical, but I disagree that ‘The ways they say they are going to use AI is exactly what they said was causing harm.’

      • Goretantath@lemmy.world
        2 days ago

        Well then they shouldn’t use the buzzword; they should name the exact tools they will be using, instead of being lazy.

    • starchylemming@lemmy.world
      1 day ago

      Well, they tell us what we already know, but it’s not a bad video.

      A little less plugging of their calendar and it would be something to send to people who are way too enthusiastic about the slop maker.