• ssillyssadass@lemmy.world

    I feel like AI generated videos of very active things like protests would be easy to debunk. Just look at the people in the background and point out the people disappearing after moving behind another person, or look for changing faces.

    • Jason2357@lemmy.ca

      The article mentions videos with literal AI watermarks being passed around as if they were real. The targets want to believe they're true and will ignore anyone debunking them.

    • ReallyActuallyFrankenstein@lemmynsfw.com

      You haven’t seen a lot of Sora 2 videos. In many of them, the identifiable traits take a pretty careful eye to spot.

      You absolutely can’t rely on the watermark either: removing it is trivial for the nation-states running disinfo campaigns, and even for end users, stripping a watermark is trivial compared to building a public AI video model in the first place.

      • Bongles@lemmy.zip

        Even that little blur effect people use to get rid of watermarks: sure, it’s Sora, but it could just as easily be a TikTok username, which people blur out all the time. So the people who want to believe it, will.

    • shalafi@lemmy.world

      It doesn’t matter how easy it is to debunk. I assume you were around for COVID and Trump’s first term?