I’d like to be able to get a rating of whether an article I’m reading is likely to be LLM-generated, as a measure of how much I should trust it. Ideally I’d like this in my browser as an extension, alongside uBlock Origin and Consent-O-Matic.

Does anyone know if such a thing exists? A quick look through the extension store turned up Winston, but it’s a paid extension and I’d rather have something free.

  • Sylra@lemmy.cafe
    3 points · 4 days ago

    Stick to a small circle of trusted people and websites. Skip mainstream news. Small blogs, niche forums, and tiny YouTube channels are often more honest.

    Avoid Google for discovery. It’s not great anymore. Use DuckDuckGo, Qwant, or Yandex instead. For deeper but less precise results, try Mojeek or Marginalia. Google works okay only if you’re searching within one site, like site:reddit.com.

    Sometimes, searching in other languages helps find hidden gems with less junk. Use a translator if needed.

    • lichtmetzger@discuss.tchncs.de
      2 points · edited · 4 days ago

      Avoid Google for discovery. It’s not great anymore. Use DuckDuckGo, Qwant, or Yandex instead. For deeper but less precise results, try Mojeek or Marginalia.

      I went with Kagi. It costs some money, but they have a special deal with Google for access to their search API. It’s basically Google’s results without the AI slop, ads, and BS. It works just like Google did back in the good old days.

      Let’s see how long this will last. But I will enjoy it as long as I can.

      Kagi also has a news section. I’m not entirely sure which sites they pull from, but at first glance it looks cleaner and less sloppy than the offerings from the big players.

      Edit: They explain how they aggregate that news here.

  • LesserAbe@lemmy.world
    11 points · 5 days ago

    From what little I’ve read, many organizations claim they can detect AI-written content (I don’t know about browser plugins), but there’s little evidence that they can actually do so accurately.

    • Sylra@lemmy.cafe
      1 point · 4 days ago

      Tools like Turnitin or GPTZero don’t work well enough to trust. The real issue isn’t just detecting AI writing; it’s doing so without falsely accusing students. Even a 0.5% false positive rate is too high when someone’s academic future is on the line. I’m more concerned about wrongly flagging human-written work than about missing AI use. These tools can’t explain why they suspect AI, and at best they only catch the obvious cases, ones you’d likely notice yourself anyway.
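      To put that 0.5% figure in perspective, here is a quick back-of-the-envelope sketch of the base-rate problem. All the numbers (class size, AI-use rate, detection rate) are hypothetical assumptions, not measurements of any real detector:

```python
# Hypothetical illustration of the base-rate problem with AI detectors.
# The inputs below are assumed for the sake of argument.

essays = 10_000       # essays checked
ai_rate = 0.02        # assume 2% of essays are actually AI-written
tpr = 0.90            # assume the detector catches 90% of AI essays
fpr = 0.005           # the 0.5% false positive rate discussed above

ai_essays = essays * ai_rate
human_essays = essays - ai_essays

true_flags = ai_essays * tpr       # AI essays correctly flagged
false_flags = human_essays * fpr   # honest students falsely accused

# Of all flagged essays, what fraction were actually AI-written?
precision = true_flags / (true_flags + false_flags)

print(f"{false_flags:.0f} honest students flagged")
print(f"P(actually AI | flagged) = {precision:.1%}")
```

      Under these assumed numbers, dozens of honest students get flagged and roughly one in five accusations is wrong, which is the point: even a tiny false positive rate does real damage at scale.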

    • ZDL@lazysoci.al
      1 point · 4 days ago

      The LLM grifters have, indeed, spawned a spin-off community of grifters targeting the anti-LLM community.

      It’s grifters all the way down.

  • lechekaflan@lemmy.world
    2 points · edited · 4 days ago

    Sora-generated videos are now disturbingly close to realistic, with frame rates and image quality comparable to smartphone camera footage, which makes automated detection difficult.

  • The Infinite Nematode@feddit.ukOP
    1 point · 4 days ago

    Thanks everyone for your replies. I guess the days of believing online reviews are at an end, then. I wonder what emerges from this. Presumably some kind of mutual trust score that directs our searches to sources we believe; I think Klout tried something like this a few years ago.

  • technocrit@lemmy.dbzer0.com
    3 up, 2 down · edited · 5 days ago

    There is no such thing as “AI” detection because “AI” doesn’t exist.

    If you’re talking about avoiding generated content, I don’t think that’s realistic either. Any “test” is bound to become less and less accurate as generated content gets better and is made intentionally harder to detect.