• jsomae@lemmy.ml · 4 days ago

      This is why LLMs should only be employed in cases where a 60% error rate is acceptable. In other words, almost none of the places where people are currently being hyped to use them.

    • henfredemars@infosec.pub · 4 days ago

      This is an excellent exploit of the human mind. An AI being convincing and an AI being correct are two very different things.

  • incogtino@lemmy.zip · 4 days ago

    The Gell-Mann amnesia effect is a cognitive bias describing the tendency of individuals to critically assess media reports in a domain they are knowledgeable about, yet continue to trust reporting in other areas despite recognizing similar potential inaccuracies.

  • moopet@sh.itjust.works · 4 days ago

    To be fair, this is how most things work. It’s amazing how many science stories get published in popular titles like “New Scientist” and sound believable, yet every time one’s appeared that’s on a subject I know well, it’s been terribly misrepresentative…

    • biofaust@lemmy.world · 4 days ago

      I think it has gotten worse there since we started calling article-spewing “journalism.” Still, there are at least more guardrails in that domain.

  • Cruxifux@feddit.nl · 4 days ago

    Something trained on internet content, which is 90 percent bullshit, being right even 40 percent of the time is still pretty wild. But it also means we can’t be using this for knowledge.