• waigl@lemmy.world · 19 hours ago

    It’s even worse: it answers questions correctly enough that most people cannot tell the difference – but still not reliably correctly. Meaning you get answers that sound very convincing, but could easily still be dead wrong.

    • merc@sh.itjust.works · 6 hours ago

      It works by predicting the most likely words to follow the sequence you already have, with a bit of noise added. The result is that if you ask it a question, it is effectively designed to sound as much like an answer as possible. Whether or not that answer is true is out of scope, and not something that technology could ever consider.
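
      A minimal toy sketch of that idea, assuming nothing about how any real model is built (the word table, the probabilities, and the "temperature" noise knob below are invented purely for illustration):

      ```python
      import random

      # Invented toy "model": for each word, a table of likely next words.
      NEXT_WORD_PROBS = {
          "the": {"answer": 0.5, "question": 0.3, "truth": 0.2},
          "answer": {"is": 0.7, "sounds": 0.3},
          "is": {"42": 0.4, "wrong": 0.3, "convincing": 0.3},
      }

      def sample_next(word, temperature, rng):
          """Pick a next word: 'most likely words to follow, plus a bit of noise'.

          Nothing here asks whether the resulting sentence is true; the only
          objective is producing something that reads like plausible text.
          """
          table = NEXT_WORD_PROBS.get(word, {"[end]": 1.0})
          words = list(table)
          # Temperature is the noise knob: higher means more random choices.
          weights = [p ** (1.0 / temperature) for p in table.values()]
          return rng.choices(words, weights=weights, k=1)[0]

      def generate(start, length=5, temperature=0.8, seed=None):
          rng = random.Random(seed)
          out = [start]
          for _ in range(length):
              nxt = sample_next(out[-1], temperature, rng)
              if nxt == "[end]":
                  break
              out.append(nxt)
          return " ".join(out)

      # Always produces something answer-shaped; never checks it against facts.
      print(generate("the", seed=1))
      ```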

      I like to talk about it as if it’s the world’s best prop master. You ask for a prop, and you’ll be given one of the most realistic props imaginable. You want a medical chart, it will give you a chart that might fool a doctor. You ask for a legal brief, it will give you one that might just fool a judge in court. If you ask it for a computer program, what it spits out might actually compile and/or run. But, of course, these are props. They’re only designed to look good on camera. At most, someone will stream it in 4K, pause it, and try to read the prop while it’s on screen.

      As someone who has been annoyed with props for decades, I love that. No more “computer code” scenes where it’s just random gobbledygook. But, someone trying to use the output as if it’s real is just as clever as someone who tries to spend prop money from a movie.

    • WalrusDragonOnABike [they/them]@reddthat.com · 19 hours ago

      Except when you ask it for the meaning of an acronym and it says something with totally different letters. Yet people treat it as a source on things they know so little about that they cannot possibly tell it’s just spitting out nonsense.

      • Ech@lemmy.ca · 2 hours ago

        it’s just spitting out nonsense

        That’s exactly it. LLMs and their image counterparts have no innate or burgeoning knowledge, whatever people tend to assume. Their singular, core function is to generate output from literal random noise, like the static you used to see on TV. So the response to the same question changes because the random noise changed, not because the algorithm learned or reconsidered anything. If you used the same noise, the answer would be identical. No knowledgeable or self-sufficient AI will ever evolve from that.
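
        A toy illustration of that point, with a made-up stand-in for the sampler (not any real LLM API): the output is a pure function of the question plus the noise seed, so the same noise reproduces the same answer exactly, and different noise gives a different one with no learning in between.

        ```python
        import random

        def toy_generate(prompt, seed):
            """Stand-in for an LLM sampler: output depends only on the prompt
            and the random noise (seed). No knowledge is consulted anywhere."""
            rng = random.Random(f"{prompt}|{seed}")  # deterministic, noise-driven
            vocab = ["hyper", "text", "transfer", "protocol", "toast", "penguin"]
            return " ".join(rng.choice(vocab) for _ in range(4))

        question = "What does HTTP stand for?"

        # Same question, same noise -> character-for-character identical output.
        print(toy_generate(question, seed=42))
        print(toy_generate(question, seed=42))

        # Same question, different noise -> a different answer. Nothing learned
        # or reconsidered in between; only the static changed.
        print(toy_generate(question, seed=7))
        ```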

      • Diurnambule@jlai.lu · 7 hours ago

        It lets you go from zero knowledge on a subject to 0.1 knowledge, which lets people with a bit of sense save a little time at the start of a project. It is still an ecological disaster, though. The people comparing it to the Ring have it closest, I think.

      • trolololol@lemmy.world · edited · 17 hours ago

        Just like your average uncle. Or an ultra-right podcaster.

        It’s no surprise LLMs behave like the most vocal and dubiously confident people in the world.