• skarn@discuss.tchncs.de · 2 days ago

    It’s still leagues ahead of LLMs. I’m not saying it’s entirely impossible to build a computer that surpasses the human brain in actual thinking. But LLMs ain’t it.

    The feature set of the human brain is different in a way that you can’t compensate for just by increasing scale. So you end up with something that almost works, but not quite, while burning several orders of magnitude more power.

    We optimize and learn constantly. We have chunking, whereby a complex idea becomes simpler for our brain once it’s been processed a few times, and this lets us work on more and more complex ideas without any increase in working memory. And a lot of other stuff.
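
    A toy sketch of what that chunking effect buys you (purely illustrative: the threshold, slot count, and digit example are invented, and this says nothing about how the brain actually does it):

```python
# Toy illustration of chunking: once a pattern has repeated enough times it
# collapses into a single unit, so a fixed-size working memory can hold more
# structure overall. The threshold and slot count below are invented.
from collections import Counter

WORKING_MEMORY_SLOTS = 4   # fixed capacity that never grows
CHUNK_THRESHOLD = 3        # repetitions needed before a pattern is "learned"

def learn_chunks(history, size=3):
    """Collect sub-sequences that repeat often enough to be treated as one unit."""
    windows = [tuple(history[i:i + size]) for i in range(len(history) - size + 1)]
    return {w for w, n in Counter(windows).items() if n >= CHUNK_THRESHOLD}

def load_into_memory(sequence, chunks, size=3):
    """Greedily replace learned chunks with single symbols and return the slots used."""
    memory, i = [], 0
    while i < len(sequence):
        window = tuple(sequence[i:i + size])
        if window in chunks:
            memory.append(window)   # one learned chunk occupies one slot
            i += size
        else:
            memory.append(sequence[i])
            i += 1
    return memory

digits = list("415415415")   # nine raw digits: too many for 4 slots unchunked
memory = load_into_memory(digits, learn_chunks(digits))
print(len(digits), "items ->", len(memory), "slots")   # 9 items -> 3 slots
```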

    If you spend enough time using LLMs, you can’t help noticing how differently they work from the way you do.

    • Zos_Kia@lemmynsfw.com · 2 days ago

      I think the moat is that when a human is born and their world model starts “training”, it’s already pre-trained by millions of years of evolution. Instead of starting from random weights like any artificial neural network, it starts with usable stuff, lessons from scenarios it may never encounter but will nevertheless gain wisdom from.
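
      A rough sketch of that analogy in code (assuming PyTorch; the network shape and the “ancestral_weights.pt” checkpoint are hypothetical): the only difference between the two models is whether they start from random weights or inherited ones.

```python
# Analogy only: "evolution as pre-training". Both networks share the same
# architecture; they differ only in where their weights start. The checkpoint
# file name is hypothetical.
import torch
import torch.nn as nn

def make_brain():
    return nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))

# Starting from random weights: knows nothing until trained on its own data.
scratch_net = make_brain()

# Starting from inherited weights: already shaped by "ancestral" experience,
# then fine-tuned on the individual's own lifetime of data.
evolved_net = make_brain()
evolved_net.load_state_dict(torch.load("ancestral_weights.pt"))  # hypothetical checkpoint
```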

    • uncouple9831@lemmy.zip · 2 days ago

      I don’t spend time working with LLMs. I’d agree we have additional features: for example, while current computers can only guess, we can guess and check in a meaningful way. But that’s not what the meme was about. I would argue the meme was barely about anything other than “ai bad, me smort”. Ironic, since an LLM could probably make a better one even if it “doesn’t understand”, whatever understanding is.
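
      A minimal sketch of that guess-vs-guess-and-check distinction (the propose() stub and the toy factoring task are made up for illustration):

```python
# Sketch of "guess" vs. "guess and check": propose() just produces candidates,
# while check() verifies them against something concrete (a toy factoring task
# invented for this example). Only guesses that survive checking are kept.
import random

def propose():
    """Blind guessing: pick a candidate factor of 91 at random."""
    a = random.randint(2, 90)
    return a, 91 // a

def check(a, b):
    """Meaningful verification: does the guess actually hold up?"""
    return a > 1 and b > 1 and a * b == 91

def guess_and_check(max_tries=1000):
    for _ in range(max_tries):
        a, b = propose()
        if check(a, b):
            return a, b
    return None

print(guess_and_check())   # eventually (7, 13) or (13, 7)
```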