• buddascrayon@lemmy.world
    2 days ago

    This is actually the reason it will never become general AI: they’re not training it with logic, they’re training it with gobbledygook from the internet.

    • kkj@lemmy.dbzer0.com
      2 days ago

      It can’t understand logic anyway. It can only regurgitate its training material. No amount of training will make an LLM sapient.

        • edible_funk@sh.itjust.works
          2 days ago

          Math, physics, and the fundamental design limitations of LLMs in general. If we’re ever gonna actually develop an AGI, it’ll come about along a completely different pathway than LLMs and algorithmic generative “AI”.

        • kkj@lemmy.dbzer0.com
          2 days ago

          Based on what LLMs are. They predict token (usually word) probability. They can’t think, they can’t understand, they can’t question things. If you ask one for a seahorse emoji, it has a seizure instead of just telling you that no such emoji exists.
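          To make the "predict token probability" point concrete, here's a minimal toy sketch. A real LLM learns these probabilities across billions of parameters; the hypothetical bigram table below just stands in for that, to show that generation is repeated next-token lookup, not reasoning:

          ```python
          # Toy next-token predictor: a hypothetical bigram table mapping each
          # token to the probabilities of possible next tokens. A real LLM's
          # learned weights play this role at a vastly larger scale.
          probs = {
              "the": {"cat": 0.6, "dog": 0.4},
              "cat": {"sat": 0.7, "ran": 0.3},
              "sat": {"down": 1.0},
          }

          def generate(token, steps):
              out = [token]
              for _ in range(steps):
                  nxt = probs.get(out[-1])
                  if nxt is None:
                      break  # no continuation known for this token
                  # Greedy decoding: pick the most probable next token.
                  out.append(max(nxt, key=nxt.get))
              return " ".join(out)

          print(generate("the", 3))  # "the cat sat down"
          ```

          The model never checks whether "the cat sat down" is true or sensible; it only follows the probability table, which is the commenter's point about prediction without understanding.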