• douglasg14b@lemmy.world
    2 days ago

    I mean sure, yeah, it’s not real now.

    Does that mean it will never be real? No, absolutely not. It’s not theoretically impossible. It’s quite practically possible, and we inch that way slowly, bit by bit, every year.

    It’s like saying self-driving cars were impossible in the '90s. They weren’t impossible. There was just no solution for them then; nothing about them made them impossible, just the technology of the day. And look at today: we have actual limited self-driving capabilities, and completely autonomous driverless vehicles in certain geographies.

    It’s definitely going to happen. It’s just not happening right now.

    • AnyOldName3@lemmy.world
      1 day ago

      AGI being possible (potentially even inevitable) doesn’t mean that AGI based on LLMs is possible, and it’s LLMs that investors have bet on. It’s been pretty obvious for a while that certain problems that LLMs have aren’t getting better as models get larger, so there are no grounds to expect that just making models larger is the answer to AGI. It’s pretty reasonable to extrapolate that to say LLM-based AGI is impossible, and that’s what the article’s discussing.

      • douglasg14b@lemmy.world
        3 hours ago

        I very specifically did not mention LLMs. I even called out that our current technology is not there yet, and LLMs are current technology.

        The argument in this thread was about AGI being possible or impossible, not necessarily about the article’s statement that LLM-based AGI isn’t possible, which is a pretty obvious and almost unnecessary statement.

        It’s like saying cars with tires and no airfoil surfaces aren’t going to fly. Yeah no shit.

        A fancy text prediction and marginal reasoning engine isn’t going to make AGI. By no means does that make AGI impossible, though, since the concept of AGI is not tied to LLMs’ capabilities.

        • AnyOldName3@lemmy.world
          2 hours ago

          You not mentioning LLMs doesn’t mean the post you were replying to wasn’t talking about LLM-based AGI. If someone responds to an article about the obvious improbability of LLM-based AGI with a comment about the obviously make-believe genie, the only obviously make-believe genie they could be referring to is the one from the article. If they’re referring to something outside the article, there’s nothing more to suggest it’s non-LLM-based AGI than there is Robin Williams’ character from Aladdin.

      • douglasg14b@lemmy.world
        3 hours ago

        This is the kind of intelligent conversation I left Reddit for lack of. Happy to see that Lemmy is picking up the slack.