• Dangerhart@lemmy.zip · 23 hours ago

    It seems like you are implying that models will follow Moore’s law, but as someone working on “agents” I don’t see that happening. There is a limit to how much can be encoded while still producing things that look like coherent responses. Where we would get a reliably exponential supply of training data is another issue. We may get “AI”, but it isn’t going to be based on LLMs.

    • iopq@lemmy.world · 20 hours ago

      You can’t predict how the next twenty years of research will improve on the current techniques, because we haven’t done the research yet.

      Is it going to be specialized agents? Because you don’t need a lot of data to do one task well. Or maybe it is a lot of data, but you keep getting more of it (robot movement? stock market data?).

      • Dangerhart@lemmy.zip · 19 minutes ago

        We do already know about model collapse, though: GenAI is essentially eating its own training data. And we do know that you need a TON of data to do even one thing well. Even then, it only does well on things that strongly match the training data.
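
        For anyone curious what that looks like mechanically, here is a toy sketch (it has nothing to do with any real training pipeline, it just fits a Gaussian to its own samples over and over, with made-up numbers):

        ```python
        import numpy as np

        # Toy sketch of "model collapse": each generation is fit only to samples
        # produced by the previous generation. The "model" here is just a Gaussian
        # (mean, std), and the numbers are purely illustrative.
        rng = np.random.default_rng(42)

        mean, std = 0.0, 1.0      # generation 0: the real data distribution
        sample_size = 20          # deliberately tiny so the effect shows up fast

        for generation in range(1, 51):
            synthetic = rng.normal(loc=mean, scale=std, size=sample_size)
            mean, std = synthetic.mean(), synthetic.std()   # "retrain" on synthetic output
            if generation % 10 == 0:
                print(f"gen {generation:2d}: mean={mean:+.3f} std={std:.3f}")

        # On average the spread shrinks each generation, so the tails of the
        # distribution vanish first and the model converges on a narrow,
        # repetitive slice of what the original data looked like.
        ```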

        Most people throwing around the word “agents” have no idea what they mean versus what the people building and promoting them mean. Agents have been around for decades, but what most people are building is just GenAI used for natural language processing to call scripted Python flows. The only way to make them look reliably coherent is to remove as much responsibility from the LLM as possible. Multi-agent systems just compound the errors. The current best practice for building agents is “don’t use an LLM; if you do, don’t build multiple.” We will never get beyond the current techniques essentially being seeded random generators, because that’s what they are intended to be.
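
        To make the “scripted Python flows” point concrete, here is a rough sketch of that pattern. Every name in it is hypothetical and `call_llm` is just a placeholder; the idea is that the model only picks a label from a closed set, and everything that actually runs is deterministic code:

        ```python
        # Rough sketch of the pattern described above; all names are made up.
        # The LLM's only job is to map a request onto a closed set of labels;
        # the work that actually runs is plain, scripted Python.

        def call_llm(prompt: str) -> str:
            """Placeholder for whatever model client you actually use."""
            raise NotImplementedError("wire up your own model call here")

        # Deterministic, tested flows -- the LLM never touches these.
        def refund_order(order_id: str) -> str:
            return f"refund issued for {order_id}"

        def check_status(order_id: str) -> str:
            return f"order {order_id} is in transit"

        FLOWS = {"refund": refund_order, "status": check_status}

        def handle(user_message: str, order_id: str) -> str:
            # Ask the model for exactly one word from a closed vocabulary.
            intent = call_llm(
                "Answer with exactly one word, 'refund' or 'status', for this "
                f"request: {user_message}"
            ).strip().lower()
            # Validate the answer and fail safe if it doesn't match.
            flow = FLOWS.get(intent)
            if flow is None:
                return "Sorry, I couldn't classify that request."
            return flow(order_id)
        ```

        The less the model is allowed to decide, the less there is for it to get wrong; the flows stay debuggable, and the LLM is reduced to a (seeded) text classifier.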