• jaykrown@lemmy.world · 18 hours ago

    Happy Linux user over here. Free, open-source AI models are becoming much more powerful, and things like “Apple Intelligence” and “Co-Pilot” will be looked back on the way Netscape is now.

    • Banana@sh.itjust.works · 17 hours ago

      Getting a free older computer from my work soon because it’s too old to “upgrade” to Windows 11, so I’ll be turning it into a Linux machine. Pretty dang psyched, mostly for all the free software!

      • jaykrown@lemmy.world · 6 hours ago

        Awesome. Never get discouraged; there’s a lot you can learn and do by switching to Linux. I personally use Linux Mint for everything, and I’ve never had any major issues. A lot of things work almost exactly the same as on Windows.

        • Banana@sh.itjust.works · 6 hours ago

          I’ve heard of this distro, and my brother was telling me about a way to try more distros without having to partition? (Idk all of this yet, still have a lot of work to do)

    • Jesus_666@lemmy.world · 16 hours ago

      Gotta be honest, though, a locally hosted 70B model with basic RAG functionality isn’t exactly playing in the same league as the market leaders, which can be bigger by two to three orders of magnitude. And a model that size is already around the limit of what a beefy gaming PC can do with reasonable performance. We’re unlikely to ever beat the big players on quality with local models.
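      A quick back-of-the-envelope calculation shows why a 70B model sits near that limit. The bytes-per-parameter figures below are rough assumptions for common quantization levels, not benchmarks:

```python
# Back-of-envelope memory estimate for hosting an LLM locally.
# Rule of thumb: weight memory ≈ parameter count × bytes per parameter.
# The quantization levels and byte costs below are illustrative assumptions.

def weight_memory_gb(params_billion: float, bytes_per_param: float) -> float:
    """Approximate memory needed just for the model weights, in GB."""
    return params_billion * 1e9 * bytes_per_param / 1e9

for name, bpp in [("fp16", 2.0), ("8-bit", 1.0), ("4-bit", 0.5)]:
    print(f"70B @ {name}: ~{weight_memory_gb(70, bpp):.0f} GB")
# 70B @ fp16: ~140 GB
# 70B @ 8-bit: ~70 GB
# 70B @ 4-bit: ~35 GB
```

      Even at 4-bit quantization the weights alone are around 35 GB, before the KV cache and activations, which is why a 24 GB gaming GPU already needs CPU/RAM offloading to run a model that size at all.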

      What might happen is that the market collapses, the big players all go bankrupt, further LLM development ceases, and locally hosted Qwen3-80B will be the pinnacle of available text generation for the next thirty years.

      • jaykrown@lemmy.world · 6 hours ago

        I wrote “they are becoming much more powerful”. The point isn’t to beat the big players; the point is that we’ll be able to run models that are capable enough for what we need, not the smartest models available. I agree with your last sentence: that may very well be what happens, except that by then we’ll have Qwen4, and it’ll be even more efficient and more powerful.

        • Jesus_666@lemmy.world · 6 hours ago (edited)

          I predict incremental quality increases. Qwen4 will probably be a somewhat better Qwen3 (and a dud if we’re unlucky). I do agree that it’ll probably come out; there’s not enough life left in this AI boom for a Qwen5, though.

          The biggest change will probably come from figuring out where LLM use will actually benefit us. Right now the industry seems to answer that with “everywhere” and concludes that it’s prudent to spend money equivalent to the GDP of an industrial nation on compute-only data centers.

          For example, I expect the use case for coding to be more like “autocomplete a code block based on known patterns” rather than “build a public-facing web application from a prompt”.