• DarkThoughts@fedia.io · 9 months ago

    I tried llamafile for text gen too, but I couldn’t get ROCm to work properly with it so it would run on my GPU without building it myself, which I’m really not into. And CPU text gen is waaaaaay too slow for anything. A Mixtral response took around ~250 seconds for ~1k context tokens; Mistral was about 52 seconds or somewhere around that.
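    For reference, this is roughly what I was trying to get it to do, just scripted through Python instead of the shell. The filename is only a placeholder, and the --gpu / -ngl flags are the ones llamafile is supposed to accept for picking the backend and offloading layers, so treat this as a sketch rather than a working recipe:

    ```python
    # Sketch: launching a llamafile with GPU offload requested.
    # Placeholder filename; flags assumed from llamafile's documentation.
    import subprocess

    cmd = [
        "./mistral-7b-instruct.Q4_K_M.llamafile",  # placeholder model file
        "--gpu", "amd",   # ask for the ROCm/HIP backend explicitly
        "-ngl", "999",    # offload as many layers as will fit on the GPU
        "-p", "Write a one sentence greeting.",
    ]

    # If ROCm isn't picked up, it quietly falls back to CPU,
    # which is where the multi-minute response times come from.
    subprocess.run(cmd, check=True)
    ```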

    https://github.com/Mozilla-Ocho/llamafile is the project. Mixtral is definitely beefy; Mistral is quite a bit faster, and there are a few even smaller prebuilt ones. But the smaller you go, the less complex the responses get. I think llamafile is a step in the right direction, though it’s still not a good out-of-the-box experience yet. At least I got farther with it than with oobabooga (the usual recommendation for SillyTavern), which would just crash whenever it generated anything, without even giving me an error.
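    If you do get a llamafile running in server mode, you can at least script against it. As far as I can tell it exposes an OpenAI-style chat endpoint on localhost:8080 by default; adjust the address if your setup differs:

    ```python
    # Sketch: querying a running llamafile server.
    # Endpoint and port assumed from the defaults.
    import requests

    resp = requests.post(
        "http://localhost:8080/v1/chat/completions",
        json={
            "model": "local",  # it serves whatever model it was started with
            "messages": [{"role": "user", "content": "Say hello in one sentence."}],
            "max_tokens": 128,
        },
        timeout=600,  # CPU generation can take minutes, so keep this generous
    )
    print(resp.json()["choices"][0]["message"]["content"])
    ```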

      • DarkThoughts@fedia.io · 9 months ago

        Did you miss the first part where I explained that I couldn’t get it to run on my GPU? I only have a 6650 XT anyway, but even that would be significantly faster than my CPU. By how much, I can’t say exactly without trying it, but I suspect that with longer chats, and consequently larger context sizes, it would still be too slow to be really usable, unless you’re okay waiting ages for a response.
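        Rough back-of-the-envelope math for why I suspect that. The throughput numbers here are just assumptions for illustration, not measurements from my machine:

        ```python
        # Assumed CPU speeds, purely for illustration.
        PROMPT_TOKENS_PER_SEC = 20.0  # prompt (context) processing
        GEN_TOKENS_PER_SEC = 4.0      # token generation
        RESPONSE_TOKENS = 300

        # Each turn re-processes the whole context, so waits grow with chat length.
        for context in (1_000, 4_000, 8_000):
            wait = context / PROMPT_TOKENS_PER_SEC + RESPONSE_TOKENS / GEN_TOKENS_PER_SEC
            print(f"~{context} context tokens -> ~{wait / 60:.1f} min per response")
        ```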

        • Flumpkin@slrpnk.net · 9 months ago

          Sorry, I’m just curious in general how fast these local LLMs are. Maybe someone else can give some rough info.