Could you elaborate a little on your setup? Sounds interesting
I have one server with a cheap MI50 Instinct. Those go for really cheap on eBay, and the HBM2 gives them really good memory bandwidth. It worked fine with ollama until recently, when they dropped support for some weird reason, but a lot of other software still works fine. Older models also still run fine on old ollama.
The other one runs an RTX 3060 12GB. I use this for models that only work on NVIDIA, like Whisper speech recognition.
I tend to use the same models for everything so I don’t have the delay of loading a model. Mainly uncensored ones, so they don’t choke when someone says something slightly sexual. I’m in some very open communities, so standard models are pretty useless with all their prudishness.
For the frontend I use Open WebUI, and I also run scripts directly against the models.
This is the way.
…Except for ollama. It’s starting to enshittify and I would not recommend it.
Agreed. The way they dumped support for my card in some update with only a vague reason also irked me (they said they need a newer ROCm, but my card works fine with all current ROCm versions).
Also, the way they’re now trying to sell cloud AI means their original local service is in competition with the product they sell.
I’m looking to use something new but I don’t know what yet.
I’ll save you the searching!
For max speed when making parallel calls, vLLM: https://hub.docker.com/r/btbtyler09/vllm-rocm-gcn5
Generally, the built-in llama.cpp server is the best for GGUF models! It has a great built-in web UI as well.
For a more one-click, RP-focused UI and API server, koboldcpp-rocm is sublime: https://github.com/YellowRoseCx/koboldcpp-rocm/
If you are running big MoE models that need some CPU offloading, check out ik_llama.cpp. It’s specifically optimized for MoE hybrid inference, but the caveat is that its Vulkan backend isn’t well tested. They will fix issues if you find any, though: https://github.com/ikawrakow/ik_llama.cpp/
mlc-llm also has a Vulkan runtime, but it’s one of the more… exotic LLM backends out there. I’d try the other ones first.
Thank you so much!! I have been putting it off because what I have works, but a time will soon come when I’ll want to test new models.
I’m looking for a server, but not many parallel calls, because I would like to use as much context as I can. When you make space for e.g. 4 parallel slots, the context is split and each slot only gets a quarter of it. With Llama 3.1 8B I managed to get 47104 tokens of context on the 16GB card (though actually using that much is pretty slow). That’s with the KV cache quantized to 8-bit, too. But sometimes I just need that much.
I’ve never tried llama.cpp directly, thanks for the tip!
Kobold sounds good too, but I have some scripts talking to the model directly. I’ll read up on it to see if it can do that. I don’t have time now, but I’ll do it in the coming days. Thank you!
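For what it’s worth, the llama.cpp server and koboldcpp both expose an OpenAI-compatible HTTP API, so existing scripts usually only need the base URL swapped. A minimal sketch, assuming a llama-server on its default port (the port and model name below are placeholders, not something from this thread):

```python
# Minimal sketch: call a llama.cpp / koboldcpp OpenAI-compatible endpoint.
# Base URL, port, and model name are assumptions; adjust for your own setup.
import requests

BASE_URL = "http://localhost:8080/v1"  # llama-server default; koboldcpp listens on its own port

resp = requests.post(
    f"{BASE_URL}/chat/completions",
    json={
        "model": "local-model",  # most local servers ignore or loosely match this field
        "messages": [{"role": "user", "content": "Say hello in one sentence."}],
        "max_tokens": 64,
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```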
vLLM is a bit better with parallelization. All the KV cache sits in a single “pool”, and it uses as many slots as will fit. If it gets a bunch of short requests, it does many in parallel. If it gets a long-context request, it kinda just does that one.
You still have to specify a maximum context though, and it is best to set that as low as possible.
…The catch is that it’s quite VRAM-inefficient. But it can split over multiple cards reasonably well, better than llama.cpp can, depending on your PCIe speeds.
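A rough sketch of those knobs via vLLM’s offline Python API, in case it helps (the model name and numbers are placeholders, not recommendations for the MI50):

```python
# Rough sketch of the vLLM settings discussed above; model name and values are placeholders.
from vllm import LLM, SamplingParams

llm = LLM(
    model="meta-llama/Llama-3.1-8B-Instruct",  # any HF model id or local path
    max_model_len=16384,            # the maximum context you have to declare up front
    gpu_memory_utilization=0.90,    # how much VRAM vLLM grabs for weights + the KV pool
    tensor_parallel_size=1,         # >1 to split the model across multiple cards
    # kv_cache_dtype="fp8",         # optional KV shrink, only on hardware/builds that support it
)

outputs = llm.generate(["Write a haiku about HBM2."], SamplingParams(max_tokens=64))
print(outputs[0].outputs[0].text)
```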
You might try TabbyAPI with exl2s as well. It’s very good with parallel calls, though I’m not sure how well it supports MI50s.
Another thing to tweak is batch size. If you are actually making a bunch of 47K-context calls, you can increase the prompt-processing batch size a lot to load the MI50 better and get it to process the prompt faster.
EDIT: Also, now that I think about it, I’m pretty sure ollama is really dumb with parallelization. Does it even support paged attention batching?
The llama.cpp server should be much better, e.g. it uses less VRAM for each of the “slots” it can utilize.
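To make that concrete, here’s a rough sketch of how those knobs map onto llama-server flags, wrapped in a tiny Python launcher (the model path and numbers are placeholders, and the flag names assume a reasonably recent llama.cpp build):

```python
# Rough sketch: one slot so a single request gets the whole context, an 8-bit K cache,
# and a larger prompt-processing batch. Model path and values are placeholders;
# double-check the flags against your llama.cpp build.
import subprocess

subprocess.run([
    "llama-server",
    "-m", "llama-3.1-8b-instruct-q4_k_m.gguf",  # hypothetical model file
    "-ngl", "99",                 # offload all layers to the GPU
    "-c", "47104",                # total context; with -np 4 each slot would only get 11776
    "-np", "1",                   # one slot = no splitting, full context for one request
    "--cache-type-k", "q8_0",     # 8-bit K cache (quantizing V too usually needs flash attention)
    "-b", "2048", "-ub", "2048",  # bigger prompt-processing batches to keep the GPU loaded
    "--host", "0.0.0.0", "--port", "8080",
])  # blocks while the server runs
```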
Bloefz has a great setup. Used MI50s are cheap.
An RTX 3090 + a cheap HEDT/server CPU is another popular homelab config. Newer models run reasonably quickly on it, with the attention/dense layers on the GPU and the sparse parts on the CPU.
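For reference, the usual way to get that split in llama.cpp/ik_llama.cpp is a tensor-override rule that pins the routed expert tensors to the CPU. A rough sketch under that assumption (the model file and regex are illustrative, so check the tensor names for your model):

```python
# Rough sketch of MoE hybrid offload: attention/dense layers on the GPU, routed expert
# FFN tensors kept in system RAM via --override-tensor. Model path and regex are illustrative.
import subprocess

subprocess.run([
    "llama-server",
    "-m", "some-big-moe-model-q4_k_m.gguf",  # hypothetical model file
    "-ngl", "99",              # offload everything that isn't overridden below
    "-ot", "ffn_.*_exps=CPU",  # keep the expert FFN tensors on the CPU
    "-c", "16384",
])
```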