“I’ve been saving for months to get the Corsair Dominator 64GB CL30 kit,” one beleaguered PC builder wrote on Reddit. “It was about $280 when I looked,” said u/RaidriarT. “Fast forward to today on PCPartPicker, and they want $547 for the same kit? A nearly 100% increase in a couple of months?”

Plenty of folks do AMD. A popular homelab setup is 32GB AMD MI50 GPUs, which are quite cheap on eBay. Even Intel is fine these days!
But what’s your setup, precisely? CPU, RAM, and GPU.
Looks like I’m running an AMD Ryzen 5 2600 CPU, AMD Radeon RX 570 GPU, and 32GB RAM
Mmmmm… I would wait a few days, and try a GGUF quantization of Kimi Linear once it’s better supported: https://huggingface.co/moonshotai/Kimi-Linear-48B-A3B-Instruct
Otherwise you can mess with Qwen 3 VL now, in the native llama.cpp UI. But be aware that Qwen is pretty sycophantic like ChatGPT: https://huggingface.co/unsloth/Qwen3-VL-30B-A3B-Instruct-GGUF/blob/main/Qwen3-VL-30B-A3B-Instruct-UD-Q4_K_XL.gguf
If you’re interested, I can work out an optimal launch command. But to be blunt, with that setup, you’re kinda better off using free LLM APIs with a local chat UI.
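For context, a launch command for that Qwen quant would look something like the sketch below, using llama.cpp’s `llama-server` (which serves the native web UI mentioned above). The `--hf-repo`/`--hf-file` flags pull the GGUF straight from Hugging Face; the context size and `-ngl` (GPU offload layers) values here are assumptions that would need tuning down for a card as small as an RX 570.

```shell
# Hypothetical sketch: serve the Qwen3-VL Q4_K_XL GGUF with llama.cpp's
# built-in web UI. Flags are illustrative; -c (context length) and
# -ngl (layers offloaded to GPU) must be tuned to available VRAM --
# an RX 570 has far less than this 30B model wants, hence the API advice.
llama-server \
  --hf-repo unsloth/Qwen3-VL-30B-A3B-Instruct-GGUF \
  --hf-file Qwen3-VL-30B-A3B-Instruct-UD-Q4_K_XL.gguf \
  -c 8192 \
  -ngl 20 \
  --host 127.0.0.1 --port 8080
```

Once running, the chat UI is reachable at http://127.0.0.1:8080 in a browser.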
Thanks for the info. I would like to run locally if possible, but I’m not opposed to using API and just limiting what I surface.
I have an MI50/7900 XTX gaming/AI setup at home, which I use for learning and testing out different models. Happy to answer questions.