I wouldn’t say it’s easy to get started…
You have to know about open source AI models, then what fine tuning is, then where to get the software that actually runs the models, and finally which models and runtimes work with your graphics card, whether it's AMD or Nvidia.


Actually that's not 100% true. You can offload a portion of the model into system RAM to save VRAM, which means you don't need to drop money on a crazy GPU to run a decent model, it just takes a bit longer to generate. I personally can wait a minute for a detailed answer instead of needing it in 5 seconds, but of course YMMV.
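
For anyone curious what that offloading looks like in practice, here's a rough sketch using llama-cpp-python, which is just one of several runtimes that can split a model between GPU and CPU. The model path and layer count below are placeholders you'd tune to your own hardware, not a recommendation:

```python
# Minimal sketch: keep some layers in VRAM, leave the rest in system RAM.
# Assumes llama-cpp-python is installed and you have a GGUF model file on disk.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/example.Q4_K_M.gguf",  # placeholder path to a quantized GGUF model
    n_gpu_layers=20,   # layers offloaded to the GPU; everything else runs from RAM on the CPU
    n_ctx=4096,        # context window size
)

out = llm("Explain what fine tuning is in one paragraph.", max_tokens=256)
print(out["choices"][0]["text"])
```

The tradeoff is exactly what you'd expect: the fewer layers you can fit in VRAM, the slower generation gets, but it still works.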