Hi everyone,
I'm looking for the best way to keep learning about local LLMs, and for a setup that lets me load models like Gemma 3 easily on a graphics card. My goal is to experiment with an MCP server and different local models.
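For context, this is the kind of minimal MCP experiment I have in mind: a tiny tool server, sketched with the official Python `mcp` SDK (the server name and `add` tool are just placeholders I made up):

```python
# Minimal MCP tool server sketch using the official Python SDK
# (pip install mcp). Server and tool names are placeholders.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-server")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers and return the sum."""
    return a + b

if __name__ == "__main__":
    # Runs over stdio by default, so a local MCP client can attach to it.
    mcp.run()
```

The idea is to point a local model at a server like this and see how well it handles tool calls.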
What do you think about the SAPPHIRE AMD Radeon Pro W7800 Solo 48 GB? Does anyone use it? Is ROCm a problem?
I've already used ROCm to run tiny models on my iGPU (integrated graphics), and it worked fine, though slowly, which is expected.
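For reference, this is roughly how I sanity-checked that the iGPU was actually being used (a sketch assuming a ROCm build of PyTorch, which exposes AMD GPUs through the usual `torch.cuda` API):

```python
# Quick ROCm sanity check: PyTorch's ROCm builds reuse the
# torch.cuda namespace, so this works on AMD GPUs too.
import torch

if torch.cuda.is_available():
    device = torch.device("cuda")
    print("GPU found:", torch.cuda.get_device_name(0))
    # Tiny matmul on the GPU to confirm the device really executes work.
    x = torch.randn(1024, 1024, device=device)
    print((x @ x).sum().item())
else:
    print("No ROCm-visible GPU; falling back to CPU.")
```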
Thanks in advance for any advice; I'm just starting out with this new passion.