r/LocalLLaMA
Best local model for Claude Code? RTX 3060
Hello all, I’ve been running Qwen 3.5 9B Q4 with 262k context via llama.cpp as a Claude Code backend for a while now. Is there a model that better complements a local agentic coding setup? Or is there a better harness than Claude Code?
System RAM: 16GB
VRAM: 12GB
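For reference, a minimal llama-server sketch along the lines of the setup described above (the model filename and context size below are placeholders, not the OP's actual files; on 12GB VRAM a 262k context is unlikely to fit in the KV cache, so a smaller `-c` is assumed here):

```shell
# Sketch: serve a quantized GGUF model with llama.cpp's llama-server,
# exposing an OpenAI-compatible HTTP API on localhost.
# -ngl 99 offloads as many layers as fit onto the GPU;
# -c sets the context window (kept modest to fit KV cache in 12GB VRAM).
llama-server \
  -m ./models/qwen-9b-q4_k_m.gguf \
  -c 32768 \
  -ngl 99 \
  --port 8080
```

How Claude Code (or another harness) is pointed at that local endpoint varies by tool and usually involves a proxy or base-URL override, so check the harness's own docs for that part.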
u/CatSweaty4883 — 2 hours ago