Local LLM Model that actually produces quality code.
I am still looking for something that can actually work with codebases, i.e. not just single-file apps or single-file bash scripts, but something where I can give it access to my codebase, give it a spec for a new feature, hit a button, and then two hours later get a working feature with little or no bugs.
Does that exist yet? Money is no object; I am purely looking for something that actually works (and is local).
I have the money, I just need to know it works before I shell out the dollars for it.
I've tried Qwen 3.6 27b on a 32GB RTX 4500 PRO on a remote pod, but the pod keeps going down.
Does anyone know of a reliable one I can test on?
- - - - - - -
EDIT 1: Budget <= $100k.
EDIT 2 @ 9:25pm EST
I was finally able to get a rented one working with an RTX 5090 32GB + Qwen 3.6 27b.
While it's certainly VERY helpful, it's no SWE replacement (by a long shot). However, I am easily 3-10x faster on coding tasks, so it seems well worth purchasing the card for myself. Obviously I won't be using it 24/7, so I might rent out the compute to others when I'm not using it or something. Anyone know a place in Toronto where I can buy one of these things on the cheap?