Codex tokens are being nerfed next month. What local model should I pair Codex with for menial tasks like GitHub stuff and small code edits? I have a 5090, 64 GB DDR5, and a 9950X3D. Is it even worth running local models on this hardware? Anything actually worth using that isn't a gimmick?
The idea is to hand the more menial and simple tasks off to a local model and let Codex do the heavy planning and large-codebase work.
I asked ChatGPT and it said there's really not much out there that integrates cleanly. Can anyone say otherwise?
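For what it's worth, one common integration path: most local runners (Ollama, llama.cpp server, LM Studio) expose an OpenAI-compatible HTTP API, and Codex CLI can be pointed at a custom provider through its config file. A minimal sketch, assuming Ollama is serving on its default port and your Codex CLI version supports `model_providers` in `~/.codex/config.toml` — the provider id and model name below are placeholders, not recommendations:

```toml
# ~/.codex/config.toml -- sketch; keys assume current Codex CLI config format
model = "qwen2.5-coder:32b"   # whatever model you've pulled into Ollama
model_provider = "ollama"

[model_providers.ollama]
name = "Ollama"
base_url = "http://localhost:11434/v1"  # Ollama's OpenAI-compatible endpoint
```

With something like this you could route the small edits locally and keep the hosted model for the heavy planning work, though whether a local model on a single 5090 keeps up is exactly the open question here.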