u/Adorable_Weakness_39

I got tired of Claude Code bills so I built the Local version that actually works (open-source)

It auto-configures llama.cpp for whatever hardware you have, and gives you the exact Claude Code TUI workflow, persistent memory, and an approval-gated shell.

Proof it’s good: two blind tests I just dropped (Qwen3.6-27B local vs Claude Opus 4.7). Most people can’t tell which is which.

Links:
• GitHub: https://github.com/L-Forster/open-jet
• Blind tests:

https://x.com/FlouisLF/status/2050382582764679292

https://x.com/FlouisLF/status/2050911177408987481
• Discord for hardware presets: https://discord.gg/pspKHtExSa

Currently seeking vibe coders — the first 10 to share what they make with it on the Discord get featured permanently in the repo README!

u/Adorable_Weakness_39 — 4 days ago

Built an all-in-one Coding Agent for Local LLMs

There's been huge interest in local LLMs recently thanks to the leap in their capabilities: Qwen 27B is not far behind the best models from last year (see the image) while still running on consumer hardware.

https://preview.redd.it/11xhf30sjb0h1.png?width=1112&format=png&auto=webp&s=2375f308299ec9dfaf1dd16830af971a6d10b413

That led me to a real problem: people struggle to set up their local LLMs, and performance is left on the table by bad default settings. The default Ollama config gave me 18 tok/s on the same hardware where I get 70 tok/s. Models also change every month, and unless you're tracking every new model and inference optimisation, you get left behind.
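To make the "bad defaults" point concrete, here is the kind of llama.cpp server invocation where most of that throughput hides. The flag values below are illustrative for a ~24 GB GPU, not OpenJet's actual output; leaving `--n-gpu-layers` at a low default is the classic way to end up at 18 tok/s instead of 70.

```shell
# Illustrative llama-server config (values are assumptions for a 24 GB card):
llama-server \
  -m qwen-27b.Q4_K_M.gguf \
  --n-gpu-layers 99 \      # offload all layers to the GPU; the big throughput lever
  --flash-attn \           # faster attention kernels on supported GPUs
  --ctx-size 8192 \        # context length; trades VRAM for longer sessions
  --threads 8 \            # CPU threads for any layers left on the CPU
  --batch-size 512         # prompt-processing batch size
```

The right values depend on VRAM, model size, and quantisation, which is exactly the per-hardware tuning the post is about.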

So I built OpenJet to combine the inference backend with a frontend coding agent (like Claude Code) in one local-first harness. The backend config is managed automatically according to your hardware, and the agent is designed specifically for running on your machine - no cloud API calls or expensive plans to manage.
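As a sketch of what "managed automatically according to your hardware" could mean in practice: pick llama.cpp settings from detected VRAM. Everything here (function name, layer-cost constant, thresholds) is a hypothetical illustration, not OpenJet's actual code.

```python
# Hypothetical heuristic: choose llama.cpp settings from available VRAM.
# The 0.45 GB/layer cost and 2 GB overhead are made-up illustrative numbers.

def pick_config(vram_gb: float, model_layers: int = 48,
                layer_cost_gb: float = 0.45, overhead_gb: float = 2.0) -> dict:
    """Offload as many transformer layers to the GPU as the VRAM budget allows."""
    budget = max(vram_gb - overhead_gb, 0.0)  # reserve VRAM for KV cache etc.
    n_gpu_layers = min(model_layers, int(budget / layer_cost_gb))
    return {
        "n_gpu_layers": n_gpu_layers,                # maps to --n-gpu-layers
        "flash_attn": vram_gb >= 8,                  # maps to --flash-attn
        "ctx_size": 8192 if vram_gb >= 16 else 4096, # maps to --ctx-size
    }

if __name__ == "__main__":
    # An RTX 3090 has 24 GB of VRAM.
    print(pick_config(24.0))
    # → {'n_gpu_layers': 48, 'flash_attn': True, 'ctx_size': 8192}
```

The same idea generalises to picking a quantisation level and batch size, so the user never has to hand-tune flags per model.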

https://preview.redd.it/wr54dlgtkb0h1.png?width=961&format=png&auto=webp&s=bc904c4ddbebe01546b236ceeededb14e6f67c63

I've tested it on my RTX 3090 and got 70 tok/s for Qwen3.6-27B.

If you want to give it a go or join the Discord community, or just have a look, here's the link:

https://openjet.dev/

I hope to see what you build.
