u/Euphoric-Doughnut538

Custom JCode + Swarm Rust-native intelligent operation system, and much more

I built out a custom version of JCode with full swarm integration, back-end intelligence, GitHub support, an updated side panel, and much more. It's language-model agnostic. I will probably be releasing this in the coming weeks, if not days.

I was able to run 60 agents fully active while compiling Rust crates on a MacBook M5 with 24 GB of RAM. I have since introduced a CPU/GPU/RAM/network throttle capability to the swarm that checks CPU/GPU/network load prior to launching additional agents and coordinators.

I was using Claude Code as the orchestrating agent and DeepSeek as the coding agent due to the bandwidth throughput and tokens per second. I have since made it dynamically select models based on their capabilities, token usage, and general cost, so there is an LLM router that routes the workload to the cheapest and most efficient model where appropriate.

Additionally, I embedded Graphify Rust into the system as tooling, along with a few other bells and whistles: BM25X recoded in Rust, Sigmap recoded in Rust, and MTP LX re-encoded in Rust with an embedding model right around 200 MB. There's also a QuinCoder model for accelerated coding with an MTP LX draft model on the front end, which is Gemma 4 paired with QuinCoder 3.6 fine-tuned, plus a primary model such as Codex or Claude for orchestration.

I'm currently running one draft model, one accelerated QuinCoder 3.6 coding model with MTP LX, and a memory model, which is Codex-based, running MLX for the draft and coding models. About 70 to 80 tokens per second when using the local model for coding.

This is a highly complex, highly integrated orchestrated system. I have burned about 500 million tokens getting to this point.
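To make the throttle idea concrete, here is a minimal Rust sketch of a resource gate that checks current load against caps before spawning another agent. All of the names (`ResourceSnapshot`, `Throttle`, `may_launch`) and the specific caps are illustrative assumptions, not the actual JCode API; a real implementation would pull live readings from something like the `sysinfo` crate.

```rust
// Hypothetical resource throttle for a swarm: before launching one more
// agent, compare a snapshot of CPU/RAM/network load against configured caps.
// Names and thresholds are illustrative only.

#[derive(Debug, Clone, Copy)]
struct ResourceSnapshot {
    cpu_pct: f64,  // current CPU utilization, 0.0..=100.0
    ram_pct: f64,  // current RAM utilization, 0.0..=100.0
    net_mbps: f64, // current network throughput
}

struct Throttle {
    max_cpu_pct: f64,
    max_ram_pct: f64,
    max_net_mbps: f64,
    max_agents: usize,
}

impl Throttle {
    /// Returns true only if every resource has headroom AND the agent
    /// count is below the hard cap (e.g. the 60 agents mentioned above).
    fn may_launch(&self, snap: ResourceSnapshot, active_agents: usize) -> bool {
        active_agents < self.max_agents
            && snap.cpu_pct < self.max_cpu_pct
            && snap.ram_pct < self.max_ram_pct
            && snap.net_mbps < self.max_net_mbps
    }
}

fn main() {
    let throttle = Throttle {
        max_cpu_pct: 85.0,
        max_ram_pct: 90.0,
        max_net_mbps: 500.0,
        max_agents: 60,
    };
    let busy = ResourceSnapshot { cpu_pct: 92.0, ram_pct: 70.0, net_mbps: 120.0 };
    let idle = ResourceSnapshot { cpu_pct: 35.0, ram_pct: 40.0, net_mbps: 10.0 };
    assert!(!throttle.may_launch(busy, 12)); // CPU over cap: hold back
    assert!(throttle.may_launch(idle, 12));  // plenty of headroom: spawn
    println!("throttle ok");
}
```

The key design choice is that the gate is checked per launch, so coordinators back off automatically as load rises instead of oversubscribing the machine.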
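The cost-based LLM router described above can be sketched as a capability filter followed by a cheapest-first pick. The model names, per-token prices, and capability flags here are made up for illustration and do not reflect the actual routing table.

```rust
// Hypothetical LLM router: among the models capable of the task,
// choose the one with the lowest cost. Figures are illustrative.

#[derive(Debug, Clone)]
struct Model {
    name: &'static str,
    cost_per_mtok: f64, // assumed USD per million tokens
    can_code: bool,
    can_orchestrate: bool,
}

/// Filter to models that satisfy the task's requirements, then take
/// the cheapest survivor. Returns None if nothing qualifies.
fn route(models: &[Model], needs_code: bool, needs_orchestration: bool) -> Option<&Model> {
    models
        .iter()
        .filter(|m| (!needs_code || m.can_code) && (!needs_orchestration || m.can_orchestrate))
        .min_by(|a, b| a.cost_per_mtok.partial_cmp(&b.cost_per_mtok).unwrap())
}

fn main() {
    let models = [
        Model { name: "local-draft", cost_per_mtok: 0.0, can_code: true, can_orchestrate: false },
        Model { name: "deepseek", cost_per_mtok: 0.3, can_code: true, can_orchestrate: false },
        Model { name: "claude", cost_per_mtok: 3.0, can_code: true, can_orchestrate: true },
    ];
    // Plain coding work: the free local model wins.
    assert_eq!(route(&models, true, false).unwrap().name, "local-draft");
    // Orchestration: only the premium model qualifies, so cost loses.
    assert_eq!(route(&models, false, true).unwrap().name, "claude");
    println!("router ok");
}
```

In a fuller version the requirements would likely include context length and tokens-per-second, but the shape stays the same: hard constraints first, cost as the tiebreaker.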

https://preview.redd.it/9tlqpoix600h1.png?width=3024&format=png&auto=webp&s=ccc225cfe6d8fcbe6c0f38f3d88ab93178aa18c9

https://preview.redd.it/m5vw0p9j600h1.png?width=3024&format=png&auto=webp&s=e4028441edc3c473026a6e4847765ff5fdc14282

https://preview.redd.it/q23v888e600h1.png?width=3024&format=png&auto=webp&s=e8f7bfd1528b893f4bca95b07637192aed165d12

u/Euphoric-Doughnut538 — 6 days ago
▲ 5 r/claude

Guys, this is just a reminder: if you think you have a novel idea, search GitHub before you do anything. It's always better to download an open-source repo than to reinvent the wheel burning your tokens. Frankly, at this point, MIT licensing or any other form of licensing is absolutely ridiculous, given how easily code is reproduced and re-engineered. Slow down your token burn and perform some research before you do anything; it'll save you a lot of tokens and time. Even if you just turn around and ask Claude Code or Codex to examine a repo for a blueprint or a spec sheet, you will save yourself some time and tokens. GitHub is your friend.

Frankly, comparing Claude Code to Codex, the token burn with Claude Code is significant, given their dynamic (I mean arbitrage) pricing.

Thank me later.

u/Euphoric-Doughnut538 — 15 days ago