
Custom JCode + Swarm: a Rust-native intelligent operating system, and much more
I built out a custom version of JCode with full swarm integration, back-end intelligence, GitHub support, an updated side panel, and much more. It is language-model agnostic, and I will probably be releasing it in the coming weeks, if not days. I was able to run 60 fully active agents while compiling Rust crates on a MacBook M5 with 24 GB of RAM.

I have since introduced a CPU/GPU/RAM/network throttle to the swarm that checks CPU, GPU, and network load before launching additional agents and coordinators. I was using Claude Code as the orchestrating agent and DeepSeek as the coding agent because of its bandwidth throughput and tokens per second. I have since made model selection dynamic, based on each model's capabilities, token usage, and general cost, so there is an LLM router that sends each workload to the cheapest, most efficient model where appropriate.

Additionally, I embedded Graphify (Rust) into the system as tooling, along with a few other bells and whistles: BM25X recoded in Rust, Sigmap recoded in Rust, and MTP LX re-encoded in Rust with an embedding model of right around 200 MB, plus a QuinCoder model for accelerated coding. MTP LX runs a draft model on the front end, which is Gemma 4 paired with a fine-tuned QuinCoder 3.6, and a primary model such as Codex or Claude for orchestration.

I'm currently running one draft model, one accelerated QuinCoder 3.6 coding model with MTP LX, and a memory model, which is Codex-based, with MLX serving the draft and coding models. I get about 70 to 80 tokens per second when using the local model for coding. This is a highly complex, highly integrated orchestrated system. I have burned about 500 million tokens getting to this point.
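The resource throttle described above could be sketched roughly as a gate that compares a load snapshot against configurable ceilings before the swarm spawns anything. This is a minimal illustration, not the actual implementation: `ResourceSnapshot`, `ThrottleLimits`, `can_launch_agent`, and all threshold values are hypothetical names and numbers.

```rust
// Hypothetical sketch of a swarm resource gate: before spawning another
// agent or coordinator, compare a snapshot of current load against
// configurable ceilings. All names and thresholds are illustrative.

#[derive(Debug, Clone, Copy)]
struct ResourceSnapshot {
    cpu_pct: f32,  // 0.0..=100.0
    gpu_pct: f32,
    ram_pct: f32,
    net_mbps: f32, // current network throughput
}

#[derive(Debug, Clone, Copy)]
struct ThrottleLimits {
    max_cpu_pct: f32,
    max_gpu_pct: f32,
    max_ram_pct: f32,
    max_net_mbps: f32,
}

/// Returns true only when every measured resource is below its ceiling,
/// so the swarm refuses to launch an agent on a saturated machine.
fn can_launch_agent(now: ResourceSnapshot, limits: ThrottleLimits) -> bool {
    now.cpu_pct < limits.max_cpu_pct
        && now.gpu_pct < limits.max_gpu_pct
        && now.ram_pct < limits.max_ram_pct
        && now.net_mbps < limits.max_net_mbps
}

fn main() {
    let limits = ThrottleLimits {
        max_cpu_pct: 85.0,
        max_gpu_pct: 90.0,
        max_ram_pct: 80.0,
        max_net_mbps: 400.0,
    };
    let idle = ResourceSnapshot { cpu_pct: 30.0, gpu_pct: 10.0, ram_pct: 50.0, net_mbps: 20.0 };
    let busy = ResourceSnapshot { cpu_pct: 97.0, gpu_pct: 10.0, ram_pct: 50.0, net_mbps: 20.0 };
    println!("idle machine can launch: {}", can_launch_agent(idle, limits));
    println!("busy machine can launch: {}", can_launch_agent(busy, limits));
}
```

In a real system the snapshot would come from OS-level sampling rather than literals, but the gating decision itself stays this simple.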
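The cost-aware LLM router could look something like the sketch below: each candidate model advertises capabilities, a per-token cost, and a throughput figure, and the router picks the cheapest model that can handle the task, preferring higher throughput on ties. Everything here (`ModelProfile`, `Capability`, `route`, the example models and prices) is a hypothetical illustration under those assumptions, not the post's actual router.

```rust
// Hypothetical sketch of a cost-aware LLM router: route each workload to
// the cheapest capable model, breaking cost ties by tokens per second.

#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum Capability {
    Orchestration,
    Coding,
    Embedding,
}

#[derive(Debug, Clone)]
struct ModelProfile {
    name: &'static str,
    capabilities: Vec<Capability>,
    cost_per_mtok: f64,  // dollars per million tokens (illustrative)
    tokens_per_sec: f64, // observed throughput (illustrative)
}

/// Pick the cheapest model that supports the required capability;
/// on equal cost, prefer the faster model.
fn route<'a>(models: &'a [ModelProfile], need: Capability) -> Option<&'a ModelProfile> {
    models
        .iter()
        .filter(|m| m.capabilities.contains(&need))
        .min_by(|a, b| {
            a.cost_per_mtok
                .partial_cmp(&b.cost_per_mtok)
                .unwrap()
                .then(b.tokens_per_sec.partial_cmp(&a.tokens_per_sec).unwrap())
        })
}

fn main() {
    let models = vec![
        ModelProfile {
            name: "orchestrator-large",
            capabilities: vec![Capability::Orchestration, Capability::Coding],
            cost_per_mtok: 15.0,
            tokens_per_sec: 40.0,
        },
        ModelProfile {
            name: "coder-fast",
            capabilities: vec![Capability::Coding],
            cost_per_mtok: 0.5,
            tokens_per_sec: 80.0,
        },
        ModelProfile {
            name: "embedder-local",
            capabilities: vec![Capability::Embedding],
            cost_per_mtok: 0.0,
            tokens_per_sec: 500.0,
        },
    ];
    // A coding task goes to the cheap fast coder, not the expensive orchestrator.
    let pick = route(&models, Capability::Coding).unwrap();
    println!("coding task routed to: {}", pick.name);
}
```

A production router would also weigh context-window limits and current rate-limit headroom, but the core selection is just a filtered minimum over a cost function.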
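The draft-model pairing is in the spirit of speculative decoding: a cheap draft model proposes a run of tokens and the larger model only verifies them, accepting the longest agreeing prefix. The sketch below shows that acceptance loop in miniature; the "models" are stand-in token streams, and `speculative_step` is an illustrative name, not part of MTP LX.

```rust
// Rough sketch of the draft/verify idea behind speculative decoding.
// The target model is modeled as a function giving its preferred token
// at each position; real models and MTP LX internals are not shown.

/// Accept the longest prefix of the draft that the target agrees with;
/// on the first disagreement, take the target's token and stop
/// (the standard fallback step).
fn speculative_step(draft_tokens: &[u32], target_next: impl Fn(usize) -> u32) -> Vec<u32> {
    let mut accepted = Vec::new();
    for (i, &tok) in draft_tokens.iter().enumerate() {
        if target_next(i) == tok {
            accepted.push(tok); // draft and target agree: token is nearly free
        } else {
            accepted.push(target_next(i)); // disagree: keep target's token, stop
            return accepted;
        }
    }
    accepted
}

fn main() {
    // Target "model" agrees with the first three draft tokens, then diverges.
    let target = |i: usize| [10u32, 11, 12, 99, 100][i];
    let out = speculative_step(&[10, 11, 12, 13], target);
    println!("{:?}", out); // [10, 11, 12, 99]
}
```

The speedup comes from verifying a whole draft run in one pass of the large model instead of generating token by token, which is consistent with the 70 to 80 tokens per second figure for the local pair.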