u/willlamerton

▲ 45 r/nanocoder+2 crossposts

Nanocoder 1.26.0 is out - we added a lot 🔥

Hey everyone! Will here.

We've just released Nanocoder 1.26.0 and it's a big one - possibly our largest yet, with many awesome new features plus some large reworks under the hood to make it even stronger in key areas. It's also our most diverse release, with over 10 contributors coming together to make it possible. Having so many people joining the collective and building truly open AI is beyond amazing and I can't thank everyone enough! 🔥

Anyway, within Nanocoder, here is what we have added:

Nano mode is the big one for this release. If you have been running Nanocoder with a small open-weights model on modest hardware, you know the system prompt overhead can eat a meaningful chunk of your context window before the model says anything useful. Nano mode drops that overhead from roughly 500-700 tokens down to 150-250 tokens. It is a third profile in /tune, alongside the existing full and minimal profiles. It disables find_files, list_directory, and agent; cuts the section lengths down; and ships with a low-end hardware preset.
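To make the shape of this concrete: a profile is essentially just data the prompt assembler reads. A simplified sketch of what that could look like - the names and numbers here are illustrative, not Nanocoder's actual internals:

```typescript
// Illustrative sketch only, not Nanocoder's real source. A /tune profile
// boils down to a token budget plus a list of tools dropped from the schema.
interface PromptProfile {
  name: "full" | "minimal" | "nano";
  promptBudget: { min: number; max: number }; // rough system-prompt token range
  disabledTools: string[];                    // removed entirely in this profile
  maxSectionTokens: number;                   // cap per prompt section
}

const nano: PromptProfile = {
  name: "nano",
  promptBudget: { min: 150, max: 250 },
  disabledTools: ["find_files", "list_directory", "agent"],
  maxSectionTokens: 40, // hypothetical value for the trimmed sections
};
```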

Reasoning traces are new. Models that emit reasoning content (Codex GPT-5, DeepSeek-R1-style models, Anthropic extended thinking) now stream that content in real time as a collapsible Thought block above the response. It persists in history and appears in logs. Toggle it with Control+R. The Display Settings panel under /settings controls the default expansion state.
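Under the hood the idea is simple: the stream carries two kinds of deltas, and they get routed to different UI targets. A rough sketch - field names vary by provider, these are illustrative:

```typescript
// Illustrative sketch: reasoning deltas feed the collapsible Thought block,
// content deltas feed the reply. Real providers name these fields differently
// (reasoning_content, thinking, etc.).
type StreamDelta = { reasoning?: string; content?: string };

function routeDelta(
  delta: StreamDelta,
  ui: { appendThought(chunk: string): void; appendAnswer(chunk: string): void },
) {
  if (delta.reasoning) ui.appendThought(delta.reasoning);
  if (delta.content) ui.appendAnswer(delta.content);
}
```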

Non-interactive mode now has a --plain flag. This strips the Ink rendering layer entirely so output is clean for CI pipelines, scripts, and pipes. Exit codes are deterministic, stdin/stdout are handled properly, and there are no interactive prompts.
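A CI step in Node could look roughly like this - the prompt-over-stdin detail is an assumption on my part; plain output and a deterministic exit code are the documented behaviour:

```typescript
// Hedged sketch of driving --plain from a CI script. Exact invocation
// details (prompt via stdin) are assumptions.
import { spawnSync } from "node:child_process";

const result = spawnSync("nanocoder", ["--plain"], {
  input: "Summarise the changes in the last commit", // assumed stdin prompt
  encoding: "utf8",
});

console.log(result.stdout);       // clean text, no Ink UI
process.exit(result.status ?? 1); // propagate the exit code to the pipeline
```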

We also reworked the VS Code extension. The old "Ask Nanocoder" command is gone, replaced with a more natural context-on-focus flow. There is a /rename command for chat sessions, a defaultMode config option, custom system prompt support, per-model context window overrides, a disabledTools option, JSON tool fallback for open-weights models (Qwen, Kimi, GLM), <function=...> format support, and a new Display Settings panel. Plus 12+ new themes.
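On the <function=...> support: the rough idea is that when a model can't emit native tool calls, the call gets parsed out of plain text instead. A sketch of the parsing idea - the exact tag grammar here is a guess, not necessarily what Nanocoder accepts:

```typescript
// Guessy sketch of a text-format tool-call fallback. The real grammar
// may differ; this just shows the idea.
const FN_PATTERN = /<function=(\w+)>([\s\S]*?)<\/function>/;

function parseFallbackToolCall(text: string) {
  const match = FN_PATTERN.exec(text);
  if (!match) return null;
  try {
    return { name: match[1], args: JSON.parse(match[2]) };
  } catch {
    return null; // malformed arguments - treat as plain text
  }
}
```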

Full changelog on GitHub: https://github.com/Nano-Collective/nanocoder

---

Within the collective we're also gearing up for more growth. Building truly open AI, by the community and for the community, is the mission, and we're putting a lot of groundwork into growing an organisation for everyone that serves it.

We've recently finished our collective docs, which share a little more about the brand: https://docs.nanocollective.org/collective

If you want to get involved check out our GitHub:

https://github.com/Nano-Collective

And join our Discord:

https://discord.gg/ktPDV6rekE

u/willlamerton — 2 days ago
▲ 5 r/nanocoder+1 crossposts

ContentForest: Multi-agent Workflow To Generate Release Content

https://reddit.com/link/1t7bpux/video/svdabaj0pxzg1/player

TL;DR: Multi-agent pipelines need measuring sticks to be effective, not just a model and a prompt. ContentForest took time to build its measuring sticks (brand guidelines, tone-of-voice docs, an llms.txt used as a grounded source of truth), and that foundation is what makes the pipeline genuinely autonomous rather than just generative. We're now extracting the engine as a configurable npm package so any repo can plug in its own measuring sticks.

Multi-agent workflows get a lot of attention. Fewer people talk about what makes one actually work in practice rather than produce plausible-sounding output.

A bit of context first: we're the Nano Collective, a small group building open-source AI tooling for the community, not for profit. ContentForest is one of those tools, though it's internal at the moment. It sits next to Nanocoder, our general-purpose coding agent, as a specialised release-content workflow that runs on top of it.

The problem we were solving: every Nano Collective product had its own GitHub Action for release content. Each one ran Claude on a manual trigger, used a Claude Code Action to draft posts, and dropped the output into the repo for someone to copy out. It worked, but it was per-repo, manually fired, and the same prompt produced visibly different content from one run to the next. Voice drifted across products and across runs of the same product. We wanted one pipeline, one set of rules, one consistent voice across every release we ship.

ContentForest is what replaced that setup. Same intent (automated multi-channel release content), but rebuilt around explicit measuring sticks: Minimax as the LLM, Nanocoder as the execution harness, brand rules and length rules baked into config the agent reads at runtime. It still didn't stay consistent at first. The model would generate well in one run and miss the mark in the next, even with the same prompt.

The gap was the measuring sticks!

What "measuring sticks" actually means here

We had to write it all down before the pipeline could enforce it: brand voice, tone of voice, the specific terms to avoid, the channel length rules. Only once that was documented could ContentForest apply it reliably.

The brand guidelines define the voice as operational, understated, and honest, closer to engineering docs than marketing copy. They also list a small set of phrases that should never appear, regardless of how persuasive they sound in a first draft. That's not style preference; that's a content filter built from explicit documentation.

The llms.txt on our website acts as a persistent, markdown-shaped source of truth the model can reference. Brand voice, governance structure, project conventions, all in one file, versioned in the same repo as everything else. When the model needs to ground a claim about how the collective works, it has a canonical place to look rather than inventing from context.

Self-validation as a structural part of the run

The agent doesn't just generate and hand off. It runs programmatic checks (required links, string length per channel, word count bounds) before considering its work done. If a check fails, the Nanocoder harness retries the run with a fresh context. Retry budget is per-agent, not global: each stage has its own shot count.
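In code terms a check is just a predicate over a draft, and a run only counts as done when every predicate passes. A simplified sketch - the specific rules and limits here are illustrative, not our exact validators:

```typescript
// Illustrative sketch of the kind of programmatic checks described above.
interface Draft { channel: string; body: string }

const limits: Record<string, number> = { reddit: 10000, mastodon: 500 }; // hypothetical
const wordCount = (s: string) => s.trim().split(/\s+/).length;

type Check = (d: Draft) => string | null; // null = pass, string = failure reason

const checks: Check[] = [
  (d) => d.body.includes("https://github.com/Nano-Collective")
    ? null : "missing required repo link",
  (d) => d.body.length <= (limits[d.channel] ?? 5000)
    ? null : `over the ${d.channel} length limit`,
  (d) => wordCount(d.body) >= 50 ? null : "below minimum word count",
];

const failures = (d: Draft) =>
  checks.map((c) => c(d)).filter((r): r is string => r !== null);
```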

This is the part that makes the pipeline autonomous rather than just automated. The model evaluates whether its output meets the spec, not just whether it produced text. The measuring sticks are in the validation layer, not only in the prompt.

Two agents with clear boundaries

The earlier draft used four agents: personal-account variants per team member. The problems were immediate: context fragmentation, token waste, and voice drift across a single run. The simplification wasn't a concession. Two agents with their own retry budgets are easier to reason about than four with shared context and no isolation. Announcement-layer agent first, depth-layer agent second (produces 0–3 articles only when a feature has enough depth to justify it). Draft, validate, ship.
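The flow, sketched - runAgent stands in for the real Nanocoder harness call, and failures() is the validation step from the sketch above:

```typescript
// Sketch of the two-stage run with per-agent retry budgets. Draft and
// failures() come from the validation sketch above; runAgent is a stand-in.
declare function runAgent(
  name: "announcement" | "depth",
  opts: { freshContext: boolean },
): Promise<Draft[]>;

async function runStage(name: "announcement" | "depth", budget: number) {
  for (let attempt = 1; attempt <= budget; attempt++) {
    const drafts = await runAgent(name, { freshContext: true }); // no carry-over
    if (drafts.every((d) => failures(d).length === 0)) return drafts;
  }
  throw new Error(`${name} agent exhausted its retry budget`);
}

// Budgets are per stage, not shared globally.
const announcements = await runStage("announcement", 3);
const articles = await runStage("depth", 3); // may yield 0-3 articles
```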

The human gate

The AI generates the PR and the markdown files. A human reviews and merges. The PR review is the approval step, built into the existing workflow. This matters on Reddit where "AI spam" is a legitimate objection: the content is AI-generated, but a person signed off on it. The measuring sticks reduce noise; the human gate prevents the rest.

Making the engine portable

The thing that's specific to us is the content of the measuring sticks: our brand voice, our channels, our forbidden phrases. The engine that consumes those measuring sticks isn't specific to us at all.

So we're pulling the engine out into its own package: @nanocollective/contentforest-core. One config file (contentforest.config.json) points at your brand docs, your channel definitions, your validators. Drop it into any repo, run contentforest generate --product foo --version 0.1.0, and get a brand-consistent content pack as a PR. Bring your own coding-agent runtime: nanocoder by default, with adapters planned for claude-code, codex, etc.
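To give a feel for it, here's a hypothetical config written as a typed object; the field names are guesses at the shape, not the final schema:

```typescript
// Hypothetical shape of contentforest.config.json, expressed as a typed
// object for illustration. Field names are guesses, not the final schema.
interface ContentForestConfig {
  brandDocs: string[];                             // brand and tone-of-voice guidelines
  groundingDoc: string;                            // e.g. the llms.txt source of truth
  channels: Record<string, { maxLength: number }>;
  forbiddenPhrases: string[];                      // hard content filter
  runtime: "nanocoder" | "claude-code" | "codex";
}

const exampleConfig: ContentForestConfig = {
  brandDocs: ["docs/brand.md", "docs/tone-of-voice.md"],
  groundingDoc: "public/llms.txt",
  channels: { reddit: { maxLength: 10000 }, mastodon: { maxLength: 500 } },
  forbiddenPhrases: ["game-changer", "revolutionary"],
  runtime: "nanocoder",
};
```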

The split is deliberate: the engine ships brand-neutral and reads voice from config; what you see us publishing here is one specific deployment of that engine, with our config, our prompts, our viewer. If the argument in this post lands for you, the test is whether you can describe your own measuring sticks well enough that a config file can encode them. If you can, the pipeline does the rest.

Testing this live

We're running ContentForest on our own repos right now. The /releases folder in any of our repos shows the raw markdown output from the agents. You can see the measuring sticks in practice.

The Nano Collective builds open-source AI tooling not for profit, but for the community. If any of this resonates (the layered approach, the OSS angle, the engine-plus-config split), come find us at https://nanocollective.org.

u/willlamerton — 6 days ago

Hey everyone,

We’ve just shipped Nanocoder 1.25.0, a major release focused on making AI collaboration feel faster and more useful in real development workflows. One of the biggest additions is subagents: Nanocoder can now delegate complex work into isolated child conversations, making it much better at tackling larger tasks without losing the thread.

Highlights

Yolo mode
For the moments when confirmation prompts just slow things down, Yolo mode auto-accepts every tool without exception. Unlike auto-accept mode, this includes bash execution and other potentially destructive operations, so it's powerful but very much use-with-care. You can cycle through normal -> auto-accept -> yolo -> plan with Shift+Tab, and the status bar turns red when yolo is active.
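The difference between the modes is easiest to see as a policy function. Roughly - the tool names in the destructive set are illustrative:

```typescript
// Rough sketch of the approval policy. The DESTRUCTIVE set is illustrative,
// not Nanocoder's exact classification.
const DESTRUCTIVE = new Set(["execute_bash", "write_file", "delete_file"]);

function needsConfirmation(mode: string, tool: string): boolean {
  if (mode === "yolo") return false;                        // accepts everything
  if (mode === "auto-accept") return DESTRUCTIVE.has(tool); // still gates bash etc.
  return true;                                              // normal: confirm every tool
}
```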

Subagents + smarter orchestration
This is one of the coolest parts of the release. Nanocoder can now spawn isolated child conversations to handle specific work in parallel. We ship with two built-in agents - Explore for read-only codebase investigation and Reviewer for actionable code review - and each has its own tool set tailored to its job. Their progress renders live in-place as they work, and the system is flexible too: you can define your own custom subagents with markdown files + YAML frontmatter in .nanocoder/agents/ and manage everything through /agents. In practice, that means better delegation, cleaner context management, and a lot more room to grow this part of the ecosystem.
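For example, a custom agent file (say, .nanocoder/agents/security-review.md) might look something like this - the frontmatter field names are illustrative, so check the docs for the real schema:

```markdown
---
name: security-review
description: Read-only pass that flags security issues in changed files
tools: [read_file, search]
---

You are a security reviewer. Inspect the changed files for injection
risks, leaked secrets, and unsafe defaults, and report findings as a
prioritised list.
```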

Prompt, tuning, and plan mode improvements
We redesigned the system prompt into modular sections that are assembled dynamically based on mode. We also added /tune for per-session control, including full vs minimal tool profiles, forcing XML fallback by disabling native tools, and aggressive compact mode for smaller models. On top of that, plan mode now properly enforces read-only tools at the policy level, blocking mutation tools while keeping exploration available.
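The policy-level bit matters: in plan mode, mutation tools are rejected before they execute, rather than the prompt just asking the model nicely. A sketch of the idea, with illustrative names:

```typescript
// Sketch of policy-level enforcement for plan mode. Tool names illustrative.
const READ_ONLY_TOOLS = new Set(["read_file", "search", "list_directory"]);

function enforcePlanMode(mode: string, tool: string): void {
  if (mode === "plan" && !READ_ONLY_TOOLS.has(tool)) {
    // Blocked before execution - exploration stays available, mutation doesn't.
    throw new Error(`plan mode: '${tool}' is blocked by the read-only policy`);
  }
}
```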

Provider and config improvements
This release also brings provider and config updates, including support for ChatGPT Codex with OAuth device flow.

Under the hood
We fixed issues including alwaysAllow not being respected, a dim color accessibility issue, and a scheduler mode memory leak, and added debug logging to 15 previously silent error catches in git utilities.

Big thank you to everyone contributing
Every release is the result of work from the community, and we’re really grateful for everyone building, testing, reporting issues, sharing feedback, and helping shape the project. We’re now nearing 2K GitHub stars and 10,000 downloads/month on Nanocoder alone. Across all Nano Collective software, we’re getting close to 20,000 downloads per month.

Nanocoder 1.25.0 is available now: https://github.com/Nano-Collective/nanocoder

Happy coding! 🚀

u/willlamerton — 27 days ago

Hey everyone!

We just shipped Nanocoder 1.24.0 with some awesome, long-requested features.

The big thing we've finally rolled out is parallel tool execution: instead of waiting for the model to run tools one at a time, independent tool calls now execute simultaneously. For workflows involving multiple file reads, bash commands, or searches, this noticeably speeds things up.
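Conceptually it's the difference between awaiting calls in a loop and firing them all at once. Simplified - executeTool stands in for the real dispatcher:

```typescript
// Simplified sketch: independent tool calls run concurrently rather than
// one at a time. executeTool is a stand-in for the real dispatcher.
declare function executeTool(call: { name: string; args: unknown }): Promise<string>;

async function runIndependentCalls(calls: { name: string; args: unknown }[]) {
  // Before: for (const call of calls) results.push(await executeTool(call));
  // After: all independent calls start immediately and resolve together.
  return Promise.all(calls.map((call) => executeTool(call)));
}
```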

We also added some quality-of-life improvements:

  • The long awaited /resume command to restore previous chat sessions (they auto-save by project directory)
  • CLI flags for CI/CD scripts (--provider and --model skip the setup wizard)
  • NANOCODER_PROVIDERS env variable for containerized deployments
  • GitHub Copilot and MLX Server templates for broader provider support

On the technical side, we cleaned up config loading, simplified the tool parsing system, and fixed some annoying bugs around MCP configuration and provider timeouts.

We're also actively working on our own VS Code fork as well as an improved model framework. One of the big things we're adding is support for different sub-agents. These will allow you to configure smaller, local models for delegated tasks, saving context and making your work more private and provider-agnostic. This will hopefully land in the next update!

Last but not least, we've released our new documentation site. This has been long needed and took a big push from the core team to get out. Check it out here: https://docs.nanocollective.org/

Thanks as always for being part of the community. Nanocoder has been growing a lot this past week! We're stoked for what's next.

If you want to get involved, we're a community organization building AI tooling for everyone.

Discord: https://discord.gg/ktPDV6rekE

GitHub: https://github.com/Nano-Collective/nanocoder
