u/Bobby_Gray

Turned Claude's rough week into an excuse to build an OpenCode-compatible version of my D&D skill

Claude has had a rough week. Between the outage and the usage limit threads, I figured it was actually good timing to do something I had been meaning to try anyway: take the D&D skill I built a few weeks ago and see if I could migrate it to run on OpenCode with free or local models. If Claude is your DM and Claude goes down mid-session, that is a problem worth solving.

The short version: it works, and it was easier to set up than I expected.

What I built

open-tabletop-gm is a fork of the original claude-dnd-skill, rebuilt to run on any LLM through OpenCode. OpenCode supports Anthropic, OpenAI, Google, Ollama, LM Studio, and any OpenAI-compatible endpoint, so you can point it at whatever is available. Free tier models, local models, a different provider entirely.
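For a local setup, that provider swap is just a config entry. The sketch below shows roughly what pointing OpenCode at an LM Studio server looks like; the exact schema is from my recollection of OpenCode's OpenAI-compatible provider config, so treat the field names as an assumption and check the current OpenCode docs:

```json
{
  "provider": {
    "lmstudio": {
      "npm": "@ai-sdk/openai-compatible",
      "options": { "baseURL": "http://localhost:1234/v1" },
      "models": { "qwen3-32b": {} }
    }
  }
}
```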

The Claude-specific parts (model routing between Haiku/Sonnet/Opus, the ~/.claude/ path structure, autorun) have been replaced with portable equivalents. The campaign files, display companion, and Python toolchain are all identical.

While I was at it, I also pulled D&D 5e out of the core and turned it into a system module. The GM core (pacing, NPC craft, improvisation, consequences) lives in one file and knows nothing about any specific game. D&D 5e lives in a separate systems/dnd5e/ folder. If you want to run Vampire: The Masquerade, Cyberpunk RED, Pathfinder, or any other TTRPG, you write a system.md describing your game's dice resolution, stats, health model, and conditions - and the same GM core runs it. There is a porting guide covering what transfers directly from the D&D implementation vs what needs configuring per game. D&D 5e is the reference implementation and ships fully built out. Everything else is a system.md away.
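To give a feel for what a system module is, here is an illustrative sketch of what a `system.md` for another game might contain. The headings and wording here are my guess at the shape, not the actual porting guide's template:

```markdown
# System: Vampire: The Masquerade (illustrative sketch)

## Dice resolution
Roll a pool of d10s equal to Attribute + Skill; each die showing 6+ is a success.

## Stats
Attributes (Strength, Dexterity, ...) and Skills (Brawl, Stealth, ...), rated 1-5.

## Health model
Health levels from Bruised down to Incapacitated; track superficial vs aggravated damage.

## Conditions
Hunger, Frenzy, and similar states, with when they trigger and how they clear.
```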

Why smaller/free models hold up better than you might expect

The Python toolchain carries a lot of the weight that would otherwise fall on the model:

  • Dice rolls, HP math, damage tracking: Python
  • Initiative and turn order: Python, tracked in a live sidebar
  • Timed effects and conditions: Python, file-persisted
  • SRD data lookup (spells, monsters, items): local JSON
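A minimal sketch of that division of labor in Python. All function and field names here are hypothetical, not the repo's actual toolchain, but the principle is the same: the script rolls and does the arithmetic, and the model only narrates the result.

```python
import random
import re

def roll(expr: str) -> int:
    """Evaluate a dice expression like '2d6+3' deterministically in code,
    so the model never has to do (or hallucinate) dice math."""
    m = re.fullmatch(r"(\d+)d(\d+)([+-]\d+)?", expr.replace(" ", ""))
    if not m:
        raise ValueError(f"bad dice expression: {expr}")
    count, sides = int(m.group(1)), int(m.group(2))
    modifier = int(m.group(3) or 0)
    return sum(random.randint(1, sides) for _ in range(count)) + modifier

def apply_damage(state: dict, target: str, amount: int) -> dict:
    """HP bookkeeping lives in Python; the model just reads the result."""
    hp = state["combatants"][target]["hp"]
    state["combatants"][target]["hp"] = max(0, hp - amount)
    return state

state = {"combatants": {"goblin": {"hp": 7, "initiative": 12}}}
dmg = roll("1d8+2")
apply_damage(state, "goblin", dmg)
print(state["combatants"]["goblin"]["hp"])  # 0-4 here, depending on the roll
```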

The model's job is narration and judgment. It reads the campaign state from plain Markdown files and narrates from there. It does not do arithmetic and does not need to hold mechanical state in memory. That separation is what makes free and smaller models viable: the parts that tend to break on constrained models have been moved out of the model entirely.
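The "state lives in plain files" idea can be sketched like this. The file layout and field format below are invented for illustration (the real campaign files differ), but they show why the scheme is model-agnostic: Python writes the file, and any LLM can read it back as ordinary text.

```python
from pathlib import Path

def load_conditions(path: Path) -> dict[str, list[str]]:
    """Parse a simple '- name: condition, condition' Markdown list.
    Format invented for illustration."""
    conditions = {}
    for line in path.read_text().splitlines():
        line = line.strip()
        if line.startswith("- ") and ":" in line:
            name, rest = line[2:].split(":", 1)
            conditions[name.strip()] = [c.strip() for c in rest.split(",") if c.strip()]
    return conditions

# Round-trip: the toolchain persists state to disk between sessions.
p = Path("conditions.md")
p.write_text("# Active conditions\n- Thorin: poisoned, prone\n- Mira: blessed\n")
print(load_conditions(p))  # {'Thorin': ['poisoned', 'prone'], 'Mira': ['blessed']}
```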

First test: MiniMax M2.5 via OpenCode

This run used the original claude-dnd-skill files. Setup was surprisingly frictionless -- OpenCode picked up the skill file without extra configuration. The model produced creative NPC responses and correctly read deceptive intent in a player message. More than I expected from a first pass on a free tier model.

Current testing: Qwen3-32B via LM Studio

Working well on the portable version so far. Script calls reliable, narration solid, campaign state persisting correctly across sessions. Testing is being pushed down toward Qwen3-14B to find the practical floor. Results going into the LLM guide as they come in.

What stays the same

Everything you already know from the original skill: persistent campaigns, the cinematic display companion you can Chromecast to a TV, character sheets, the DM philosophy, NPC memory, all of it. The system module architecture now lets you run any TTRPG, not just D&D 5e, by writing a system.md for your game. But if you are running D&D the experience is the same.

Claude is still the better DM

To be clear: this is not a "switch away from Claude" post. Claude Code with claude-dnd-skill is still the better experience. Better narration, model routing, deeper integration. If Claude is up and you have quota, use that.

But having a version that works when Claude is down is genuinely useful. And honestly, testing it has been a good reminder of how much the Python toolchain is doing independent of any specific model.

Links
