I keep hitting the same wall with Claude Code and Codex: they’re great at reasoning, but every session starts from whatever context I manually feed them.
If I spent three hours yesterday mapping out architecture decisions, today I’m explaining them all over again.
So I built a small open-source tool called llm-wiki-compiler that acts like a knowledge compiler for your agent workflows:
- Ingest docs, URLs, and project notes
- The LLM compiles them into an interlinked markdown wiki with [[wikilinks]]
- Your agent reads it because it’s just markdown on disk
- Query outputs can be saved back in, so the knowledge base compounds over time
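To make the compile step concrete, here is a minimal sketch of the output shape. This is not llm-wiki-compiler’s actual implementation: where the real tool calls an LLM to decide structure and links, this stand-in just wraps mentions of other page titles in `[[wikilinks]]` and writes plain `.md` files to disk. All names (`compile_wiki`, the example pages) are hypothetical.

```python
import re
from pathlib import Path


def compile_wiki(notes: dict[str, str], out_dir: Path) -> None:
    """Turn raw notes into an interlinked markdown wiki on disk.

    Deterministic stand-in for the LLM compile step: any mention of
    another page's title becomes a [[wikilink]] to that page.
    """
    out_dir.mkdir(parents=True, exist_ok=True)
    # Link longer titles first so one title nested in another isn't split.
    titles = sorted(notes, key=len, reverse=True)
    for title, body in notes.items():
        for other in titles:
            if other != title:
                body = re.sub(rf"\b{re.escape(other)}\b", f"[[{other}]]", body)
        # One markdown file per page -- readable by any agent or by Obsidian.
        (out_dir / f"{title}.md").write_text(f"# {title}\n\n{body}\n")


notes = {
    "Auth Service": "Issues tokens consumed by the API Gateway.",
    "API Gateway": "Routes requests; validates tokens from the Auth Service.",
}
compile_wiki(notes, Path("wiki"))
print((Path("wiki") / "Auth Service.md").read_text())
```

The point of the exercise: because the artifact is just files on disk, the agent needs no special client, and saving a query’s output back in is an ordinary file write.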
It’s not a chat wrapper or a vector store. It’s a persistent artifact: plain markdown, Obsidian-compatible, fully inspectable, no opaque database lock-in.
This feels like the missing layer between stateless coding agents and the long-running project memory we actually need.
Curious if other agent builders are solving this with local knowledge bases too.