
Open-source memory system for long-term collaboration with AI — episodic memory + world model, multi-user, git-tracked
I do independent research (AI/ML) and work on long-running software projects with Claude Code, some spanning many months. Working effectively with AI over weeks, months, or even years requires detailed memory: what was done, what was tried, what worked and what didn't, why certain decisions were made, how things work in the project, and what the current state is. The existing Claude Code memory system is not designed for this.
So I built **ai-collab-memory** — a structured methodology that gives the AI persistent episodic memory and a world model, all in plain text files tracked in git.
I'm looking for developers, researchers, or anyone working on long-running projects with AI to test it and share their feedback.
**What it does:**
- **Episodic memory** — an append-only history of what was done, decided, and learned. Nothing gets pruned — you can always trace back to the reasoning behind past decisions.
- **World model** — the AI's current understanding of your project: context, preferences, domain knowledge, procedures, current state. Maintained and updated as things change.
- **In-context awareness** — compact indexes are always loaded in the AI's context window, so the AI *knows what it knows* without having to search. It can make connections to prior work without you asking.
- **Multi-user** — every note includes user attribution. Commit the memory files to a shared repo and the whole team benefits. New members get up to speed through the AI's accumulated knowledge.
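To make the file-based approach concrete, here is a sketch of what an append-only episodic note with user attribution *could* look like. The path, field names, and layout below are purely illustrative assumptions, not the system's actual format — see the repo's README for the real structure:

```markdown
<!-- memory/episodic/2025-06-12.md — hypothetical path and schema -->
## 2025-06-12 — switch cache layer to SQLite
- user: alice
- decision: replace the JSON-file cache with SQLite
- tried: JSON with file locking — failed under concurrent writers
- reasoning: we need atomic multi-process writes; SQLite provides them
- outcome: adopted; follow-up: migrate existing cache files
```

Because entries like this are plain text in git, they diff cleanly, merge across team members, and remain greppable by both humans and the AI.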
**How to install:**
Ask Claude Code:
> "Install the long-term collaboration memory system by cloning https://github.com/visionscaper/ai-collab-memory to a temporary location and following the instructions in it."
Installation takes about 5 minutes and requires a single confirmation. The system activates on the next session. I highly recommend reading the README, especially "Working with the Memory System" and "How It Works".
**Some practical benefits I've experienced:**
- Working with the AI over months on the same project — it knows the history, the constraints, the decisions and their reasoning.
- The AI's responses are grounded in accumulated project context, not just what's in the current session.
- In a team setting, the AI has an overview of what everyone has done. All history is user-attributed.
This still needs further validation, but because the AI already carries the accumulated context, fewer tokens should be spent on re-analysing code bases and data.
The system is actively being developed and tested. Feedback and experience reports are very welcome — file issues at the GitHub repo or comment here.