I got tired of fixing the same mistakes every new Claude Code session. You correct the agent, it learns for that session, then forgets everything the next day.
So I built Cortex: a CLI that sits alongside your coding sessions and keeps a personal library of constraints. Every time you correct an AI agent, it distills that correction into a rule. Next session, the most relevant rules get injected into a CORTEX.md file that your agent reads before responding.
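To make the distill-and-inject step concrete, here's a minimal sketch of what it could look like. The `Rule` fields and the `build_cortex_md` function are my illustrative guesses for this post, not Cortex's actual API:

```python
from typing import NamedTuple

class Rule(NamedTuple):
    text: str      # the distilled correction, e.g. "never use bare except"
    trigger: str   # substring that makes the rule relevant to the task
    weight: float  # relevance score used to rank rules at injection time

def build_cortex_md(rules, task_context, top_k=5):
    """Render the top-k relevant rules as the CORTEX.md the agent reads."""
    relevant = sorted(
        (r for r in rules if r.trigger in task_context),
        key=lambda r: r.weight,
        reverse=True,
    )
    lines = ["# CORTEX.md", ""] + [f"- {r.text}" for r in relevant[:top_k]]
    return "\n".join(lines)
```

The real retrieval is fancier (Rust-side trigger matching), but the shape of the loop is the same: filter by relevance, rank, write a file the agent reads first.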
It also mines your git history on first run: if you've been reverting and fixing the same bad patterns for months, it picks those up too.
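The mining pass can be surprisingly simple. Here's a sketch of one plausible heuristic, scanning commit subjects for revert/fix language; this is my guess at the approach, not the repo's actual implementation:

```python
import subprocess

FIX_KEYWORDS = ("revert", "fix", "undo", "workaround")

def looks_like_correction(subject: str) -> bool:
    """Heuristic: does this commit subject record a fixed mistake?"""
    return any(k in subject.lower() for k in FIX_KEYWORDS)

def mine_git_history(repo: str = ".", limit: int = 500) -> list[str]:
    """Return recent commit subjects that look like corrections."""
    log = subprocess.run(
        ["git", "-C", repo, "log", f"-{limit}", "--pretty=%s"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [s for s in log.splitlines() if looks_like_correction(s)]
```

Those flagged commits then feed the same distillation step as live corrections.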
You can use it with any model. I've tested it with Claude (via Anthropic), Llama (via NVIDIA NIM), and local models through Ollama.
Tech stack:
- Python CLI (Click)
- Rust for the fast AST trigger matching
- Pydantic for constraint schema validation
- Works with any OpenAI-compatible API
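For context on the Pydantic piece, a constraint schema might look something like this sketch; the field names and bounds are assumptions for illustration, not Cortex's actual model:

```python
from pydantic import BaseModel, Field

class Constraint(BaseModel):
    rule: str = Field(min_length=1)        # the distilled instruction
    triggers: list[str] = []               # patterns that activate the rule
    source: str = "session"                # e.g. "session" or "git-history"
    weight: float = Field(default=1.0, ge=0.0, le=1.0)  # retrieval ranking
```

Validating every constraint at the boundary means a malformed distillation from the model gets rejected instead of silently polluting the library.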
It's early but solid: the core loop (start → observe → stop → distill → retrieve) works end to end.
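If you're curious how that loop maps onto a Click CLI, here's a hypothetical sketch; the subcommand names mirror the loop stages but are my stubs, not the shipped interface:

```python
import click

@click.group()
def cli():
    """Cortex: a personal constraint library for coding-agent sessions."""

@cli.command()
def start():
    """Begin observing a coding session (stub)."""
    click.echo("observing session")

@cli.command()
def distill():
    """Turn recorded corrections into constraints (stub)."""
    click.echo("distilling corrections into rules")

@cli.command()
def retrieve():
    """Write the most relevant rules into CORTEX.md (stub)."""
    click.echo("wrote CORTEX.md")
```

Each stage being its own subcommand keeps the loop scriptable around whatever agent you're running.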
Still a lot to build: cross-repo constraint sharing, an incident feed (so production alerts can generate constraints), impact scoring, and a visual dashboard. I'm open-sourcing it now because I'd love contributors to help with those, and with anything else you find.
GitHub: https://github.com/Ajodo-Godson/cortex
Happy to answer questions or hear what you think!