The Problem with "AI Memory" Today
Local AI agents forget context between sessions. RAG forces them to query a "library" by hand. Mnemostroma changes that: a background memory layer that auto-captures, ranks, and injects relevant context. The agent focuses on the task, not on "what to remember."
Automatic Capture: Zero Overhead
Key: the agent never writes memory itself. The Observer (background layer) intercepts via proxy (sketched after this list):
- 0.1 ms filter: drops noise.
- Auto-classify: decision / fact / deadline.
- 20 ms search: top-3 relevant memories + exact anchors (URLs, dates).
0 output tokens spent on summaries: full decoupling.
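A minimal sketch of that pipeline, assuming an SQLite FTS5 store; `Observer`, `capture`, `relevant`, and the keyword heuristics are hypothetical stand-ins, not Mnemostroma's actual API:

```python
import re
import sqlite3

# Cheap filler patterns; the real filter is presumably richer.
NOISE_RE = re.compile(r"^(ok|okay|thanks|sure|\s*)$", re.IGNORECASE)

def is_noise(text: str) -> bool:
    # The ~0.1 ms gate: reject filler before any heavier work runs.
    return bool(NOISE_RE.match(text.strip()))

def classify(text: str) -> str:
    # Keyword heuristic standing in for the real decision/fact/deadline classifier.
    lowered = text.lower()
    if any(w in lowered for w in ("decided", "we will", "let's use")):
        return "decision"
    if any(w in lowered for w in ("deadline", "due", "by friday")):
        return "deadline"
    return "fact"

class Observer:
    """Sits on the proxy; the agent never calls capture() itself."""

    def __init__(self, path: str = "memory.db") -> None:
        self.db = sqlite3.connect(path)  # requires an SQLite build with FTS5
        self.db.execute(
            "CREATE VIRTUAL TABLE IF NOT EXISTS mem USING fts5(kind, body)"
        )

    def capture(self, text: str) -> None:
        # Intercepted traffic: filter, classify, store. Costs zero agent tokens.
        if is_noise(text):
            return
        self.db.execute("INSERT INTO mem VALUES (?, ?)", (classify(text), text))
        self.db.commit()

    def relevant(self, query: str, k: int = 3) -> list[str]:
        # The ~20 ms search: top-k matches get injected into the next prompt.
        terms = re.findall(r"\w+", query.lower())
        if not terms:
            return []
        rows = self.db.execute(
            "SELECT body FROM mem WHERE mem MATCH ? ORDER BY rank LIMIT ?",
            (" OR ".join(terms), k),
        )
        return [r[0] for r in rows]
```

The proxy would call `capture()` on every intercepted message and splice `relevant()` results into the next prompt; the agent itself touches neither.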
Continuity: Experience Builds Up
- Intuition signals: patterns mature as sessions accumulate (5 → 100+), suggesting "do this / avoid that" (first sketch after this list).
- Dreamer: resolves conflicts during idle time.
- Temporal: infers "when it happened" without timestamps (second sketch below).
Critical items (principles, deadlines) never dissolve.
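A minimal sketch of the maturation rule, reading "5 → 100+" as session-count thresholds; `Intuition`, `MATURE_AT`, and `dissolve` are hypothetical names, not the project's schema:

```python
from dataclasses import dataclass
from typing import Optional

MATURE_AT = 5      # a pattern seen in 5+ sessions starts to signal
STRONG_AT = 100    # at 100+ sessions it reads as settled experience

@dataclass
class Intuition:
    pattern: str             # e.g. "parallel test runs flake"
    advice: str              # the "do this / avoid that" suggestion
    sessions_seen: int = 0
    pinned: bool = False     # principles, deadlines: never dissolved

    def signal(self) -> Optional[str]:
        # Young patterns stay silent; mature ones surface as suggestions.
        if self.sessions_seen >= STRONG_AT:
            return f"[strong] {self.advice}"
        if self.sessions_seen >= MATURE_AT:
            return f"[tentative] {self.advice}"
        return None

def dissolve(memories: list[Intuition]) -> list[Intuition]:
    # Pinned or mature items always survive; immature ones may fade.
    return [m for m in memories if m.pinned or m.sessions_seen >= MATURE_AT]
```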
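And one way a temporal layer could date memories without timestamps, purely as an illustration: infer age from session ordering, assuming a rough sessions-per-day rate (both the function and the rate are invented here):

```python
def infer_when(item_session: int, current_session: int,
               sessions_per_day: float = 3.0) -> str:
    # Session distance stands in for wall-clock time; no timestamps stored.
    days = (current_session - item_session) / sessions_per_day
    if days < 1:
        return "earlier today"
    if days < 7:
        return f"about {round(days)} day(s) ago"
    return f"about {round(days / 7)} week(s) ago"
```

E.g. `infer_when(item_session=480, current_session=485)` returns "about 2 day(s) ago" at three sessions per day.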
v1.11 Stats
- 485 sessions, 4.3 MB SQLite.
- 471 MB RAM baseline.
- Logs: 79% RAM-stabilization events, 10% intuition signals.
Offline, local only.
Next: how the Dissolver fits thousands of sessions into 600 MB without losing key experience. Questions?