
Last Tuesday, Anthropic released Dreams for Claude Managed Agents. It's a memory cleanup pipeline: feed it a memory store and up to 100 session transcripts, get back a new store with duplicates merged and stale entries replaced. The same week, while building ClawVault (a life-admin agent for busy parents), we shipped a 12-skill self-improvement wire.
We were solving the capture half of the problem: every UI edit a parent makes writes a learning to a per-owner journal. Anthropic was solving the consolidation half: clean up the journal once it gets messy. When I read their docs, three things they got right showed me what we'd missed.
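For concreteness, here's a minimal sketch of what that capture half looks like on our side. The paths and field names (including `Status: active`) are illustrative, not our exact schema; the point is that a UI edit turns into one appended entry in a per-owner markdown journal.

```python
# Hypothetical sketch of the capture half: one UI edit becomes one appended
# entry in a per-owner journal file. Paths and field names are illustrative.
from datetime import datetime, timezone
from pathlib import Path

def append_learning(owner_id: str, skill: str, text: str) -> None:
    journal = Path(f".learnings/{owner_id}/{skill}.md")
    journal.parent.mkdir(parents=True, exist_ok=True)
    entry = (
        f"\n## {datetime.now(timezone.utc).isoformat()}\n"
        f"Status: active\n"
        f"{text.strip()}\n"
    )
    # Append-only: earlier entries are never rewritten here.
    with journal.open("a", encoding="utf-8") as f:
        f.write(entry)
```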
Memory versions. Every mutation in a Managed Agents memory store creates an immutable version with 30-day retention. There's a redact endpoint for compliance. There's optimistic concurrency via SHA-256 preconditions. We have none of that. Our .learnings/ files are bare markdown in a per-owner GCS volume. If a learning leaks PII, we can edit the file, but the previous version is gone. If two writes race during a task, one wins silently. We need versioning and we don't have it.
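The silent-race part, at least, is fixable with what GCS already gives us. Anthropic's mechanism is SHA-256 preconditions; the closest analogue on a GCS volume is object generation preconditions. A sketch of a check-before-write, with bucket and path names made up for illustration:

```python
# Sketch: lose a write race loudly instead of silently, using GCS object
# generations as the precondition (our analogue to SHA-256 preconditions).
# Bucket and object names are illustrative.
from google.api_core.exceptions import PreconditionFailed
from google.cloud import storage

def append_with_precondition(bucket_name: str, path: str, entry: str) -> bool:
    blob = storage.Client().bucket(bucket_name).blob(path)
    blob.reload()                      # fetch current generation and metadata
    current = blob.download_as_text()
    try:
        # Fails with 412 if another writer updated the object in between.
        blob.upload_from_string(current + entry,
                                if_generation_match=blob.generation)
        return True
    except PreconditionFailed:
        return False                   # caller retries instead of losing data
```

This doesn't give us version history or redaction, but it turns "one write wins silently" into a retry.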
The 100KB per-memory cap. Anthropic's docs say to structure memory as many small focused files, not a few large ones. We don't enforce a cap. Our health-companion.md could grow to 50MB if someone hammered the wire. The cap isn't arbitrary: forcing small files keeps consolidation tractable and audits visible.
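Enforcing our own cap is a few lines. A sketch, with the limit borrowed from Anthropic's per-memory number:

```python
# Sketch of a journal size cap checked before every append. 100 * 1024
# mirrors Anthropic's per-memory limit; how the caller splits an over-full
# topic into a new, more focused file is up to the skill.
from pathlib import Path

MAX_BYTES = 100 * 1024

def guarded_append(journal: Path, entry: str) -> None:
    size = journal.stat().st_size if journal.exists() else 0
    if size + len(entry.encode("utf-8")) > MAX_BYTES:
        # Refuse instead of growing: the cap is what keeps files small.
        raise ValueError(f"{journal} would exceed {MAX_BYTES} bytes")
    with journal.open("a", encoding="utf-8") as f:
        f.write(entry)
```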
Read-only vs read-write access modes. Memory stores attach to sessions with an explicit access mode. The docs warn about prompt injection writing to memory: a successful injection in one session corrupts every session that reads that store afterward. Our agent has full read-write on every .learnings/ file. We've been lucky. We need to think harder about read-only mounts for shared reference material versus read-write for active learning.
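Even without Anthropic's primitives, the split is easy to express in our own skill config. A sketch with hypothetical names, where shared reference material defaults to read-only and only the active journal is writable:

```python
# Sketch of per-mount access modes for our own skill config. Names and
# paths are hypothetical; the point is that a prompt injection can only
# write to the mounts explicitly marked read_write.
from dataclasses import dataclass
from typing import Literal

@dataclass(frozen=True)
class MemoryMount:
    path: str
    mode: Literal["read_only", "read_write"]

MOUNTS = [
    MemoryMount(".learnings/shared/reference.md", "read_only"),
    MemoryMount(".learnings/{owner}/health-companion.md", "read_write"),
]

def writable(path: str) -> bool:
    return any(m.mode == "read_write" and m.path == path for m in MOUNTS)
```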
The thing I keep coming back to: Anthropic and I converged on the same hard rule from opposite directions. Their Dreams output is a new memory store, never modifying the input. Our self-improvement skill is append-only with Status: superseded for stale entries. Both of us locked in input-immutable journals before we'd seen each other's work. The pattern is universal. When a model curates a journal, the journal has to stay auditable.
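Concretely, superseding an entry in our scheme touches only its Status line; the entry body is never edited or removed. A sketch, assuming the entry layout from the capture sketch above (heading line followed immediately by a Status line):

```python
# Sketch of marking a stale entry superseded without deleting it. Assumes
# the illustrative entry format above: a "## <timestamp>" heading followed
# by a "Status:" line.
from pathlib import Path

def mark_superseded(journal: Path, entry_ts: str) -> None:
    lines = journal.read_text(encoding="utf-8").splitlines(keepends=True)
    for i, line in enumerate(lines):
        if line.startswith(f"## {entry_ts}") and i + 1 < len(lines):
            # Flip only the Status line; the entry body stays auditable.
            lines[i + 1] = "Status: superseded\n"
            break
    journal.write_text("".join(lines), encoding="utf-8")
```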
We can't use Dreams directly. Our architecture rule blocks direct Anthropic API calls. Even if it didn't, we don't have Anthropic-side memory_store_id or session_id primitives to pass. So the integration plan is a re-implementation inspired by their design: a sibling skill, dream-consolidator, running on a Cloud Run cron, reading our existing .learnings/ files through the OpenClaw gateway we already use, and writing a new consolidated file alongside the raw entries. About 4-5 hours of work.
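The core loop is small. In this sketch the gateway client and its `consolidate()` call are placeholders for whatever the OpenClaw gateway actually exposes; the invariant that matters is the last two lines, which mirror Dreams: output goes to a new file, and the raw journals are never rewritten.

```python
# Sketch of the dream-consolidator loop. gateway.consolidate() is a
# placeholder, not a real OpenClaw API; file naming is illustrative.
from datetime import datetime, timezone
from pathlib import Path

def dream(owner_dir: Path, gateway) -> Path:
    raw = sorted(p for p in owner_dir.glob("*.md")
                 if not p.name.startswith("consolidated-"))
    corpus = "\n\n".join(p.read_text(encoding="utf-8") for p in raw)
    summary = gateway.consolidate(corpus)   # merge duplicates, flag stale entries
    out = owner_dir / f"consolidated-{datetime.now(timezone.utc).date()}.md"
    out.write_text(summary, encoding="utf-8")  # new file alongside the raw journals
    return out                                 # inputs untouched
```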
But not yet. We just shipped the wire that creates the journals. Until I have a week of production traffic to confirm the redact discipline holds, dreaming is premature. Consolidating a journal full of PII leaks would amplify the leak. The wire goes first. The cleanup pass comes later.
If you're shipping agent products, the lesson from both architectures is the same. You need a capture pipe and a consolidation pass. You need version history. You need access modes. Don't skip the boring infrastructure.