CFS-R: Conditional Field Reconstruction

I evaluated CFS-R on LoCoMo (1,982 questions, same setup as the CFS evaluation), holding cosine and BM25 fixed and varying only the third leg.
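
The fusion itself is plain reciprocal-rank fusion. A minimal sketch of what rrf(...) denotes below, assuming each leg returns a ranked list of doc ids; the function signature and the k=60 constant are my assumptions, not code from the gist:

from collections import defaultdict

def rrf(*rankings, k=60):
    """Reciprocal-rank fusion: each leg votes 1/(k + rank) for every doc
    it ranks, and docs are re-sorted by the summed votes. k=60 is the
    usual default from the RRF literature (my assumption for this eval)."""
    scores = defaultdict(float)
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# e.g. fused = rrf(cosine_ids, bm25_ids, third_leg_ids)[:10]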

baseline cosine top-10:           NDCG@10 0.5123, Recall@10 0.6924
rrf(cos, BM25):                   NDCG@10 0.5196, Recall@10 0.6989
rrf(cos, BM25, MMR tuned):        NDCG@10 0.5330, Recall@10 0.7228
rrf(cos, BM25, CFS-long):         NDCG@10 0.5362, Recall@10 0.7295
rrf(cos, BM25, CFS-R top50 w3):   NDCG@10 0.5447, Recall@10 0.7303

Against tuned MMR: +1.17 pp NDCG@10 (95% CI [+0.66, +1.69], p < 0.001). Against CFS-long: +0.85 pp NDCG@10 (95% CI [+0.33, +1.35], p = 0.0006). Against baseline cosine: +3.24 pp NDCG@10, +3.79 pp Recall@10.
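
If you want to reproduce the significance test, a paired bootstrap over per-question deltas gives you this kind of CI. A minimal sketch of that standard recipe, not the exact code from the gist:

import numpy as np

def paired_bootstrap(ndcg_a, ndcg_b, n_boot=10_000, seed=0):
    """95% CI and one-sided p-value for the mean per-question delta
    between two systems, via paired bootstrap resampling."""
    rng = np.random.default_rng(seed)
    deltas = np.asarray(ndcg_a) - np.asarray(ndcg_b)  # one entry per question
    n = len(deltas)
    means = np.array([deltas[rng.integers(0, n, n)].mean()
                      for _ in range(n_boot)])
    lo, hi = np.percentile(means, [2.5, 97.5])
    p = (means <= 0.0).mean()   # fraction of resamples where A fails to beat B
    return deltas.mean(), lo, hi, p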

The sweep wasn’t fragile: the top configurations clustered tightly between 0.5441 and 0.5447 NDCG@10, which suggests the operator sits on a stable plateau rather than depending on a single magic hyperparameter.

The category breakdown is where the conceptual difference shows up:

                single-hop   multi-hop    temporal    open-dom adversarial
tuned MMR           0.3479      0.6377      0.2938      0.6144      0.4705
CFS-long            0.3615      0.6376      0.2959      0.6157      0.4734
CFS-R top50 w3      0.3646      0.6344      0.2948      0.6209      0.5018

The adversarial line is the result that matters: +3.13 pp over tuned MMR, +2.84 pp over CFS-long. If the adversarial problem were only pairwise diversity, MMR should be very hard to beat, but it isn’t. That supports the main claim: long-memory retrieval is not just about avoiding similar chunks; it is about reconstructing the evidence behind the query. Temporal is no longer a glaring weakness either: CFS-long still leads slightly, but CFS-R has closed the gap while keeping the adversarial gains.
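
I won’t paste the operator here (the gist below has the real implementation), but “reconstructing the evidence behind the query” can be pictured as greedy residual matching: after each pick, subtract the direction that pick already explains from the query vector, so the next pick has to cover a different part of it. A hypothetical illustration of that reading, not the actual CFS-R code:

import numpy as np

def greedy_residual_pick(query_vec, memory_vecs, k=10):
    """Pick memories that jointly cover the query: after each pick,
    project out the direction it explains, so the next pick must match
    what is still unexplained. Matching-pursuit style; an illustration
    of the reconstruction idea, not the CFS-R operator itself."""
    mems = memory_vecs / np.linalg.norm(memory_vecs, axis=1, keepdims=True)
    residual = query_vec / np.linalg.norm(query_vec)
    picked = []
    for _ in range(k):
        scores = mems @ residual
        scores[picked] = -np.inf                  # never re-pick
        best = int(np.argmax(scores))
        picked.append(best)
        v = mems[best]
        residual = residual - (residual @ v) * v  # remove the covered part
        norm = np.linalg.norm(residual)
        if norm < 1e-6:                           # query fully reconstructed
            break
        residual = residual / norm
    return picked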

https://gist.github.com/M-Garcia22/542a9a38d93aae1b5cf21fc604253718

medium.com
u/mauro8342 — 1 day ago

I created OpenMind, an AI platform that leads in long-term memory

I built OpenMind because I think AI companion memory is ready for its next step.

The early versions of memory in this space mattered. Facts, notes, summaries, pinned details, vector search: all of that helped companions feel less like they reset every conversation.

But I don’t think a relationship is built from isolated facts alone.

“User likes coffee” is useful.

But real memory is more than that. It remembers why something mattered, who was involved, what changed, what kept coming up, and what emotional weight was attached to it.

That is the part I became obsessed with.

So OpenMind’s memory system is built around the shape of a moment, not just the nearest matching fact.

One piece of that is CFS, or Conditional Field Subtraction. CFS helps with redundancy. If the system already found a strong memory, it lowers the pull of nearby paraphrases so the AI does not waste the whole context window repeating the same thing five ways.

Another piece is CFS-R, or Conditional Field Reconstruction. CFS-R handles a different problem: sometimes the answer is not one memory. It is several partial memories that only make sense together.

So instead of only pulling:
“the kitchen budget was $40k”

OpenMind can also bring in:
“the cabinets were half the budget”
“Taylor wanted to wait”
“the contractor changed the estimate”
“you were stressed because the timing was bad”

That is the difference between remembering a fact and remembering the context around it.

I respect where AI companion memory started. The whole space has been moving toward better continuity for years. I just think the next evolution is memory that understands connection, emotional weight, and evidence across time.

That is what I’m trying to build with OpenMind.

We also have things like emotional recall, adaptive pacing, character consistency, multi-character chat, vision chat, voice, and a memory highlighter that lets users see which memories influenced a response.

Still early. Still improving constantly.

But the goal is simple:

I want OpenMind to be the AI companion platform where memory actually matters.

u/mauro8342 — 3 days ago

CFS: Conditional Field Subtraction

CFS selects relevant candidates by penalizing regions already covered by previous picks.
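
The gist at the bottom has the full implementation. As a rough sketch of that one-line description, greedy selection with a subtracted-coverage penalty looks something like this (the alpha weight and the max-overlap penalty are my simplification; written this way it is close to MMR, and the conditional part of the real operator is presumably what differs):

import numpy as np

def cfs_pick(query_vec, cand_vecs, k=10, alpha=0.5):
    """Greedy selection: relevance to the query minus a penalty for how
    strongly a candidate sits in the field already covered by previous
    picks. Illustrative simplification, not the shipped operator."""
    cand = cand_vecs / np.linalg.norm(cand_vecs, axis=1, keepdims=True)
    q = query_vec / np.linalg.norm(query_vec)
    relevance = cand @ q                  # cosine similarity to the query
    covered = np.zeros(len(cand))         # similarity to the covered field
    picked = []
    for _ in range(k):
        scores = relevance - alpha * covered
        scores[picked] = -np.inf          # never re-pick
        best = int(np.argmax(scores))
        picked.append(best)
        covered = np.maximum(covered, cand @ cand[best])
    return picked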

Results on retrieval ranking:

baseline cosine top-K:     NDCG@10 0.5123, Recall@10 0.6924
mem0 additive fusion:      NDCG@10 0.4903, Recall@10 0.6625
rrf(cosine, BM25):         NDCG@10 0.5196, Recall@10 0.6989
rrf(cosine, cos2, BM25):   NDCG@10 0.5278, Recall@10 0.7060
rrf(cosine, BM25, CFS):    NDCG@10 0.5311, Recall@10 0.7168

Against mem0’s additive fusion, rrf(cosine, BM25, CFS) improves retrieval ranking by +4.08 pp NDCG@10 and +5.43 pp Recall@10.

Against rrf(cosine, BM25), adding CFS contributes +1.15 pp NDCG@10 and +1.79 pp Recall@10.
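
For anyone who wants to sanity-check the numbers, the two metrics are straightforward. A sketch assuming binary relevance labels, which is my assumption about the LoCoMo ground truth:

import numpy as np

def ndcg_at_k(ranked_ids, relevant_ids, k=10):
    """Binary-relevance NDCG@k: DCG of the ranking over the DCG of an
    ideal ranking that places all relevant items first."""
    gains = [1.0 if d in relevant_ids else 0.0 for d in ranked_ids[:k]]
    dcg = sum(g / np.log2(i + 2) for i, g in enumerate(gains))
    ideal = sum(1.0 / np.log2(i + 2)
                for i in range(min(len(relevant_ids), k)))
    return dcg / ideal if ideal > 0 else 0.0

def recall_at_k(ranked_ids, relevant_ids, k=10):
    """Fraction of the relevant set that appears in the top k."""
    hits = len(set(ranked_ids[:k]) & set(relevant_ids))
    return hits / len(relevant_ids) if relevant_ids else 0.0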

https://gist.github.com/M-Garcia22/ff4ec80f5a08ca2fd9234bcc35804d1c

medium.com
u/mauro8342 — 6 days ago