u/AdEuphoric1638

▲ 2 r/SaaS

I built an institutional memory layer for AI agents and I'm looking for 3 people to break it. Free access, I'll do the setup for you.

Been building Lore for a month. The problem: every AI agent I deployed knew nothing about the company it was working in. Past decisions, internal policies, how situations were handled before. Blank slate every time.

Before anyone says it, this is not RAG over Slack.

RAG returns similar text chunks. Lore extracts discrete decision moments, structures them into a knowledge graph with causal relationships and bi-temporal versioning, and distills patterns into judgment rules agents query at runtime.

The difference in practice:

RAG: "here are 5 Slack messages mentioning refunds"

Lore: "your team approved full refunds for enterprise SLA breaches within 30 days, decided in March, confidence 0.92, supersedes the January policy"

That's not retrieval. That's institutional knowledge an agent can reason over.
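To make the shape of that concrete, here's roughly what a decision record looks like. Illustrative sketch only: the field names and dates are simplified stand-ins, not the actual schema.

```python
# Illustrative sketch of a Lore-style decision record. Field names
# and dates are simplified stand-ins, not the actual schema.
from dataclasses import dataclass, field

@dataclass
class Decision:
    statement: str        # the distilled rule an agent can act on
    decided_at: str       # when the team made the call (valid time)
    recorded_at: str      # when it was ingested (transaction time)
    confidence: float     # extraction confidence, 0..1
    supersedes: str | None = None  # id of the policy this one replaces
    caused_by: list[str] = field(default_factory=list)  # upstream event ids

refund_policy = Decision(
    statement="Full refund for enterprise SLA breaches within 30 days",
    decided_at="2025-03-12",
    recorded_at="2025-03-14",
    confidence=0.92,
    supersedes="refund-policy-2025-01",
    caused_by=["slack-thread-sla-escalation"],
)
```

Every piece of that answer is a field an agent can filter on, not prose it has to re-parse.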

Got validation from an IBM data scientist and enterprise agent builders who confirmed the problem is real. Now I need real data.

Looking for 3 people building or deploying internal AI agents who'll run it on actual company data and tell me honestly where it breaks.

Free access. I do the setup. You give me brutal feedback.

Comment or DM.

reddit.com
u/AdEuphoric1638 — 2 days ago
▲ 1 r/SaaS

So I deployed an AI agent for a client a few months ago.

It worked. Technically it worked fine. But every time someone asked it something company-specific (past decisions, internal policies, how a situation was handled before), it just had nothing. It would hallucinate or ask for context that should've already been there.

Everyone reaches for the same fix. Stuff it into the system prompt. Works until it doesn't. Context limits, stale data, nobody trusts it.

I've been building something called Lore for the past 3 weeks. Institutional memory as an API. Point it at Slack, Notion, or docs; it extracts decisions your team has actually made, builds judgment rules from patterns, and agents query it at runtime before they respond. Instead of being a smart day-one hire, your agent actually knows how your company thinks.
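Rough sketch of the runtime flow, with illustrative names rather than the actual API:

```python
# Illustrative names, not the actual API. `lore` is the memory client,
# `llm` is whatever model wrapper the agent already uses.
def answer(question: str, lore, llm) -> str:
    # Pull judgment rules relevant to this question from the memory layer.
    rules = lore.query(question, min_confidence=0.8)
    # Keep only rules that no later decision has superseded.
    active = [r for r in rules if r.superseded_by is None]
    # Hand the model distilled rules, not raw chat logs.
    context = "\n".join(
        f"- {r.statement} (confidence {r.confidence:.2f})" for r in active
    )
    return llm.complete(f"Company context:\n{context}\n\nQuestion: {question}")
```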

A few things under the hood I'd love feedback on:

  • R3Mem-style multi-level memory: episodic events roll up into semantic patterns, then into rules.
  • GAAMA-style concept nodes with dynamic taxonomy, so the graph evolves as company language evolves.
  • Bi-temporal modeling, so agents know what was true in January versus what changed in March (toy sketch below this list).
  • Causal event nodes linking decisions to what caused them and what they caused downstream.
  • Confidence scoring on every extracted decision.
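On the bi-temporal point, a toy illustration of the idea, not the real implementation:

```python
# Toy illustration of bi-temporal lookup, not the real implementation.
# Each rule carries a valid-time window, so "what did the company
# believe on date X" becomes a filter instead of a guess.
from datetime import date

rules = [
    {"rule": "Refunds need VP sign-off",
     "valid_from": date(2025, 1, 6), "valid_to": date(2025, 3, 12)},
    {"rule": "Full refund for enterprise SLA breaches within 30 days",
     "valid_from": date(2025, 3, 12), "valid_to": None},
]

def as_of(day: date) -> list[str]:
    return [r["rule"] for r in rules
            if r["valid_from"] <= day
            and (r["valid_to"] is None or day < r["valid_to"])]

print(as_of(date(2025, 2, 1)))  # January policy still in force
print(as_of(date(2025, 4, 1)))  # March decision has superseded it
```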

Still pre-launch, zero real users. Three questions for anyone who's actually built agents in production:

  1. Is this a real pain or do you solve it differently?
  2. What data source matters most to you: Slack, Notion, email, meeting recordings?
  3. What would make you trust extracted rules enough to let an agent act on them?

Not here to pitch. Genuinely trying to validate before I chase my first user.

https://preview.redd.it/gtzva4i1iwzg1.png?width=1647&format=png&auto=webp&s=b61b72fb64c80e02506ee3a275bc14f494e24c01

Link to waitlist in my profile bio

reddit.com
u/AdEuphoric1638 — 7 days ago
▲ 7 r/SpringAIDev+1 crossposts

Built an AI agent for a client. It was smart but completely clueless about their company. Been building a fix for 3 weeks. Is this a problem you've actually hit?

So I deployed an AI agent for a client a few months ago.

It worked. Like technically it worked fine. But every time someone asked it something company-specific (past decisions, internal policies, how they'd handled a situation before), it just had nothing. It would hallucinate or give a generic answer or ask for context that should've already been there.

The fix everyone reaches for is stuffing everything into the system prompt. Which works until it doesn't. You hit context limits, it gets stale, and you're manually maintaining a document that nobody trusts.

I'm a CS freshman and I've been building something on the side called Lore for about 3 weeks. Institutional memory as an API. You point it at your Slack or Notion or docs, it extracts decisions your team has made, builds judgment rules from patterns, and your agents can query it at runtime before they respond.

So instead of the agent being a smart day-one hire, it actually starts with company context.
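The extraction step, boiled down to a stand-in sketch (the prompt and JSON shape here are simplified guesses at the idea, not the production pipeline, and `llm` is whatever model wrapper you already have):

```python
# Simplified stand-in for the extraction step, not the production pipeline.
import json

EXTRACT_PROMPT = """From the thread below, extract any concrete decision.
Reply with JSON: {"decision": str, "decided_by": str, "date": str,
"confidence": float} or the literal null if no decision was made.

Thread:
<<THREAD>>"""

def extract_decision(thread: str, llm) -> dict | None:
    # <<THREAD>> placeholder avoids str.format choking on the JSON braces.
    raw = llm.complete(EXTRACT_PROMPT.replace("<<THREAD>>", thread))
    return json.loads(raw)  # json.loads("null") gives None: no decision found
```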

The architecture is the part I'm most interested in getting feedback on. A few things under the hood:

  • R3Mem-style multi-level memory: episodic events roll up into semantic patterns, which roll up into rules. Inspired by the R3Mem paper.
  • GAAMA-style concept nodes with dynamic taxonomy, so the graph isn't just static categories; it evolves as the company's language evolves
  • Bi-temporal modeling so you always know what the company believed at a given point in time, not just what's true now. Policy changed in February? The agent knows not to apply the old rule to new queries.
  • Causal event nodes so decisions aren't just stored; they're linked to what caused them and what they caused downstream
  • Semantic deduplication so you don't end up with 40 slightly different versions of the same decision (rough sketch after this list)
  • Confidence scoring on every extracted decision so agents know how much to trust what they're retrieving
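For the dedup bullet, here's the core idea in a few lines. `embed()` is whatever sentence-embedding model you plug in, and this isn't the production code:

```python
# Core idea of semantic dedup; embed() is any sentence-embedding model.
import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def dedupe(decisions: list[str], embed, threshold: float = 0.9) -> list[str]:
    kept, vectors = [], []
    for text in decisions:
        v = embed(text)
        # Skip anything near-identical in meaning to a decision we kept.
        if any(cosine(v, u) >= threshold for u in vectors):
            continue
        kept.append(text)
        vectors.append(v)
    return kept
```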

Still pre-launch. Haven't had a real user touch it yet. Before I go find one I wanted to ask people who've actually built agents in production:

  1. Is this a real pain or do you solve it some other way?
  2. What data source would matter most to you: Slack, Notion, email, something else?
  3. What would it take for you to actually trust the extracted rules enough to let an agent act on them?

Honest answers only. Happy to go deep on any part of the architecture if anyone's curious.

https://preview.redd.it/wo8hzusyiqzg1.png?width=1669&format=png&auto=webp&s=1948d300666e5f38881e88bc4b31d7122cb613b2

reddit.com