u/BrightOpposite

Built a system to stop AI agents from losing context mid-task

I kept running into the same issue with LangChain-style agents:

  • they lose context after a few steps
  • or worse, they retrieve the wrong past information
  • multi-step tasks start drifting

Most fixes I tried didn’t really solve it:

  • bigger context windows
  • more embeddings
  • dumping everything into a vector DB

It still breaks.

So I started experimenting with a different approach:

Instead of treating memory as “everything that happened”,
I treat it as structured state the agent carries forward.
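
To make that concrete, here's roughly the shape of the state object I carry between steps. All names here are illustrative, not BaseGrid's actual API:

```python
from dataclasses import dataclass, field

# Rough sketch of "memory as structured state": the agent carries this
# object forward instead of replaying the whole transcript.
@dataclass
class AgentState:
    goal: str                                             # what the agent is working toward
    facts: dict[str, str] = field(default_factory=dict)   # long-term state it must not lose
    decisions: list[str] = field(default_factory=list)    # committed choices, not raw messages
    scratch: list[str] = field(default_factory=list)      # short-term chatter, safe to drop

    def commit(self, decision: str) -> None:
        # Persist the decision; the scratch buffer can then be cleared
        # without losing anything a later step depends on.
        self.decisions.append(decision)
        self.scratch.clear()
```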

What this looks like:

  • Separate short-term conversation from long-term state
  • Store decisions, not just messages
  • Control what gets persisted vs. what gets ignored
  • Base retrieval on relevance to the current step, not similarity alone (rough sketch below)
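
For that last bullet, here's a minimal sketch of what "relevance to the current step" can mean. The Item shape and the weights are assumptions for illustration, not how BaseGrid actually scores things:

```python
from dataclasses import dataclass

@dataclass
class Item:
    text: str
    step_tags: set[str]   # which workflow steps this item is relevant to
    similarity: float     # precomputed embedding similarity to the current query

def score(item: Item, current_step: str) -> float:
    # Items tagged for the step the agent is on win; similarity breaks
    # ties instead of driving the whole ranking. Weights are made up.
    step_bonus = 1.0 if current_step in item.step_tags else 0.0
    return 0.7 * step_bonus + 0.3 * item.similarity

def retrieve(items: list[Item], current_step: str, k: int = 3) -> list[Item]:
    return sorted(items, key=lambda it: score(it, current_step), reverse=True)[:k]
```

The point is just that step context gets a vote; the exact weights don't matter.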

Result:

Agents stay consistent across:

  • multi-step workflows
  • tool usage
  • delayed execution

I wrapped this into a small system called BaseGrid.

It’s still early, but for me it’s been far more reliable than the context-window and vector-DB approaches above.

👉 https://basegrid.io

Would love feedback from others building agents—especially if you’ve hit similar issues.
