u/killerexelon

▲ 11 r/semanticweb+3 crossposts

Using knowledge graphs to stop searching code and documentation again and again, with help from Mnemo

This is what your codebase actually looks like.

2032 nodes. 2878 edges. 7 relationship types.

Every service. Every dependency. Every API. Every owner. Every connection your team built over years — visualised in one graph.
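To make "nodes, edges, relationship types" concrete, here is a minimal stdlib-only sketch of a typed dependency graph and a blast-radius query. All service names and relationship types below are illustrative, not Mnemo's actual schema:

```python
# Minimal sketch of a codebase knowledge graph with typed edges.
# Names and relationship types are hypothetical, not Mnemo's schema.
from collections import defaultdict

edges = [
    ("billing-service", "depends_on", "auth-service"),
    ("checkout-service", "depends_on", "billing-service"),
    ("billing-service", "exposes", "POST /invoices"),
    ("billing-service", "owned_by", "team-payments"),
]

# Reverse adjacency over dependency edges only
rdeps = defaultdict(set)
for src, rel, dst in edges:
    if rel == "depends_on":
        rdeps[dst].add(src)

def blast_radius(node):
    """Everything that transitively depends on `node`."""
    seen, stack = set(), [node]
    while stack:
        for dep in rdeps[stack.pop()]:
            if dep not in seen:
                seen.add(dep)
                stack.append(dep)
    return seen

print(blast_radius("auth-service"))  # {'billing-service', 'checkout-service'}
```

Scale that to a few thousand nodes and typed edges and you get the kind of "what breaks if I change this" answer an assistant can't produce from one open file.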

Most AI coding assistants see none of this.

They see the file you have open.
Maybe the files you paste in.
Nothing else.

So when they generate code, they generate it blind.
No knowledge of what depends on what.
No knowledge of what breaks if you change something.
No knowledge of the relationships your team spent years building.

This is the real problem with AI in enterprise development.
It's not capability. The models are powerful.

It's context. AI operates on a fraction of the knowledge your senior engineers carry in their heads.

Mnemo builds this knowledge graph automatically from your codebase.

Services and their boundaries.
APIs and their consumers.
Dependencies and their blast radius.
Files and their owners.
Decisions and their history.

And then makes all of it available to your AI assistant — automatically, on every session.
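Given the repo name, this presumably happens over an MCP server, in which case hooking it into an assistant would follow the standard MCP client config pattern. The command name and flags below are hypothetical, a sketch of the shape rather than Mnemo's documented setup:

```json
{
  "mcpServers": {
    "mnemo": {
      "command": "mnemo-mcp",
      "args": ["--repo", "/path/to/your/codebase"]
    }
  }
}
```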

No more blind generation.
No more code that compiles but breaks something downstream.
No more AI that doesn't know why things are the way they are.

This is what AI-assisted development should actually look like.

🔗 github.com/Mnemo-mcp/Mnemo

Drop a comment if you've ever had AI break something it didn't know existed.

u/killerexelon — 23 hours ago
▲ 4 r/ContextEngineering+3 crossposts

Is anyone else drowning in AI context management on large codebases?

Working on a fairly large Azure microservices system (.NET, 40+ services, 5+ years old). We've adopted AI coding assistants across the team and there's genuine productivity gain for individual tasks.
 
But there's a problem nobody seems to talk about: every new chat session is a blank slate.
 
Our codebase has years of accumulated decisions:
• We use a specific handler pattern for vendor integrations
• Auth service has a specific cache-aside setup, for historical reasons
• Service boundaries that look weird but make sense given our deployment constraints
• Interface conventions that all the senior engineers know but that aren't written down anywhere useful
 
When I open a new AI chat, none of that context exists. I either paste a context dump (expensive, eats token budget) or the AI generates code that's syntactically correct but architecturally wrong for our system.
 
We've tried:
• System prompts with architecture descriptions - partial help
• Cursor rules files - limited
• Just re-explaining every session - waste of time
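For what it's worth, the rules-file approach can be pushed a bit further before it breaks down. A hypothetical Cursor rules file capturing the kind of conventions listed above might look like this (paths and rules are made up for illustration):

```markdown
---
description: Architecture conventions for this codebase
alwaysApply: true
---
- Vendor integrations go through the handler pattern in src/Integrations/Handlers/.
- Auth service uses cache-aside caching; do not add write-through caching there.
- Service boundaries follow deployment units; do not merge services that share a database.
```

The limitation is exactly what the post describes: this is a static, hand-maintained summary, not the actual dependency and ownership data.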
 
I'm actually building a tool to solve this (happy to share more if there's interest), but first I wanted to know: is this a widespread problem, or specific to how we work?
 
How are experienced devs handling context management with AI assistants on mature codebases?

u/killerexelon — 3 days ago