Is anyone else drowning in AI context management on large codebases?
Working on a fairly large Azure microservices system (.NET, 40+ services, 5+ years old). We've adopted AI coding assistants across the team and there's genuine productivity gain for individual tasks.
But there's a problem nobody seems to talk about: every new chat session is a blank slate.
Our codebase has years of accumulated decisions:
• We use a specific handler pattern for vendor integrations
• Auth service uses a specific cache-aside setup for historical reasons
• Service boundaries that look weird but make sense given our deployment constraints
• Interface conventions that all the senior engineers know but aren't written anywhere useful
When I open a new AI chat, none of that context exists. I either paste a context dump (expensive, eats token budget) or the AI generates code that's syntactically correct but architecturally wrong for our system.
We've tried:
• System prompts with architecture descriptions - partial help
• Cursor rules files - limited
• Just re-explaining every session - waste of time
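For concreteness, here's roughly the kind of thing we put in a rules file (contents are illustrative sketches of our conventions, not the real file; interface names are made up):

```
# .cursorrules (illustrative)
- Vendor integrations go through our handler pattern (e.g. an IVendorHandler
  interface); never call vendor SDKs directly from controllers.
- The auth service uses cache-aside; don't introduce write-through caching there.
- Service boundaries follow deployment constraints, not domain lines; don't
  suggest merging services even when they look redundant.
```

It helps for the rules you remember to write down, but the tacit stuff — the conventions only the senior engineers carry around — never makes it into the file.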
I'm actually building a tool to solve this (happy to share more if there's interest), but first I wanted to know: is this a widespread problem, or is it specific to how we work?
How are experienced devs handling context management with AI assistants on mature codebases?