Hey,
I've been frustrated with how traditional RAG handles complex queries. If your question requires 3+ reasoning hops — like "What decisions did the architecture team make last sprint that affect the auth module?" — vanilla RAG either misses chunks or hallucinates connections that don't exist.
The core issue: vector similarity retrieval treats your knowledge base as a flat pool of embeddings. It has no concept of relationships between entities.
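To make the "flat pool" point concrete, here is a minimal sketch of flat top-k retrieval (toy vectors, illustrative names — not kontext-brain-ts code): every chunk is just an embedding, and the only ranking signal is similarity to the query, so a chunk that is relevant only *through* another chunk never surfaces.

```typescript
// Flat top-k retrieval: similarity is the only signal; relationships
// between chunks are invisible. All names and vectors are illustrative.

type Chunk = { id: string; vec: number[] };

function cosine(a: number[], b: number[]): number {
  const dot = a.reduce((s, x, i) => s + x * b[i], 0);
  return dot / (Math.hypot(...a) * Math.hypot(...b));
}

function topK(query: number[], chunks: Chunk[], k: number): Chunk[] {
  return [...chunks]
    .sort((x, y) => cosine(query, y.vec) - cosine(query, x.vec))
    .slice(0, k);
}

const chunks: Chunk[] = [
  { id: "sprint-notes", vec: [0.9, 0.1] }, // close to the query
  { id: "auth-doc",     vec: [0.1, 0.9] }, // relevant only via a hop
];

// The auth doc matters because the sprint notes reference it,
// but pure similarity ranks it out of the top-k.
console.log(topK([1, 0], chunks, 1).map(c => c.id)); // ["sprint-notes"]
```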
What I built
kontext-brain-ts is a TypeScript-native library that replaces flat vector retrieval with ontology graph-based context navigation.
Instead of "find top-k similar chunks", it traverses a 3-layer ontology graph with configurable N-depth pipelines — so it can follow entity relationships across documents the same way a human analyst would.
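The traversal idea can be sketched in a few lines. This is a simplified illustration, not the actual kontext-brain-ts API: a toy entity graph plus a breadth-first walk to a configurable depth, collecting every entity reachable from a seed — exactly the hops that flat top-k retrieval never takes.

```typescript
// Illustrative multi-hop context gathering over an entity graph.
// Entity, graph, and traverse are hypothetical names for this sketch.

type Entity = { id: string; text: string; edges: string[] };

const graph: Record<string, Entity> = {
  "sprint-42":   { id: "sprint-42",   text: "Architecture team sprint notes", edges: ["decision-7"] },
  "decision-7":  { id: "decision-7",  text: "Decision: adopt token rotation", edges: ["auth-module"] },
  "auth-module": { id: "auth-module", text: "Auth module design doc",         edges: [] },
};

// Breadth-first traversal up to maxDepth hops from the seed entity,
// returning every newly reached entity in hop order.
function traverse(seedId: string, maxDepth: number): Entity[] {
  const seen = new Set<string>([seedId]);
  const out: Entity[] = [];
  let frontier = [seedId];
  for (let depth = 0; depth < maxDepth && frontier.length > 0; depth++) {
    const next: string[] = [];
    for (const id of frontier) {
      for (const edge of graph[id].edges) {
        if (!seen.has(edge)) {
          seen.add(edge);
          next.push(edge);
          out.push(graph[edge]);
        }
      }
    }
    frontier = next;
  }
  return out;
}

// Two hops from the sprint notes reach the auth module.
console.log(traverse("sprint-42", 2).map(e => e.id)); // ["decision-7", "auth-module"]
```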
Key design decisions:
OCP-compliant (Open-Closed Principle) — navigation strategies and data sources are separated behind interfaces, so you can swap either without touching core logic
Built-in MCP (Model Context Protocol) adapters — Notion, Jira, GitHub, and Slack out of the box
TypeScript-native (a Kotlin/JVM version also exists if that's your stack)
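The interface separation in the first bullet can be sketched like this. The interface and class names here are hypothetical, chosen to illustrate the pattern rather than mirror the library's real API: the engine depends only on two interfaces, so a different strategy or source plugs in without any core changes.

```typescript
// Open-Closed sketch: the engine is closed for modification,
// open for extension via these two (hypothetical) interfaces.

interface NavigationStrategy {
  plan(query: string): string[]; // ordered entity ids to visit
}

interface DataSource {
  fetch(entityId: string): Promise<string>;
}

// One concrete strategy: naive keyword extraction (illustrative only).
class KeywordStrategy implements NavigationStrategy {
  plan(query: string): string[] {
    return query.toLowerCase().split(/\s+/).filter(w => w.length > 3);
  }
}

// One concrete source: an in-memory map standing in for Notion/Jira/etc.
class InMemorySource implements DataSource {
  constructor(private docs: Record<string, string>) {}
  async fetch(entityId: string): Promise<string> {
    return this.docs[entityId] ?? "";
  }
}

class ContextEngine {
  constructor(
    private strategy: NavigationStrategy,
    private source: DataSource,
  ) {}

  async gather(query: string): Promise<string[]> {
    const ids = this.strategy.plan(query);
    return Promise.all(ids.map(id => this.source.fetch(id)));
  }
}

const engine = new ContextEngine(
  new KeywordStrategy(),
  new InMemorySource({ auth: "Auth module doc" }),
);
engine.gather("auth decisions").then(chunks => console.log(chunks));
```

Swapping in a graph-walking strategy or a remote adapter is a constructor change, not an engine change — that is the Open-Closed Principle at work.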
Benchmark results
Tested against GraphRAG-Bench and MuSiQue (multi-hop QA datasets):
| Method        | Recall |
|---------------|--------|
| Vanilla RAG   | 0.73   |
| kontext-brain | 1.00   |
The gap is widest on the multi-hop cases (3-4 hops). Standard RAG simply doesn't traverse relationships between documents — kontext-brain does.
Who this is for
You're building an LLM app over structured knowledge (docs, tickets, codebase, wikis)
Your queries require reasoning across multiple documents, not just within one
You want something that's not Python-only (most graph RAG libs are — GraphRAG, LightRAG, Cognee, etc.)
Feedback very welcome, especially if you've worked with GraphRAG or LightRAG — curious how the traversal strategies compare in your use cases.
github.com/hj1105/kontext-brain-ts