u/pauliusztin

From 17 node types to 6: my 11-step GraphRAG pipeline, what worked, and what's still broken

While building a financial assistant for an SF start-up, we learned that AI frameworks add complexity without value. When I started building a personal assistant with GraphRAG, I carried that lesson but still tried LangChain's MongoDBGraphStore. It gave me a working knowledge graph in 10 minutes.

Then I looked at the data. I had 17 node types and 34 relationship types from just 5 documents, including three versions of "part of". GraphRAG is a data modeling problem, not a retrieval problem.

The attached diagram shows the full 11-step pipeline I ended up with. Here is a walkthrough of what you can learn from each step.

In steps 1 and 2 of the data pipeline, raw sources go through an Extract, Transform, Load (ETL) process and land as documents in a MongoDB data warehouse. Each document stores the source type, URI, content, and metadata.
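As a rough sketch, one warehouse document could look like this (the post only names the four top-level fields; everything inside them here is illustrative):

```python
from datetime import datetime, timezone

# Hypothetical shape of one warehouse document after ETL.
# The post names four fields: source type, URI, content, and metadata;
# the concrete values and nested keys below are made up for illustration.
doc = {
    "source_type": "notion",  # e.g. notion, gmail, pdf
    "uri": "notion://workspace/page-123",
    "content": "Alice finished the Q3 report and wants to write a book.",
    "metadata": {
        "ingested_at": datetime.now(timezone.utc).isoformat(),
        "etl_version": "0.1",  # assumed field, not from the post
    },
}
```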

In step 3, we clean the documents and split them into token-bounded chunks. We started with 512-token chunks and a 64-token overlap, though we still need to test other settings.
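The sliding-window chunking above is simple to sketch. This is a minimal version over a pre-tokenized list (the real pipeline would tokenize first and carry metadata along):

```python
def chunk_tokens(tokens, size=512, overlap=64):
    """Split a token list into windows of `size` tokens, where each
    window overlaps the previous one by `overlap` tokens."""
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    step = size - overlap
    chunks = []
    for start in range(0, len(tokens), step):
        chunks.append(tokens[start:start + size])
        if start + size >= len(tokens):
            break  # last window already reached the end
    return chunks

# Toy example with small numbers: 10 tokens, window 4, overlap 1.
print(chunk_tokens(list(range(10)), size=4, overlap=1))
# -> [[0, 1, 2, 3], [3, 4, 5, 6], [6, 7, 8, 9]]
```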

Step 4 handles graph extraction. We defined a strict ontology: a formal contract specifying exactly which categories and relationships may exist in your data. Ours has 6 node types and 8 edge types, and the LLM can only extract what the ontology allows.

For example, if it outputs a PERSON to TASK connection with an EXPERIENCED edge, the pipeline rejects it. EXPERIENCED must connect a PERSON to an EPISODE.
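That rejection rule can be enforced with a plain lookup table. A sketch, using only the node types and the handful of edge types named in this post (the real ontology has 8 edge types, and some edges may allow more than one source or target type):

```python
# Node types from the post; the edge table below is partial and assumed.
NODE_TYPES = {"PERSON", "TASK", "EPISODE", "PREFERENCE", "DOCUMENT", "CHUNK"}

EDGE_CONSTRAINTS = {
    # edge type -> (allowed source type, allowed target type)
    "EXPERIENCED": ("PERSON", "EPISODE"),
    "TODO": ("PERSON", "TASK"),
    "PART_OF": ("CHUNK", "DOCUMENT"),
    "NEXT": ("CHUNK", "CHUNK"),
    "MENTIONS": ("CHUNK", "PERSON"),  # assumption: target type may vary
}

def validate_edge(edge_type, src_type, dst_type):
    """Reject any extracted edge the ontology does not allow."""
    allowed = EDGE_CONSTRAINTS.get(edge_type)
    if allowed is None:
        return False  # unknown edge type: always rejected
    return (src_type, dst_type) == allowed

print(validate_edge("EXPERIENCED", "PERSON", "TASK"))     # False: rejected
print(validate_edge("EXPERIENCED", "PERSON", "EPISODE"))  # True: accepted
```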

We also split LLM extraction from deterministic extraction. We create structural entries like Document or Chunk nodes without LLM calls.
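A sketch of that deterministic side, assuming the "kind" field and ID scheme described later in the post (exact field names are my guess):

```python
def build_structural_graph(doc_id, chunks):
    """Create DOCUMENT/CHUNK nodes and PART_OF/NEXT edges purely from
    document structure -- no LLM call involved."""
    doc_node_id = f"document:{doc_id}"
    nodes = [{"_id": doc_node_id, "kind": "node", "type": "DOCUMENT"}]
    edges = []
    for i, text in enumerate(chunks):
        cid = f"chunk:{doc_id}:{i}"
        nodes.append({"_id": cid, "kind": "node", "type": "CHUNK", "content": text})
        # Every chunk is PART_OF its document.
        edges.append({"_id": f"{cid}|part_of|{doc_node_id}", "kind": "edge",
                      "type": "PART_OF", "src": cid, "dst": doc_node_id})
        # Consecutive chunks are linked by NEXT edges.
        if i > 0:
            prev = f"chunk:{doc_id}:{i - 1}"
            edges.append({"_id": f"{prev}|next|{cid}", "kind": "edge",
                          "type": "NEXT", "src": prev, "dst": cid})
    return nodes, edges
```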

Step 5, normalization, turned out to be the hardest part. We use a three-phase deduplication process: in-memory fuzzy matching, cross-document resolution against MongoDB, and edge remapping.
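A toy version of the first phase, using stdlib `difflib` as the fuzzy matcher (the actual matcher and threshold are assumptions; phases two and three would then look up candidates in MongoDB and rewrite edge endpoints):

```python
from difflib import SequenceMatcher

def fuzzy_dedup(names, threshold=0.85):
    """Phase 1: in-memory fuzzy matching. Maps each extracted name to a
    canonical representative seen earlier in the batch."""
    canonical = {}  # extracted name -> representative
    reps = []       # representatives chosen so far
    for name in names:
        key = name.lower().strip()
        match = None
        for rep in reps:
            if SequenceMatcher(None, key, rep.lower()).ratio() >= threshold:
                match = rep
                break
        if match is None:
            reps.append(name)
            match = name
        canonical[name] = match
    return canonical

print(fuzzy_dedup(["Alice Smith", "alice smith", "Bob"]))
# -> {'Alice Smith': 'Alice Smith', 'alice smith': 'Alice Smith', 'Bob': 'Bob'}
```

Phase three then remaps every edge's `src`/`dst` through this `canonical` table so edges point at the surviving node.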

In step 6, we batch-embed the nodes. The system uses a mock for tests, Sentence Transformers for development, and the Voyage API for production.
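One way to wire that environment switch, sketched against the public SDKs (the model names and the exact call shapes are assumptions, not taken from the post):

```python
from typing import Callable, List, Sequence

def get_embedder(env: str) -> Callable[[Sequence[str]], List[List[float]]]:
    """Return a batch-embedding function for the given environment."""
    if env == "test":
        # Deterministic mock: fixed-size zero vectors, no network or model.
        return lambda texts: [[0.0] * 8 for _ in texts]
    if env == "dev":
        from sentence_transformers import SentenceTransformer
        model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed model
        return lambda texts: model.encode(list(texts)).tolist()
    if env == "prod":
        import voyageai
        client = voyageai.Client()  # reads VOYAGE_API_KEY from the env
        return lambda texts: client.embed(list(texts), model="voyage-3").embeddings
    raise ValueError(f"unknown environment: {env}")

embed = get_embedder("test")
vectors = embed(["hello", "world"])
```

Keeping the switch behind one factory means the pipeline code never knows which backend it is talking to.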

In steps 7 and 8, nodes and edges are stored in a single MongoDB collection as unified memory. We use deterministic string IDs like "person:alice" to prevent duplicates. MongoDB handles documents, $vectorSearch, $text, and $graphLookup in one aggregation pipeline; $graphLookup natively traverses connected graph data directly in the database. You don't need Neo4j + Pinecone + Postgres for most agent use cases. A single database like MongoDB gets the job done, and with sharding you can scale it to a billion records.
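A minimal sketch of what such a combined pipeline could look like as a pymongo aggregation (the collection name, index name, and field names are all assumptions):

```python
def hybrid_pipeline(query_vector, max_hops=2, k=5):
    """Build an aggregation that finds seed nodes by vector similarity,
    then expands outward through edges in the same unified collection."""
    return [
        {"$vectorSearch": {
            "index": "vector_index",      # assumed Atlas index name
            "path": "embedding",
            "queryVector": query_vector,
            "numCandidates": 100,
            "limit": k,
        }},
        {"$graphLookup": {
            "from": "memory",             # nodes and edges live together
            "startWith": "$_id",
            "connectFromField": "dst",
            "connectToField": "src",
            "as": "neighborhood",
            "maxDepth": max_hops - 1,     # maxDepth 0 means one hop
            "restrictSearchWithMatch": {"kind": "edge"},
        }},
    ]

# The resulting list would be passed to collection.aggregate(...).
pipeline = hybrid_pipeline([0.1] * 8)
```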

Steps 9 through 11 cover retrieval. The agent calls tools through an MCP server: a search-memory tool for hybrid vector, text, and graph expansion, and a query-memory tool that turns natural language into MongoDB aggregations. The agent also uses ingest tools to write back to the database for continual learning.

Here are a few things I am still struggling with and would love your opinion on:

  • How are you handling entity/relationship resolution across documents?
  • What helped you the most to optimize the extraction of entities/relationships using LLMs?
  • How do you keep embeddings in sync after graph updates?

Also, while building my personal assistant, I have been writing about this system on LinkedIn over the past few months. Here are the posts that go deeper into each piece:

P.S. I am also planning to open-source the full repo soon.

TL;DR: Frameworks create messy graphs. Define a strict ontology, extract deterministically where possible, use a unified database, and accept that entity resolution will be painful.

u/pauliusztin — 8 hours ago

5 documents, 17 node types, 34 relationships. That's when I stopped using LangChain for GraphRAG.

While building a financial assistant for an SF start-up, we made the mistake of integrating multi-layered frameworks that added zero business value. LlamaIndex prompts broke on every upgrade, LiteLLM fell behind the latest Gemini features, and Retrieval-Augmented Generation (RAG) was overkill for our small dataset. We quickly learned to stop following trends and build from scratch when the tools do not fit.

Next, when I started building my personal assistant with GraphRAG, I carried that lesson forward. I tried LangChain's MongoDBGraphStore just to see what was out there, and it gave me a working knowledge graph in 10 minutes.

When I looked at the actual data, the LLM had produced 17 node types and 34 relationship types from just 5 documents, including three different versions of "part_of" alone. Frameworks make it easy to start but impossible to scale.

GraphRAG is a data modeling problem, not a retrieval problem. Most tutorials skip the ontology and let the model extract freely. That works at 10 documents but breaks at 1,000.

I switched to an ontology-first design. I defined 6 node types: PERSON, TASK, EPISODE, and PREFERENCE, plus structural DOCUMENT and CHUNK nodes. I also defined 8 edge types with strict constraints.

The LLM can only extract what the ontology allows. If it outputs a PERSON to TASK relationship with an EXPERIENCED edge, the pipeline rejects it: EXPERIENCED must connect a PERSON to an EPISODE.

I also split LLM extraction from deterministic code. The model identifies domain entities (Person, Task, Episode, Preference), while the pipeline programmatically creates structural entries like DOCUMENT and CHUNK nodes, along with PART_OF, NEXT, and MENTIONS edges, without any LLM calls.

For storage, I use a single collection in MongoDB. Nodes and edges live together, distinguished by a "kind" field. We use deterministic string IDs.

A node gets an ID like "person:alice", while an edge gets an ID like "person:alice|todo|task:write book". This prevents duplicates and ensures safe, repeatable updates.
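Those IDs are trivial to derive, which is exactly what makes re-ingestion idempotent. A sketch of the builders (the lowercasing/trimming rules are my assumption):

```python
def node_id(node_type: str, name: str) -> str:
    """Deterministic node ID, e.g. "person:alice"."""
    return f"{node_type.lower()}:{name.lower().strip()}"

def edge_id(src: str, edge_type: str, dst: str) -> str:
    """Deterministic edge ID, e.g. "person:alice|todo|task:write book"."""
    return f"{src}|{edge_type.lower()}|{dst}"

alice = node_id("PERSON", "Alice")
book = node_id("TASK", "Write Book")
print(edge_id(alice, "TODO", book))  # person:alice|todo|task:write book
```

With pymongo, writes then become idempotent upserts: `collection.update_one({"_id": _id}, {"$set": doc}, upsert=True)`, so re-processing the same source never creates a second copy.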

MongoDB handles documents, $vectorSearch, $graphLookup, and $text queries in one aggregation pipeline. Most agents just require user state, semantic retrieval, and bounded graph expansion of 2 to 3 hops. You do not want the extra complexity of multiple databases such as Neo4j + Pinecone + Postgres unless your system demands deep traversal (5+ hops) or billions of vectors. MongoDB keeps it simple while getting the job done.

The ingestion pipeline processes raw content into 512-token chunks with a 64-token overlap. The model pulls entities using the ontology schema in the prompt, and the code creates structural entries. Then we run a three-phase entity resolution process (in-memory dedup, cross-document resolution against MongoDB, and edge remapping).

At query time, we run hybrid retrieval using Reciprocal Rank Fusion (RRF) to find the "seed" nodes, then 2-3 hops from there to find relevant relationships.
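RRF itself is a few lines: each retriever contributes 1/(k + rank) per result, and the sums are sorted. A self-contained sketch (k=60 is the commonly used constant, not necessarily what this system uses):

```python
def rrf(rankings, k=60):
    """Reciprocal Rank Fusion: merge several ranked lists of node IDs.
    Each item scores sum(1 / (k + rank)) over every list it appears in."""
    scores = {}
    for ranking in rankings:
        for rank, node in enumerate(ranking, start=1):
            scores[node] = scores.get(node, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical results from the vector and text retrievers.
vector_hits = ["alice", "q3-report", "bob"]
text_hits = ["q3-report", "standup", "alice"]
print(rrf([vector_hits, text_hits]))
# -> ['q3-report', 'alice', 'standup', 'bob']
```

The top fused nodes become the seeds for the 2-3 hop graph expansion.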

I will be honest about what is still broken. Entity resolution is a nightmare. Fuzzy matching catches obvious duplicates but misses semantic equivalences like "Paul" versus "Paul Iusztin" versus "Iusztin, Paul".

Embeddings go stale after you update node properties. Extraction quality varies because cheaper models trade accuracy for cost.

Production GraphRAG with strict ontologies is still very early, and this is genuinely a work in progress.

Here are a few things I am still struggling with and would love your opinion on:

  • How are you handling entity/relationship resolution across documents?
  • What helped you the most to optimize the extraction of entities/relationships using LLMs?
  • How do you keep embeddings in sync after graph updates?

TL;DR: GraphRAG is a data modeling problem, not a retrieval problem. Design the ontology first, use a single MongoDB collection for nodes and edges, and accept that entity resolution is still the hardest unsolved piece.
