u/Lazy-Kangaroo-573

[Images 1–5: screenshots]

How I solved "Conflict of Laws" in a financial RAG — ITA 1961 vs ITA 2025 parallel retrieval with graceful degradation [with screenshots]

Previous posts covered the 8-node LangGraph architecture and table extraction. This one is about a different problem I hadn't seen discussed here:

What happens when two valid versions of the same law exist simultaneously?

India currently has:
- Income Tax Act 1961 (still operative)
- Income Tax Act 2025 (new regime, effective FY 2026-27)

Both are valid. Both answer "tax slab" queries differently. A naive RAG picks one. Mine picks both and reconciles.

Parallel-Firing Intent Classifier: Node 1 (Classifier) doesn't just route — it fires multiple retrieval intents simultaneously:

→ ITA 1961 namespace

→ ITA 2025 namespace

Chunk-level metadata tags resolve which regime applies to the specific query, so the version conflict is resolved before the LLM generates anything.
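A minimal sketch of the fire-both-then-reconcile step, assuming an async vector store with one namespace per Act. `search_namespace`, the `Chunk` fields, and the canned results are all illustrative stand-ins, not the real schema:

```python
import asyncio
from dataclasses import dataclass

# Illustrative chunk record; field names are assumptions.
@dataclass
class Chunk:
    text: str
    regime: str   # chunk-level metadata tag: "ITA-1961" or "ITA-2025"
    score: float

# Stand-in for a namespace-scoped vector-store query; returns canned
# chunks so the sketch runs without a live index.
async def search_namespace(namespace: str, query: str) -> list[Chunk]:
    fake_index = {
        "ita-1961": [Chunk("Sec 192: TDS on salary ...", "ITA-1961", 0.82)],
        "ita-2025": [Chunk("Sec 392: TDS on salary ...", "ITA-2025", 0.79)],
    }
    return fake_index.get(namespace, [])

async def parallel_retrieve(query: str) -> dict[str, list[Chunk]]:
    # Fire both regimes simultaneously instead of routing to one namespace.
    r1961, r2025 = await asyncio.gather(
        search_namespace("ita-1961", query),
        search_namespace("ita-2025", query),
    )
    return {"ITA-1961": r1961, "ITA-2025": r2025}

def reconcile(results: dict[str, list[Chunk]]) -> str:
    # Resolve the version conflict before generation: every chunk keeps
    # its regime label, so the generator can't mix the Acts silently.
    return "\n".join(
        f"[{regime}] {c.text}"
        for regime, chunks in results.items()
        for c in chunks
    )

context = reconcile(asyncio.run(parallel_retrieve("TDS on salary")))
print(context)
```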

The generator receives pre-reconciled context.

---

Two honest behaviors, both intentional:

Behavior 1 — Document indexed (screenshot):
- Section 392, TDS on Salary
- 8 sources cited, with page-level attribution
- ITA 1961 + ITA 2025 cross-referenced
- 61% confidence score
- Response grounded 100% in retrieved chunks
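The post doesn't say how the confidence score or the citation line are computed, so here is one hedged guess: average the retrieval similarity scores of the cited chunks, and render (source, page) pairs for attribution. Both helpers (`confidence`, `cite`) are hypothetical:

```python
# Hypothetical scoring: averages similarity scores into a percentage.
def confidence(scores: list[float]) -> int:
    return round(100 * sum(scores) / len(scores)) if scores else 0

# Page-level attribution: (source, page) pairs -> one citation line.
def cite(sources: list[tuple[str, int]]) -> str:
    return "; ".join(f"{name}, p.{page}" for name, page in sources)

print(confidence([0.55, 0.61, 0.67]))  # 61
print(cite([("ITA 1961 Sec 192", 88), ("ITA 2025 Sec 392", 120)]))
```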

Behavior 2 — Document NOT indexed (screenshot):
- 0 chunks fetched
- No hallucination, no fake slabs
- Graceful degradation: general knowledge used transparently, with "official context unavailable" flagged explicitly
- User not left empty-handed, and not given dangerous data
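A sketch of the two-tier fallback, assuming the retriever returns an empty list when the document isn't indexed. The return-dict fields are illustrative:

```python
def answer(query: str, chunks: list[str]) -> dict:
    # Tier 1: the index returned context, so the answer stays grounded.
    if chunks:
        return {"grounded": True, "notice": None,
                "context": "\n".join(chunks)}
    # Tier 2: 0 chunks fetched. Fall back to general knowledge, flagged
    # loudly, rather than hallucinating slabs or section numbers.
    return {"grounded": False,
            "notice": "official context unavailable",
            "context": ""}

print(answer("tax slab FY 2026-27", []))
```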

This is an intentional two-tier architecture:
- Render free tier: light index, production stable
- Local 16GB: full Acts indexed, heavy retrieval

> Note: the italic text in the "Agentic Logic" box isn't UI decoration. It's the Classifier node's real-time chain-of-thought, fired before any retrieval happens.
Most RAG systems are black boxes — query goes in, answer comes out, you have no idea why. This exposes the reasoning layer:
- What the query intent is
- Which Act to target
- What retrieval scope to apply
This is Agentic Reasoning, not just routing.
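The exposed reasoning layer could be modeled as a small structured trace like this. The dataclass fields and the keyword heuristic are stand-ins for the real LLM classifier, which the post doesn't show:

```python
from dataclasses import dataclass

# Illustrative shape for the surfaced reasoning; real field names may differ.
@dataclass
class ClassifierTrace:
    intent: str             # what the query is asking for
    target_acts: list[str]  # which Act(s) to retrieve from
    scope: str              # retrieval scope to apply

def classify(query: str) -> ClassifierTrace:
    # Toy keyword heuristic standing in for the LLM classifier: rate,
    # slab, and TDS queries differ across regimes, so both Acts fire.
    both = any(k in query.lower() for k in ("slab", "tds", "rate"))
    return ClassifierTrace(
        intent="rate-lookup" if both else "definition",
        target_acts=["ITA 1961", "ITA 2025"] if both else ["ITA 1961"],
        scope="section-level",
    )

print(classify("TDS on salary"))  # trace surfaced to the UI before retrieval
```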

AMA on the conflict resolution logic or the graceful degradation implementation.

u/Lazy-Kangaroo-573 — 20 hours ago