Open Source bookmarklet to inspect grounding queries and cited domains behind ChatGPT and Claude answers
I was trying to inspect what LLMs actually search before answering, not just the final output.
So I built a browser bookmarklet that opens a separate terminal-style view and shows:
- grounding/fan-out queries
- domain-scoped vs open-web searches
- cited domains that survive into the final answer
- source concentration across retrieved results
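The source-concentration part can be sketched in a few lines. This is a hypothetical helper (not the repo's actual code): given the result URLs a model retrieved, it counts hits per domain and reports the share held by the most-cited one.

```javascript
// Hypothetical helper: measure how concentrated retrieved results
// are on a few domains. `urls` is an array of result URL strings.
function domainConcentration(urls) {
  const counts = {};
  for (const u of urls) {
    // Normalize "www." so www.example.com and example.com count together.
    const host = new URL(u).hostname.replace(/^www\./, "");
    counts[host] = (counts[host] || 0) + 1;
  }
  // Share of all results belonging to the single most-cited domain.
  const top = Math.max(...Object.values(counts));
  return { counts, topShare: top / urls.length };
}
```

A `topShare` near 1.0 means the answer leans almost entirely on one domain, which is usually the first thing worth noticing in a retrieval trace.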
It currently works with:
- ChatGPT live conversations
- Claude live conversations, with JSON import fallback when live access is not available
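For the JSON import fallback, the idea is just to walk the exported conversation and pull out the search tool calls. The shape below is an assumption for illustration (messages with `tool_use` content blocks carrying a `query` input); the real export schema may differ.

```javascript
// Sketch of a JSON import fallback. Assumes an export shaped like
// { messages: [{ content: [{ type: "tool_use", input: { query } }] }] } —
// this schema is illustrative, not the actual Claude export format.
function extractSearchQueries(exportJson) {
  const queries = [];
  for (const msg of exportJson.messages || []) {
    for (const block of msg.content || []) {
      if (block.type === "tool_use" && block.input && block.input.query) {
        queries.push(block.input.query);
      }
    }
  }
  return queries;
}
```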
I built it mainly for SEO/GEO/retrieval debugging. In many cases the interesting part is not the answer itself but:
- what queries the model fanned out into
- whether it used explicit site constraints
- which domains kept surfacing
- which sources actually made it into the response
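The "explicit site constraints" check, for example, is a simple split: queries carrying a `site:` operator are domain-scoped, everything else is open-web. A minimal sketch (hypothetical helper, not the repo's code):

```javascript
// Rough split of fan-out queries into domain-scoped vs open-web,
// keyed off an explicit "site:" operator in the query string.
function classifyQueries(queries) {
  const scoped = [];
  const open = [];
  for (const q of queries) {
    (/\bsite:\S+/.test(q) ? scoped : open).push(q);
  }
  return { scoped, open };
}
```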
I’m posting this mainly to get feedback on the approach:
- would you inspect anything else in the retrieval chain?
- what would you want to export?
- would Gemini/AI Mode support be useful?
If people are interested, I can share the repo in the comments (but I don't even know if I can post a link here...)
u/elPimps — 3 hours ago