







We tracked AI citations across our enterprise clients for 90 days. The pattern surprised us
About six months ago, my team at MonsterClaw started taking AEO/GEO seriously for our enterprise clients. Not the LinkedIn-thread version of "optimize for AI." Actual work: restructuring content for grounding, fixing entity signals, rebuilding internal linking for passage retrieval. The boring stuff.
Wanted to share what 90 days of data looks like, because I haven't seen many real AI citation numbers floating around this sub yet. Client names withheld for obvious reasons.
Site 1 — DTC ecom (vape/disposable space):
- 25.1K total citations in 3 months on Bing's AI Performance tab
- Top grounding query: "Elf Bar customer experience review" — 1,000+ citations alone
- Pattern: Bing AI is grounding heavily on review and evaluation intent, not raw product keywords. "Elf Bar smoke free products evaluation" pulled 557 citations. "Vape brands" pulled 691.
Site 2 — animation/video production B2B:
- 10K total citations over the same window
- One query — "best animation studios in 2026" — pulled 11.8K grounding hits across the period
- Lesson: listicle-intent + future-year modifier is doing absurd work in LLM grounding right now
Site 3 — a larger client (can't share vertical):
- 76.7K citations in 90 days, ~104 avg cited pages per day
- Steadier curve, less spike-driven, because the topical authority is deeper
I cross-referenced everything against Semrush's AI Search tab and Ahrefs' AI citation data. The triangulation matters because each tool sees a different slice:
- Bing Webmaster Tools = Microsoft Copilot + partners (the actual ground truth for one ecosystem, free, criminally underused)
- Semrush AI Search = breaks it down by ChatGPT, AI Overview, AI Mode, Gemini separately
- Ahrefs = caught a 0 → 640 jump in Grok citations on one site that the others didn't surface
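For anyone wanting to do the same triangulation: this is not our actual tooling, just a minimal sketch of lining up per-query counts from multiple exports. The tool names and numbers below are placeholders in the shape of the three sources; swap in whatever your CSV exports actually give you.

```python
from collections import defaultdict

def merge_citation_counts(sources):
    """Combine per-query citation counts from multiple tool exports.

    `sources` maps a tool name to a list of (query, count) rows,
    e.g. parsed from each tool's CSV export. Returns
    {query: {tool: count}} so gaps (a query one tool surfaces
    and another doesn't) jump out immediately.
    """
    merged = defaultdict(dict)
    for tool, rows in sources.items():
        for query, count in rows:
            merged[query][tool] = merged[query].get(tool, 0) + count
    return dict(merged)

# Placeholder data in the shape of the three exports
sources = {
    "bing_wmt": [("best animation studios in 2026", 11800)],
    "semrush": [("best animation studios in 2026", 9400)],
    "ahrefs": [("best animation studios in 2026", 640)],
}
merged = merge_citation_counts(sources)
print(merged["best animation studios in 2026"])
```

The point of merging by query rather than by tool is exactly the Grok case above: a jump that only one source sees shows up as a lopsided row instead of getting averaged away.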
What actually moved the needle (the part nobody talks about):
- Comparison and "evaluation" content gets cited far more than product content in AI grounding. Our product pages don't get cited. Our "X vs Y" and "[brand] review" pages do. Heavily.
- Entity consistency across the site matters more than I expected. Same brand name, same product names, same spec language across every page. Models seem to grade you on internal coherence.
- The freshness signal is real, but not the way people think. It's not about publishing dates; it's about the content referencing recent events, recent product versions, and current-year terms. "Best animation studios in 2026" is doing the work, not the publish date.
- One well-structured page can carry a domain. That 11.8K query on Site 2 is one URL. One. The rest of the site has decent citation distribution but that single page is roughly 30% of total AI visibility.
- Bing's AI Performance tab is the cheapest GEO intelligence on the planet right now (it's free), and most agencies I've talked to don't even know it exists.
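On the entity-consistency point: the cheapest way to audit it is just counting name-spelling variants across your pages. Rough sketch below; the page texts and variant spellings are made up, and the check is deliberately case-sensitive because "Elf Bar" vs "ElfBar" is exactly the kind of internal incoherence that seems to hurt.

```python
import re
from collections import Counter

def count_name_variants(pages, variants):
    """Count how often each spelling variant of a brand/product
    name appears across a set of page texts. Feed it every
    inconsistent spelling you suspect is floating around the site.
    """
    counts = Counter()
    for text in pages:
        for variant in variants:
            # re.escape so variants with spaces/punctuation match literally
            counts[variant] += len(re.findall(re.escape(variant), text))
    return counts

# Hypothetical page snippets, not real client content
pages = [
    "Elf Bar review: the Elf Bar BC5000 lasts about a week.",
    "Our ElfBar guide covers every elfbar flavor.",
]
print(count_name_variants(pages, ["Elf Bar", "ElfBar", "elfbar"]))
```

If that Counter comes back with more than one nonzero variant, you've found pages to normalize. Same trick works for product names and spec language.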
Curious what other folks are tracking. Anyone running citations as a KPI for clients yet, or is it still a "nice to have" metric on your dashboards?
Happy to go deeper on any of the above in the comments.