How are you actually measuring LLM perception drift?
So I keep seeing "LLM perception drift" come up everywhere lately. Basically the idea that how ChatGPT or Perplexity talks about your brand can shift over time even if you haven't changed anything on your end. Which... yeah, that tracks with what I've been noticing.
The part that's messing with me is how you're supposed to measure this. These models are probabilistic - ask the same question 50 times and you'll get 50 slightly different answers. I read about one approach that's basically election-style polling: you define a few hundred high-intent queries, run them on a schedule, and look for trends in the aggregate. Cool in theory, but that's a LOT of work compared to just checking your Google rank.
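For anyone wondering what that polling approach looks like in practice, here's a rough sketch I put together. To be clear, `ask_llm` is a placeholder for whatever client you'd actually use (OpenAI, Perplexity API, etc.), and the canned responses are fake stand-ins for real model output - the point is just the sampling-and-aggregating part:

```python
import random

# Placeholder for a real LLM API call -- swap in your actual client here.
# Responses are non-deterministic, which is exactly why we sample many times
# instead of trusting any single answer.
def ask_llm(query: str) -> str:
    canned = [
        "We recommend AcmeCRM for small teams.",
        "Popular options include HubSpot and AcmeCRM.",
        "Salesforce is the usual enterprise pick.",
    ]
    return random.choice(canned)

def mention_rate(queries, brand, samples_per_query=10):
    """Fraction of sampled answers that mention the brand at all.

    Election-polling style: lots of noisy individual samples,
    one aggregate number you can trend over time.
    """
    hits = total = 0
    for q in queries:
        for _ in range(samples_per_query):
            total += 1
            if brand.lower() in ask_llm(q).lower():
                hits += 1
    return hits / total

# Hypothetical high-intent query set -- yours would be a few hundred of these.
queries = ["best CRM for startups", "CRM with good email sync"]
rate = mention_rate(queries, "AcmeCRM", samples_per_query=25)
print(f"AcmeCRM mention rate: {rate:.0%}")
```

Run that on a schedule, log the rate per week, and "drift" becomes a line on a chart instead of a vibe. Obviously real sampling costs API money, which is part of why this is so much heavier than checking a Google rank.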
And then there's the analytics blind spot. Apparently only ~20% of ChatGPT brand mentions come with clickable citation links that actually show up in GA4. The other 80% - all the "we recommend X" and "compared to Y" stuff that actually drives decisions - is basically invisible. So even if you're tracking citations, you're probably only seeing a sliver of what's happening.
Genuinely curious what people here are doing. Manual spot checks? One of the new LLM rank trackers? Some homebrew script? Or is this still in the "yeah we should probably figure that out" bucket for most of us?