
More bad news for the fabricated GEO claim that "AI researches and trusts brands based on x, y, z criteria" - which, to be honest, I doubt they could even admit to - the story they've spun is so long and nonsensical.
However - here's a study worth looking at, from Ahrefs - because so few SEOs (and zero GEOists) have the tools to do this kind of analysis: crawlers, a stored copy of the web's pages, rank history in Google.
Obviously, it validates r/SEO's long-held position that to appear in an LLM, you need to rank in Google first, per Query Fan Out.
Some interesting myth debunking:
GEO Myth: LLMs/AI love "fresh" content.
Reality: the average cited page is 500 days old.
>What this all means for being “citable”
>The 1.4 million prompts paint a pretty clear picture. ChatGPT is an aggressive editor. It favors its general search index, uses semantic similarity to select and cite sources, and treats Reddit as a textbook it’s embarrassed to admit it read.
But exactly what you need to do to get into the final assembly (the synthesized result) is still up for debate.
>ChatGPT uses this data to decide which pages are worth opening and eventually citing in its response.
>That means there’s a gatekeeping layer before ChatGPT opens and reads any of your actual page content. The title, snippet, and URL are doing the heavy lifting in that initial decision.
>So we wanted to know: what actually influences that decision? Does higher semantic similarity between a page’s retrieval data and the user query increase citation likelihood? Which fields matter most? Do human-readable URLs outperform opaque ones?
>To find out, we analyzed 1.4 million ChatGPT 5.2 prompts from February 2025 (desktop) with the help of Ahrefs data scientist Xibeijia Guan.
>But before we get into the findings, you need to understand how ChatGPT actually gathers its sources—because not all URLs enter the system the same way.
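The selection step the study describes - scoring title, snippet, and URL against the user query by semantic similarity before any page is opened - can be sketched roughly. This is a minimal illustration only: it uses a plain bag-of-words cosine similarity as a stand-in for whatever embedding model ChatGPT actually uses, and the field weights are invented for the example, not taken from the study.

```python
import math
from collections import Counter

def cosine_similarity(a: str, b: str) -> float:
    """Cosine similarity between two texts using bag-of-words term counts."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def score_result(query: str, title: str, snippet: str, url: str) -> float:
    # Hypothetical weights: the study suggests title, snippet, and URL all
    # factor into the "open this page?" decision, but the real model and
    # weighting are unknown. Human-readable URLs tokenize into real words,
    # which is one plausible reason they might outperform opaque ones.
    url_text = url.replace("-", " ").replace("/", " ")
    return (0.5 * cosine_similarity(query, title)
            + 0.4 * cosine_similarity(query, snippet)
            + 0.1 * cosine_similarity(query, url_text))

query = "best running shoes for flat feet"
results = [
    ("Best Running Shoes for Flat Feet (2025)",
     "We tested 20 shoes for overpronation and arch support.",
     "example.com/best-running-shoes-flat-feet"),
    ("Shoe Reviews",
     "All our latest footwear coverage.",
     "example.com/p?id=8841"),
]

# Rank candidate search results by similarity to the query - the
# "gatekeeping layer" that decides which pages get opened at all.
ranked = sorted(results, key=lambda r: score_result(query, *r), reverse=True)
for title, _snippet, url in ranked:
    print(f"{score_result(query, title, _snippet, url):.3f}  {title}")
```

Under this toy scoring, the descriptive title and human-readable URL win easily; the point is only to make the mechanism concrete, not to reproduce the study's actual model.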