u/Which_Work6245

Is 'ranking' dead?

8 months ago, 76% of pages cited in AI overviews were also in the top 10 organic results. Now, that number is 38%.

Ranking organically and being cited by AI are fundamentally different things, and the gap is widening.

This comes from an Ahrefs study across 863,000 keywords. In 8 months, the overlap between organic rankings and AI visibility has halved.

This is due to a mechanism called query fan-out.

When you type a single query, it gets decomposed into multiple sub-queries. Each of those surfaces entirely different sources, and the final answer is assembled from all those pieces.

Which means Google never "searches" for an answer to the query that you are tracking and producing content for.
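A minimal sketch of the mechanism in Python. Everything here - the sub-queries, the corpus, the function names - is a hypothetical stand-in for illustration, not Google's actual internals:

```python
def fan_out(query: str) -> list[str]:
    """Decompose one user query into system-generated sub-queries.
    In a real system these come from a model; hard-coded here."""
    return [
        f"what is {query}",
        f"{query} comparison",
        f"{query} pricing",
        f"{query} reviews",
    ]

def retrieve(sub_query: str) -> list[str]:
    """Stub retrieval: each sub-query surfaces its own set of sources."""
    corpus = {  # invented sources, purely illustrative
        "comparison": ["vendor-blog.com", "review-site.com"],
        "pricing": ["vendor-docs.com"],
        "reviews": ["reddit.com", "g2.com"],
    }
    return [src for key, srcs in corpus.items() if key in sub_query for src in srcs]

def answer(query: str) -> set[str]:
    """The final answer is assembled from ALL sub-query sources -
    the surface-level query itself is never 'searched' directly."""
    sources: set[str] = set()
    for sq in fan_out(query):
        sources.update(retrieve(sq))
    return sources

print(answer("project management software"))  # union of sources across all sub-queries
```

The point the sketch makes: the page you wrote for the surface-level query only gets surfaced if it happens to win one of the system-generated sub-queries.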

Most AEO strategy tries to ram a new challenge into existing SEO frameworks:

Find a query you want to rank for.
Write a piece of content for it.
Measure if it gets mentioned.

But even if a user enters that exact query, Google isn't going to "search" for an answer to it. It's going to do research and compile an answer.

You would need to "rank" for those sub-queries, not the surface-level one. The fan-out queries are system-generated and usually aren't queries with any search traffic. And even if you could predict them, just think how much content you'd need to produce to hit them all.

So how do we rank?

You don't. Stop trying. Ranking is an SEO term that has no place in AEO strategy.

AEO is about demonstrating fit. Influence the way AI systems view your brand and category - who you are, who you're for, and the trade-offs inherent in your product.

That is the most predictive way to ensure you're surfaced to buyers across all the different prompt variations. It's closer to a sales enablement problem than an SEO problem.

Except, unlike sales reps, AI will actually read your content.

u/Which_Work6245 — 1 day ago

Information Gain & Google's update

Google's March core update was extremely volatile, and brands that produced original research gained 22% in visibility.

Early analysis shows that brands with original research - proprietary surveys, unique frameworks, first-person benchmarks - got a huge boost in the latest update. In organic results and in AI Mode citations.

Content that synthesised existing thinking without adding anything new lost ground.

This makes sense if you think about what both AI and humans need from your content in 2026.

An LLM can synthesise and summarise any topic on demand, tailored to any user, in seconds. That's literally what it does. So content that does the same thing - repackages what's already known - is, well, pointless.

What the model can't do is originate. It can't run a survey. It can't publish proprietary benchmarks. It can't develop an entirely new framework or concept.

In short, it can't produce Information Gain. Net new knowledge.

This update data suggests Google is now actively weighting that distinction.

We have a simple framework against which we measure every piece of content we produce.

Level 0: No information gain.
Level 1: Interpretive Gain - a new slant on existing knowledge.
Level 2: Empirical Gain - new data or original research.
Level 3: Conceptual Gain - a genuinely new framework or mental model.
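The ladder above can be expressed as an ordered scale, which is how we use it in practice: score each planned piece before producing it. A minimal sketch in Python - the example titles and the `worth_publishing` threshold are illustrative, not a published spec:

```python
from enum import IntEnum

class InfoGain(IntEnum):
    """The four levels from the framework above, in ascending order."""
    NONE = 0          # repackages what's already known
    INTERPRETIVE = 1  # new slant on existing knowledge
    EMPIRICAL = 2     # new data or original research
    CONCEPTUAL = 3    # genuinely new framework or mental model

# Hypothetical content plan, scored against the ladder
pieces = {
    "ultimate guide rehash": InfoGain.NONE,
    "contrarian take on churn benchmarks": InfoGain.INTERPRETIVE,
    "survey of 500 buyers": InfoGain.EMPIRICAL,
    "new mental model for AEO": InfoGain.CONCEPTUAL,
}

# Example bar: only produce pieces at Empirical or above
worth_publishing = [title for title, gain in pieces.items()
                    if gain >= InfoGain.EMPIRICAL]
```

Using `IntEnum` means the levels compare and sort naturally, so a team can set whatever bar fits its resources.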

How are you looking at this?

u/Which_Work6245 — 1 day ago

How are you AI objection handling?

"Gemini said you're complex to implement - is that true?"

An AE told us this came up as an explicit objection on a sales call.

Think about that for a second.

AI is an active market participant that you need to influence.

Because buyers trust it, and they turn to it before they turn to you.

What are you doing to 'objection handle' AI?

u/Which_Work6245 — 1 day ago

Our findings on LLM Convergence

In our AEO research, we've found that LLMs very rarely “search” for answers. Just 16% of the time.

This has huge implications. LLMs don't decide which brand to recommend at the moment they're asked a BOFU question. They start narrowing far earlier, at TOFU/MOFU, by applying criteria to figure out which options make sense.

We call this convergence.

By the time the user asks for a recommendation and citations appear, the decision is already made. The LLM converged on an answer, rather than searching for it.

Think of it like choosing a restaurant. The decision happens at home, scrolling through reviews. By the time you're at the door seeing the pretty sign, the choice is made.

To discover this, we tracked canon concentration - how consistently the same brands surface across multiple runs of the same prompt, scored 0 to 1. Near 0 means high variability. Near 1 means the model has locked in its shortlist.

Our primary signal was how consistently the same three brands appear together across runs - what we call K3.

1/ Awareness: K3 = 0.32
↳ Different brands surface each time. No pattern yet.

2/ Consideration: K3 = 0.38
↳ The same names start appearing more often, but it's still shifting.

3/ Conversion: K3 = 0.79
↳ The same three brands, every single time.

The same pattern holds for the top brand alone (K1) and the top five (K5).

Which left us with a fairly inconvenient finding for AEO measurement.

The citations, the "best for" listicles, the directive framing - the exact signals AEO tools are built to celebrate - all appear after convergence has already happened. So when your dashboard tells you you're doing brilliantly, it's probably right.

It's just not telling you why, or whether you'll still be there next quarter.

The real challenge (and opportunity) lies in influencing the direction of convergence - whether the LLM steers more buyers' requirements toward you - not in optimising visibility once the decision has largely been made.

To return to the restaurant analogy: imagine your favourite restaurant asked you which would make the bigger difference - improving its online visibility and trust, or prettying up the sign out front.

What would you tell them?

Source: Demand-Genius Dark AI Report

u/Which_Work6245 — 1 day ago