r/GenEngineOptimization

▲ 11 r/GenEngineOptimization +5 crossposts

We ran Expedia through Meridian today. Here's what the model actually said when it eliminated them.

$15 billion in annual revenue. One of the most recognised travel brands on the planet. Near-universal brand awareness.

Eliminated at Turn 1 on Perplexity and Gemini in an undirected buying journey.

This is the verbatim language the model used at T1 when deciding which travel platform to recommend:

"Which travel booking platforms have established market presence with documented track records of proven effectiveness, overall value, reputation, and ease of access?"

Expedia failed that criterion. The finding: "No established evidence of proven effectiveness in travel booking domain."

At T3 on Perplexity, Booking(dot)com displaced Expedia on this criterion:

"Which platform has independently verified evidence of superior customer satisfaction and proven reliability metrics?"

There is also a T0 Decision-Stage gap on ChatGPT and Gemini where brand entity recognition fails to persist across conversational turns when user responses are minimal or ambiguous — meaning the model loses track of Expedia mid-conversation and routes elsewhere.

The only probe type where Expedia holds throughout is ChatGPT Directed — when the user names Expedia explicitly. On every undirected and agentic journey type, on every platform, there is displacement.

RCS 77. Revenue at risk at current LLM share: $82.8M. At 2027 LLM share: $165.6M.
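One way to decompose a figure like this (the exact Meridian inputs aren't shown here; the share and displacement values below are illustrative assumptions that happen to reproduce the quoted numbers) is revenue × LLM-influenced share × displacement rate:

```python
# Hypothetical revenue-at-risk decomposition. The post does not publish
# the formula; llm_share and displacement_rate are illustrative
# assumptions chosen to reproduce the quoted figures.
annual_revenue = 15e9        # Expedia annual revenue (from the post)
llm_share_now = 0.012        # assumed share of revenue influenced by LLM journeys today
llm_share_2027 = 0.024       # assumed 2027 share (2x current)
displacement_rate = 0.46     # assumed fraction of LLM journeys where the brand is displaced

risk_now = annual_revenue * llm_share_now * displacement_rate
risk_2027 = annual_revenue * llm_share_2027 * displacement_rate
print(f"${risk_now / 1e6:.1f}M now, ${risk_2027 / 1e6:.1f}M at 2027 share")
# -> $82.8M now, $165.6M at 2027 share
```

Whatever the real inputs, the structure makes the point: the at-risk number scales linearly with LLM share, which is why it doubles by 2027.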

Five filter gaps identified. All addressable. None of them visible to any citation or visibility tool.

This is what decision-stage AI measurement looks like versus first-prompt visibility scoring. The model knew Expedia. It could not find the structured evidence it needed to pass the criteria filter at the decision stage. Those are different problems with different remediation paths.

Full audit methodology: aivomeridian (dot) com

u/Working_Advertising5 — 14 hours ago
▲ 9 r/GenEngineOptimization +3 crossposts

ChatGPT just introduced CPC bidding at $3–$5 per click.

Performance marketers are entering the channel, but there's a measurement problem that the pricing doesn't capture.

CPC measures what happens after the click. The pixel measures post-click behavior. Neither measures what the model recommended organically before the ad fired - whether the brand was selected, weakened, or displaced in the reasoning chain before paid placement entered the conversation.

For search and social, this doesn't matter in the same way. The click expresses intent at the point it happens. For ChatGPT, the model has often already reasoned through the category, applied decision filters, and formed a recommendation before the ad appears.

Nearly a third of ChatGPT ads fire after the tenth turn. By turn ten, the purchase recommendation has typically already been made - and possibly acted on.

A $3 CPC tells you a user clicked. It doesn't tell you whether the model had already recommended your competitor three turns earlier.

Before any CPC budget is committed, there's a prior question: what is the brand's organic inference position, and does it support paid amplification?

Across 7,000+ structured buying sequences covering 160+ brands, 19 of 20 brands are in a state where their organic inference position does not cleanly support paid amplification without remediation first.

The full argument - including the three-state classification framework and the measurement sequence for performance marketers - is on AIVO Journal: link in comments.

How do you see this measurement problem getting resolved?


46% Perplexity vs 21% ChatGPT: Why AI Engines Prefer Different Content

TBH, I assumed all AI engines wanted basically the same content. After analyzing 5,000+ citations across major platforms, I was dead wrong. Not only do they prefer different content—they're almost opposites.

Here's what we discovered:

**The Perplexity Preference: Source-Heavy Content**

  • 46% of Perplexity citations go to sources vs only 21% for ChatGPT
  • Reddit dominates Perplexity with 34% of total citations (Wild, right?)
  • Direct source links and first-party content outperform everything here

**The ChatGPT Pattern: Synthesized Answers**

  • ChatGPT prefers well-structured lists and bullet points
  • 79% of ChatGPT citations come from synthesized content, not sources
  • Single authoritative articles beat source aggregation every time

**Why This Changes Everything**

Single-platform optimization is now a losing strategy. Content must serve multiple AI purposes simultaneously, and the "one-size-fits-all" approach flat-out fails.

**What Actually Works**

  • Tech sites: Reddit discussions + structured FAQ pages
  • News sites: Direct source links + AI-optimized summaries
  • E-commerce: Product detail pages + comparison tables

**The Multi-Engine Framework**

  • Layer 1: Core content for primary target AI (70% effort)
  • Layer 2: Secondary format for secondary AIs (20% effort)
  • Layer 3: Platform-specific tweaks (10% effort)
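A quick sketch of what that split can look like as a content-planning config (the engine assignments and task labels here are illustrative examples, not a prescription):

```python
# Illustrative encoding of the three-layer effort split. Engine targets
# and task labels are examples, not fixed recommendations.
multi_engine_plan = {
    "layer_1_core":      {"target": "ChatGPT",      "effort": 0.70,
                          "tasks": ["single authoritative article", "structured lists"]},
    "layer_2_secondary": {"target": "Perplexity",   "effort": 0.20,
                          "tasks": ["direct source links", "Reddit-style discussion"]},
    "layer_3_tweaks":    {"target": "per-platform", "effort": 0.10,
                          "tasks": ["FAQ schema", "comparison tables"]},
}
# Sanity check: effort shares should sum to 1.
assert abs(sum(layer["effort"] for layer in multi_engine_plan.values()) - 1.0) < 1e-9
```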

**Real Results**

One B2B software company implemented this dual strategy: kept technical docs for ChatGPT while adding Reddit-style discussions for Perplexity. Citation rates increased 170% across both platforms in 90 days.

Curious what your content looks like to each AI engine? Have you noticed different citation patterns across platforms?

u/Brave_Acanthaceae863 — 2 days ago
▲ 11 r/GenEngineOptimization +1 crosspost

After 6 months of GEO work, the biggest shift in our thinking was realizing AI citations behave nothing like backlinks

We spent months chasing AI citations the same way we used to chase backlinks. Bad move. They're fundamentally different beasts, and once we stopped treating them the same, our results got way more consistent.

Here's what changed how we think about GEO:

  1. AI citations are temporary. Backlinks are permanent.

A link you earned in 2023 still counts today. An AI citation? Gone in weeks sometimes. We tracked our own and saw roughly 40% churn within 60 days. That completely changes how you allocate effort — it's not "build it once," it's "maintain it constantly." (There's a minimal sketch of how we measure churn below the list.)

  2. One strong page can outperform an entire domain.

Traditional SEO rewards domain-level authority. In GEO, a single well-structured page that directly answers a query can get cited over sites with 10x the backlinks. We've seen DA 15 pages consistently beat DA 80+ domains. The models care about the answer, not the site reputation.

  3. Formatting matters more than we expected.

This one surprised us. Pages that used clear structure — numbered steps, direct definitions, comparison tables — got picked up way more often than long-form essays covering the same topic. The content can be identical in substance, but how you package it makes a huge difference.

  4. Freshness is an underrated signal.

AI models clearly favor recently updated content. Not just "published recently" — pages that show signs of ongoing maintenance. Adding a "last updated" date and actually revisiting content monthly made a measurable difference.

  5. The competition window is getting shorter.

Early on, a well-optimized page could hold a citation spot for months. Now, as more people figure out GEO, that window keeps shrinking. The real play is building a system for regular content refreshes, not just one-time optimization.
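On the churn point in item 1, here's a minimal sketch of how a number like our ~40% can be measured, comparing two snapshots of which pages get cited. The snapshot format and URLs are made up for illustration:

```python
# Citation churn between two snapshots taken ~60 days apart: the share
# of originally cited pages that no longer appear. Toy data below.
def citation_churn(old_citations: set[str], new_citations: set[str]) -> float:
    """Fraction of pages cited in the old snapshot that dropped out of the new one."""
    if not old_citations:
        return 0.0
    dropped = old_citations - new_citations
    return len(dropped) / len(old_citations)

day_0  = {"/guide-a", "/guide-b", "/faq", "/pricing", "/comparison"}
day_60 = {"/guide-a", "/faq", "/pricing", "/new-post"}
print(f"{citation_churn(day_0, day_60):.0%} churn")  # -> 40% churn in this toy example
```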

Curious if others are seeing similar patterns. The "treat it like SEO" mindset held us back for a while — wondering if that's been the case for anyone else.

u/Brave_Acanthaceae863 — 3 days ago

Our findings on LLM Convergence

In our AEO research, we've found that LLMs very rarely “search” for answers. Just 16% of the time.

This has huge implications. LLMs don't decide which brand to recommend at the moment a BOFU (bottom-of-funnel) question is asked. They start narrowing far earlier, at TOFU/MOFU (top and middle of funnel), by applying criteria to figure out which options make sense.

We call this convergence.

By the time the user asks for a recommendation and citations appear, the decision is already made. The LLM converged on an answer, rather than searching for it.

Think of it like choosing a restaurant. The decision happens at home, scrolling through reviews. By the time you're at the door seeing the pretty sign, the choice is made.

To discover this, we tracked canon concentration - how consistently the same brands surface across multiple runs of the same prompt, scored 0 to 1. Near 0 means high variability. Near 1 means the model has locked in its shortlist.

Our primary signal was how consistently the same three brands appear together across runs - what we call K3.

1/ Awareness: K3 = 0.32
↳ Different brands surface each time. No pattern yet.

2/ Consideration: K3 = 0.38
↳ The same names start appearing more often, but it's still shifting.

3/ Conversion: K3 = 0.79
↳ The same three brands, every single time.

The same pattern holds for the top brand alone (K1) and the top five (K5).
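The report doesn't publish the exact K-score formula, so here's a minimal sketch of one plausible definition: the mean pairwise overlap (Jaccard) of the top-k brand sets across repeated runs of the same prompt. Brand names are placeholders:

```python
from itertools import combinations

# Illustrative K-style concentration score: 0 = a different shortlist
# every run, 1 = an identical top-k shortlist on every run. This is one
# plausible definition, not the report's exact formula.
def k_score(runs: list[list[str]], k: int = 3) -> float:
    tops = [frozenset(run[:k]) for run in runs]
    pairs = list(combinations(tops, 2))
    return sum(len(a & b) / len(a | b) for a, b in pairs) / len(pairs)

awareness  = [["A", "B", "C"], ["D", "E", "F"], ["B", "G", "H"]]  # shortlist unstable
conversion = [["A", "B", "C"], ["A", "B", "C"], ["A", "C", "B"]]  # same three brands, any order
print(f"K3 awareness  = {k_score(awareness):.2f}")   # -> 0.07
print(f"K3 conversion = {k_score(conversion):.2f}")  # -> 1.00
```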

Which left us with a fairly inconvenient finding for AEO measurement.

The citations, the "best for" listicles, the directive framing (the exact signals AEO tools are built to celebrate) all appear after convergence has already happened. So when your dashboard tells you you're doing brilliantly, it's probably right.

It's just not telling you why, or whether you'll still be there next quarter.

The real challenge (and opportunity) lies in influencing the direction of convergence: does the LLM steer more people's requirements in your direction? Not in optimizing visibility once the decision has largely been made.

To return to the restaurant analogy: imagine your favourite restaurant asked you what would make a bigger difference - improving online visibility & trust, or prettying up the sign out front.

What would you tell them?

Source: Demand-Genius Dark AI Report

u/Which_Work6245 — 1 day ago
▲ 6 r/GenEngineOptimization +2 crossposts

The AI remediation question is getting sharper.

The question most teams start with is: what content do we need to create to improve our AI performance?

The question that produces results is more specific: which filter fired, on which platform, in which journey type, and what does the model actually require to correct it?

A brand that loses the T4 purchase recommendation on ChatGPT because of a Clinical Evidence Binary filter needs a different intervention than the same brand losing on Perplexity because of a Technology Generation Tiebreaker.

The same content brief deployed against both treats them as the same problem. The model does not.

We ran the same brand through structured buying sequences on ChatGPT and Perplexity on the same day. ChatGPT recommended the brand. Perplexity eliminated it at T3 and recommended a competitor. Same brand. Same category. Same query. Different model, different filter, different outcome.

The taxonomy of filter types is not a content brief template. It is a diagnostic read of what the model is actually doing to your brand - and it is different by platform, by journey type, and by turn.
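To make that concrete, here is a rough sketch of the kind of per-event record such a diagnostic implies. The field names are my own illustration, not Meridian's schema; the two filter labels are the ones mentioned above:

```python
from dataclasses import dataclass

# Illustrative record of a single filter firing: which filter, where,
# and who displaced the brand. Field names are hypothetical.
@dataclass
class FilterEvent:
    platform: str             # "ChatGPT", "Perplexity", "Gemini", "Grok"
    journey_type: str         # "directed", "undirected", "agentic"
    turn: str                 # "T0" through "T4"
    filter_name: str          # e.g. "Clinical Evidence Binary"
    displaced_by: str | None  # competitor that took the slot, if any
    verbatim_criteria: str    # the model's own wording for the filter

events = [
    FilterEvent("ChatGPT", "undirected", "T4", "Clinical Evidence Binary",
                "CompetitorX", "documented clinical evidence of efficacy"),
    FilterEvent("Perplexity", "undirected", "T3", "Technology Generation Tiebreaker",
                "CompetitorY", "most recent platform generation available"),
]
# Remediation is then keyed on (platform, filter_name, turn), not on a
# generic content brief that treats every loss as the same problem.
```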

AIVO Meridian reads the verbatim inference chain, identifies the specific filter that fired, names the displacing competitor and the reason, and generates a platform-specific remediation output.

The intervention is matched to the failure. That is what makes it compound over time rather than running in place.

There is no one-size-fits-all fix. The brands building platform-specific remediation programs now will be the ones with a structural advantage as AI becomes the primary purchase recommendation channel.

What are folks doing now in terms of remediation?

u/Working_Advertising5 — 2 days ago

Is anyone feeling increasing skepticism around the need for GEO service providers?

I have been testing the same brand and prompts across different environments, and the results have been fairly consistent. My issue isn't with that but with the need for a dashboard -- I created a GEO skill in Perplexity Computer, and it generated a report totally in line with the data in the dashboards. I understand the allure of data visualizations, but the cost difference between a UX dashboard and a generated report is pretty striking. All that said, I may be missing something and am really interested in hearing what others think...

u/treuse85 — 4 days ago
▲ 5 r/GenEngineOptimization +3 crossposts

ChatGPT is now selling advertising. Almost nothing about how brands are measuring it is ready.

OpenAI's ad pixel measures post-click. It tells you what happened after someone clicked your ad. It cannot tell you what the model recommended organically before the ad appeared: whether your brand was selected, weakened, or displaced by a competitor inside the reasoning chain before paid placement entered the conversation.

That upstream layer is invisible to every standard measurement tool currently being offered to brands and agencies.

This is not a minor gap. It is the gap.

CPMs have already dropped from $60 to $25 in nine weeks. The market is asking what an AI impression is actually worth. The answer depends entirely on what the model was recommending before the ad fired - and that is the number nobody can currently see.

The organic inference position is where the purchase decision forms. It is where the model decides which brand gets recommended, which gets mentioned as an alternative, and which gets eliminated entirely. A brand with a strong paid presence and a weak organic inference position is running paid media against its own AI footprint.

AIVO was built to measure this layer. It runs structured buying sequences across ChatGPT, Perplexity, Gemini, and Grok, classifying organic inference position, elimination point, and competitor displacement at the decision stage, at scale.

For 19 of 20 brands tested, the purchase recommendation win rate at T4 is zero despite strong AI visibility.

High visibility. Zero conversion. The CPM story is just the market starting to notice.
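To pin down what the T4 number means operationally: a brand only "wins" a sequence if it holds the model's purchase recommendation at the final turn. The data shape below is illustrative, not AIVO's actual format:

```python
# Illustrative T4 win-rate aggregation over structured buying sequences.
def t4_win_rate(sequences: list[dict], brand: str) -> float:
    """Fraction of sequences where `brand` is the model's T4 recommendation."""
    wins = sum(1 for seq in sequences if seq["t4_recommendation"] == brand)
    return wins / len(sequences)

runs = [
    {"platform": "ChatGPT",    "t4_recommendation": "CompetitorX"},
    {"platform": "Perplexity", "t4_recommendation": "CompetitorY"},
    {"platform": "Gemini",     "t4_recommendation": "CompetitorX"},
]
print(f"T4 win rate: {t4_win_rate(runs, 'YourBrand'):.0%}")  # -> 0%
```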

As ChatGPT advertising scales, the brands and agencies that understand what is happening upstream of the ad will have a structural advantage over those optimising against post-click signals alone.

Most of the industry does not yet know to ask for it.

The infrastructure exists. The question is who uses it first.

Link to working paper in comments: 'The Upstream Measurement Gap in ChatGPT Advertising: Organic Inference Position and the Limits of Post-Click Attribution'.

u/Working_Advertising5 — 4 days ago