r/AIVOEdge

▲ 11 · r/AIVOEdge · +5 crossposts

We ran Expedia through Meridian today. Here's what the model actually said when it eliminated them.

$15 billion in annual revenue. One of the most recognised travel brands on the planet. Near-universal brand awareness.

Eliminated at Turn 1 on Perplexity and Gemini in an undirected buying journey.

This is the verbatim language the model used at T1 when deciding which travel platform to recommend:

"Which travel booking platforms have established market presence with documented track records of proven effectiveness, overall value, reputation, and ease of access?"

Expedia failed that criterion. The finding: "No established evidence of proven effectiveness in travel booking domain."

At T3 on Perplexity, Booking(dot)com displaced Expedia on this criterion:

"Which platform has independently verified evidence of superior customer satisfaction and proven reliability metrics?"

There is also a T0 Decision-Stage gap on ChatGPT and Gemini: brand entity recognition fails to persist across conversational turns when user responses are minimal or ambiguous, meaning the model loses track of Expedia mid-conversation and routes elsewhere.

The only probe type where Expedia holds throughout is ChatGPT Directed — when the user names Expedia explicitly. On every undirected and agentic journey type, on every platform, there is displacement.

RCS 77. Revenue at risk at current LLM share: $82.8M. At 2027 LLM share: $165.6M.
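The two revenue-at-risk figures are consistent with a simple exposure model. A minimal sketch, with the caveat that the LLM-share and displacement-exposure rates below are illustrative assumptions I've back-solved to match the post's numbers, not AIVO's published inputs:

```python
# Hypothetical revenue-at-risk model. Only annual_revenue comes from the
# post; llm_share_* and displacement_exposure are illustrative assumptions.
annual_revenue = 15e9          # Expedia annual revenue (from the post)
llm_share_now = 0.023          # assumed share of bookings influenced by LLMs today
llm_share_2027 = 0.046         # assumed 2027 share (a doubling)
displacement_exposure = 0.24   # assumed fraction of journeys where the brand is displaced

at_risk_now = annual_revenue * llm_share_now * displacement_exposure
at_risk_2027 = annual_revenue * llm_share_2027 * displacement_exposure

print(f"${at_risk_now / 1e6:.1f}M")   # $82.8M
print(f"${at_risk_2027 / 1e6:.1f}M")  # $165.6M
```

The useful property of the model: if displacement exposure is held fixed, revenue at risk scales linearly with the LLM channel's share, which is why the 2027 figure is exactly double the current one.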

Five filter gaps identified. All addressable. None of them visible to any citation or visibility tool.

This is what decision-stage AI measurement looks like versus first-prompt visibility scoring. The model knew Expedia. It could not find the structured evidence it needed to pass the criteria filter at the decision stage. Those are different problems with different remediation paths.

Full audit methodology: aivomeridian (dot) com

reddit.com
u/Working_Advertising5 — 14 hours ago
▲ 9 · r/AIVOEdge · +3 crossposts

ChatGPT just introduced CPC bidding at $3–$5 per click.

Performance marketers are entering the channel, but there's a measurement problem the pricing doesn't capture.

CPC measures what happens after the click. The pixel measures post-click behavior. Neither measures what the model recommended organically before the ad fired - whether the brand was selected, weakened, or displaced in the reasoning chain before paid placement entered the conversation.

For search and social, this doesn't matter in the same way. The click expresses intent at the point it happens. For ChatGPT, the model has often already reasoned through the category, applied decision filters, and formed a recommendation before the ad appears.

Nearly a third of ChatGPT ads fire after the tenth turn. By turn ten, the purchase recommendation has typically already been made - and possibly acted on.

A $3 CPC tells you a user clicked. It doesn't tell you whether the model had already recommended your competitor three turns earlier.
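The timing question is mechanical once you have turn-annotated events. A minimal sketch of the check, assuming a hypothetical transcript structure (the event fields here are my invention, not an actual ChatGPT ads or pixel API):

```python
# Illustrative check: did the model recommend a competitor organically
# before the ad impression fired? All field names are hypothetical.
from dataclasses import dataclass

@dataclass
class TurnEvent:
    turn: int    # conversational turn number
    kind: str    # "recommendation" or "ad_impression"
    brand: str

def recommendation_preceded_ad(events, ad_brand):
    """Return the competitor recommended before ad_brand's first ad, if any."""
    ad_turns = [e.turn for e in events
                if e.kind == "ad_impression" and e.brand == ad_brand]
    if not ad_turns:
        return None
    first_ad = min(ad_turns)
    prior = [e for e in events
             if e.kind == "recommendation"
             and e.turn < first_ad
             and e.brand != ad_brand]
    return prior[0].brand if prior else None

events = [
    TurnEvent(3, "recommendation", "CompetitorCo"),  # organic pick at T3
    TurnEvent(11, "ad_impression", "BrandX"),        # ad fires after turn ten
]
print(recommendation_preceded_ad(events, "BrandX"))  # CompetitorCo
```

A post-click pixel only ever sees the second event; the displacement signal lives entirely in the first.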

Before any CPC budget is committed, there's a prior question: what is the brand's organic inference position, and does it support paid amplification?

Across 7,000+ structured buying sequences covering 160+ brands, 19 of 20 brands are in a state where their organic inference position does not cleanly support paid amplification without remediation first.

The full argument - including the three-state classification framework and the measurement sequence for performance marketers - is on AIVO Journal: link in comments.

How do you see this measurement problem getting resolved?

▲ 6 · r/AIVOEdge · +2 crossposts

The AI remediation question is getting sharper.

The question most teams start with is: what content do we need to create to improve our AI performance?

The question that produces results is more specific: which filter fired, on which platform, in which journey type, and what does the model actually require to correct it?

A brand that loses the T4 purchase recommendation on ChatGPT because of a Clinical Evidence Binary filter needs a different intervention from the same brand losing on Perplexity because of a Technology Generation Tiebreaker.

The same content brief deployed against both treats them as the same problem. The model does not.

We ran the same brand through structured buying sequences on ChatGPT and Perplexity on the same day. ChatGPT recommended the brand. Perplexity eliminated it at T3 and recommended a competitor. Same brand. Same category. Same query. Different model, different filter, different outcome.

The taxonomy of filter types is not a content brief template. It is a diagnostic read of what the model is actually doing to your brand - and it is different by platform, by journey type, and by turn.

AIVO Meridian reads the verbatim inference chain, identifies the specific filter that fired, names the displacing competitor and the reason, and generates a platform-specific remediation output.

The intervention is matched to the failure. That is what makes remediation compound over time rather than run in place.

There is no one-size-fits-all fix. The brands building platform-specific remediation programs now will be the ones with a structural advantage as AI becomes the primary purchase recommendation channel.

What are folks doing now in terms of remediation?

u/Working_Advertising5 — 2 days ago
▲ 5 · r/AIVOEdge · +3 crossposts

ChatGPT is now selling advertising. Almost nothing about how brands are measuring it is ready.

OpenAI's ad pixel measures post-click. It tells you what happened after someone clicked your ad. It cannot tell you what the model recommended organically before the ad appeared.

Whether your brand was selected, weakened, or displaced by a competitor inside the reasoning chain before paid placement entered the conversation.

That upstream layer is invisible to every standard measurement tool currently being offered to brands and agencies.

This is not a minor gap. It is the gap.

CPMs have already dropped from $60 to $25 in nine weeks. The market is asking what an AI impression is actually worth. The answer depends entirely on what the model was recommending before the ad fired - and that is the number nobody can currently see.

The organic inference position is where the purchase decision forms. It is where the model decides which brand gets recommended, which gets mentioned as an alternative, and which gets eliminated entirely. A brand with a strong paid presence and a weak organic inference position is running paid media against its own AI footprint.

AIVO was built to measure this layer. Structured buying sequences across ChatGPT, Perplexity, Gemini, and Grok, classifying organic inference position, elimination point, and competitor displacement at the decision stage, at scale.

For 19 of 20 brands tested, the purchase recommendation win rate at T4 is zero despite strong AI visibility.

High visibility. Zero conversion. The CPM story is just the market starting to notice.

As ChatGPT advertising scales, the brands and agencies that understand what is happening upstream of the ad will have a structural advantage over those optimising against post-click signals alone.

Most of the industry does not yet know to ask for it.

The infrastructure exists. The question is who uses it first.

Link to working paper in comments: 'The Upstream Measurement Gap in ChatGPT Advertising: Organic Inference Position and the Limits of Post-Click Attribution'.

u/Working_Advertising5 — 4 days ago

aivomeridian.com is live. Here is what it does and why it exists.

We have been running structured multi-turn buying sequences across ChatGPT, Perplexity, Gemini, and Grok for twelve months. The finding that drove the build of Meridian is this: 19 of 20 brands we tested have a 0% purchase recommendation win rate at T4 despite strong AI visibility.

That is not a visibility problem. It is a decision-stage measurement problem.

Meridian is the platform we built to solve it. It maps the full buying sequence, identifies which of 14 decision-stage filter types fired at T3, names the competitor that displaced the brand and the verbatim reasoning the model used to justify it, and generates platform-specific remediation through brand.context.
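The output described above implies a per-event audit record. A sketch of what one might look like, with the caveat that the field names and schema are my assumption, not Meridian's actual data model (the example values are drawn from filters and displacements mentioned elsewhere in this thread):

```python
# Hypothetical shape of a decision-stage audit record; the schema is
# illustrative, not AIVO Meridian's actual output format.
from dataclasses import dataclass

@dataclass
class FilterEvent:
    platform: str            # "ChatGPT", "Perplexity", "Gemini", or "Grok"
    journey_type: str        # "undirected", "directed", "agentic"
    turn: str                # e.g. "T3", the elimination point
    filter_type: str         # one of the 14 decision-stage filter types
    displacing_brand: str    # competitor that took the recommendation
    verbatim_reasoning: str  # the model's own criteria language

event = FilterEvent(
    platform="Perplexity",
    journey_type="undirected",
    turn="T3",
    filter_type="Technology Generation Tiebreaker",  # filter name from the thread
    displacing_brand="Booking.com",
    verbatim_reasoning="Which platform has independently verified evidence "
                       "of superior customer satisfaction?",
)
print(event.platform, event.turn, event.displacing_brand)
```

The point of keying remediation off a record like this, rather than off a generic content brief, is that every field can differ for the same brand across platforms, so the intervention is scoped to the specific failure.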

The distinction that matters: Profound, Peec, and Scrunch measure whether you appeared. Meridian measures whether you were recommended - and if not, exactly why and what to do about it on each platform.

ChatGPT advertising is now live. OpenAI's pixel measures post-click. It cannot see the organic inference position that existed before the ad fired. Meridian closes that gap.

If you are an agency or brand trying to figure out whether your clients should be spending on ChatGPT inventory right now - that is the question Meridian answers.

aivomeridian.com. Demo available.

u/Working_Advertising5 — 2 days ago