u/DazzlingCaramel5661


Display advertising has a memory problem

The campaign ran. The results came back. Impressions, clicks, maybe some conversion data.

And then someone in the room asks: so what should we do differently next time?

For most display advertising teams, that question is surprisingly hard to answer well. Not because the data doesn't exist; it usually does, somewhere. It's because the way most teams produce and track display creative isn't set up to generate useful answers.

The question that actually matters

Campaign analytics tells you what happened at the campaign level. But display advertising involves creative decisions: headlines, visuals, formats, calls to action, localisation choices. Campaign-level data doesn't tell you which of those decisions drove the result.

Did the animated banner outperform the static one? 
Did the short copy variant beat the long one in this market? 
Was the product shot more effective than the lifestyle image for this audience?

These are creative-level questions. Most reporting tools are built to answer audience and placement questions. The creative layer gets skipped.

Why this happens structurally

It's not a data problem; it's a setup problem.

A few common reasons:

Creative variants aren't named or tagged consistently. When 40 banner sizes get exported from a design tool with generic filenames, there's no reliable way to compare performance across variants later. You can't analyse what you can't identify.
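One way to make variants identifiable is a structured filename convention agreed before export. As a minimal sketch (the scheme, field names, and example filenames here are hypothetical, not a standard from any particular tool):

```python
# Hypothetical naming scheme: campaign_market_format_copyvariant_size
# e.g. "spring24_uk_animated_urgency_300x250.html"
def parse_variant(filename: str) -> dict:
    """Split a structured creative filename into comparable fields."""
    stem = filename.rsplit(".", 1)[0]  # drop the file extension
    campaign, market, fmt, copy_variant, size = stem.split("_")
    return {
        "campaign": campaign,
        "market": market,
        "format": fmt,
        "copy_variant": copy_variant,
        "size": size,
    }

fields = parse_variant("spring24_uk_animated_urgency_300x250.html")
print(fields["copy_variant"])  # -> urgency
```

Once every exported file follows the scheme, joining performance rows from the media platform back to creative attributes becomes a string split rather than a forensic exercise.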

Performance data and creative files live in different places. The media platform has the numbers. The creative files are in a shared drive or a production tool. Nobody has connected them into a single view, so analysis requires manual work that rarely happens under deadline pressure.

Optimisation happens at the wrong level. Teams optimise audiences and placements because that's what the platform makes easy. Creative gets swapped out based on gut feel or stakeholder preference, not evidence.

Learnings don't make it back into the brief. Even when someone does pull together post-campaign insights, they're usually presented in a report that gets filed away. The next brief gets written from scratch, carrying the same assumptions as the last one.

What it looks like when it works

Teams that do this well have a few things in common.

They go into a campaign with a documented hypothesis about the creative: 

“We expect the version with urgency copy to outperform the generic version because our last two campaigns showed that pattern.”

That hypothesis creates a structure for what to look at after launch.

They keep creative naming and tagging consistent across every campaign, so that when they want to look back across six months of activity, the data is actually comparable.

They have someone whose job it is to translate post-campaign data into brief input. Not just a slide deck: specific, documented decisions that change what gets made next time.

Over time, the team builds a real picture of what works for their brand and their audiences. Each campaign is a little smarter than the last one. That's a genuine competitive advantage in a channel where most teams are still resetting from scratch every quarter.

The production connection

One underappreciated benefit of running display creative production through a dedicated platform rather than scattered tools is that consistency is built in. When all your variants are created, named, and managed in one place, the data you need for this kind of analysis is already structured correctly. You're not trying to reverse-engineer comparability from a mess of files; it's there from the start.

This is part of why creative production and performance analysis are more connected than most teams treat them. How you build the creative affects what you can learn from it.

The practical starting point

If your team wants to get better at this without overhauling everything at once, one simple habit makes a big difference: before every campaign, write down two or three specific creative hypotheses. Don't overcomplicate it. After the campaign, come back to those hypotheses and see what the data says.

That's it. No new tools required at first. Just the discipline of treating each campaign as a test with a question attached to it, rather than a production task with a deadline.
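The habit doesn't need tooling, but if you want it structured from day one, a hypothesis log can be as small as this sketch (all names, metrics, and outcomes below are illustrative, not real campaign data):

```python
from dataclasses import dataclass

@dataclass
class CreativeHypothesis:
    campaign: str
    hypothesis: str   # what we expect, and why
    metric: str       # what to check after launch
    result: str = ""  # filled in post-campaign

log = [
    CreativeHypothesis(
        campaign="spring24_uk",
        hypothesis="Urgency copy beats generic; seen in our last two campaigns",
        metric="CTR of urgency vs generic copy variants",
    ),
]

# After the campaign, close the loop by recording what the data said:
log[0].result = "Confirmed: urgency variant outperformed generic on CTR"
```

The point is less the data structure than the discipline: every entry either feeds the next brief or kills an assumption.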

How does your team currently handle the feedback loop between campaign performance and the next creative brief? Have you figured out a scalable way to do it, or is it still mostly manual?

u/DazzlingCaramel5661 — 7 days ago

We’re working on something to remove one of the biggest campaign bottlenecks 👀

If you’ve ever had to wait on campaign setup before you can even start building creatives… you know how frustrating that is.

We see this a lot: teams ready to go, ideas lined up, but stuck behind platform configs, permissions, or setup steps that slow everything down.

So we’ve been working on a new Social Campaign Manager workflow to change that.

The idea is simple: start building ads before the full campaign setup is done.

That means:

  • you can launch campaigns faster (no more waiting around to get started)
  • teams can actually work in parallel instead of blocking each other
  • less manual work when managing multiple campaigns across platforms

Basically, removing that “setup bottleneck” so creative and execution can move at the speed they should.

This is our first iteration, planned for release in Q2.

https://studio.saltfish.ai/demo-share/8337b51d-4ab7-4367-bf9a-50d11bc9fcc7
