u/ujet-cx

CX leader with 20 years' experience says CSAT "looks great on a dashboard but means nothing" -- agree or disagree?

TL;DR: A CX leader with 20+ years' experience says CSAT captures a moment, not a journey, and argues for analyzing full conversational data instead. Does this match what you're seeing? How are you doing it?

Sharing a clip from a video series we produce called Heard in CX. (Full transparency: I'm on the r/UJET team, but I'm here with a conversation I think this community will have opinions on.)

Our guest Michael Windler, Director of Customer Operations at NerdWallet, was asked to finish the sentence: "the metric that looks great on a dashboard but means nothing is..."

He said CSAT.

He explained that it's one question at one moment in time, answered only by the customers who actually completed the survey. You're missing the full journey, and of course you're missing the customers who didn't bother responding at all.

His solution was to instead analyze all conversational data, not just survey responses. Build metrics from what customers are actually saying, across every interaction.

So here are the questions, and I'm curious what this community thinks: is CSAT still doing useful work in your org, or has it become a vanity metric? If you've moved away from it, what replaced it? And how are you vetting the multitude of tools out there?

Episode is here if you want the full context:

https://reddit.com/link/1sr5pab/video/40wow2wpbfwg1/player

If you're curious, here's how we're approaching the issue over at ujet.cx:

u/ujet-cx — 17 hours ago

The Layered Intelligence Model: A Framework for Getting Human + AI Teaming Actually Right

https://preview.redd.it/rj2gm24lo7vg1.png?width=2240&format=png&auto=webp&s=3719b13a4afd4ce43d8817f35356c586476bb1d1

We put out a blog this week on structuring Human + AI agent teaming in the contact center. Sharing it here with a bit more context because the framing that didn't make it into the final post is the one I actually want to talk about.

The headline finding: 70% of contact centers use AI. Most of them are still failing on resolution. And the gap is almost never about the AI itself.

Here's what we mean by that.

Organizations have deployed AI the same way they deployed every tool before it: on top of the existing model. Chatbot in front of the queue. Sentiment tool monitoring calls. Knowledge base surfacing on the agent desktop. Each thing made individual tasks faster. None of it changed the fundamental architecture of how interactions actually get handled.

So you end up with a contact center that's simultaneously over-automated and under-resolved.

AI captures the easy stuff. Humans handle everything else. The handoff between them is manual, context-destroying, and frustrating for the customer who has to explain their issue twice.

The organizations we see pulling ahead have stopped thinking about AI and humans as separate channels competing for the same interactions. They're thinking about them as a single, layered system where each does what it actually does best.

We call this the Layered Intelligence Model. Three layers:

Layer 1 - AI resolves it alone.

High-volume, low-complexity, no emotional dimension. Password resets. Order tracking. Basic billing. Well-scoped AI should be hitting 60-90% containment here. This is the grind. Let AI own it.

Layer 2 - Human leads, AI augments.

This is where most contact centers are leaving the most value on the table. The human owns the conversation. AI is running in the background: pulling context, surfacing next-best actions, executing backend workflows so the agent isn't navigating five systems while trying to de-escalate a frustrated customer. The agent focuses on the relationship. The AI handles the systems.

Layer 3 - Human leads, full stop.

Fraud. Medical concerns. High-value retention conversations. Executive escalations. Things that are genuinely novel or legally sensitive. AI does prep work and post-call documentation. Human judgment drives the outcome.
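For illustration only (this sketch is mine, not from the blog post), the three-layer triage above could be expressed as a simple routing function. The `Interaction` attributes (`complexity`, `emotional`, `sensitive`) are hypothetical stand-ins for signals a real system would derive from intent classification, customer data, and sentiment analysis:

```python
from dataclasses import dataclass

@dataclass
class Interaction:
    # Hypothetical attributes for illustration; a production system
    # would derive these from intent, account, and sentiment signals.
    complexity: str   # "low" | "medium" | "high"
    emotional: bool   # frustration, distress, churn/retention risk
    sensitive: bool   # fraud, medical, legal, executive escalation

def route(interaction: Interaction) -> int:
    """Return the layer (1-3) an interaction belongs to."""
    # Layer 3: human leads, full stop.
    if interaction.sensitive:
        return 3
    # Layer 2: human leads, AI augments.
    if interaction.emotional or interaction.complexity != "low":
        return 2
    # Layer 1: high-volume, low-complexity, no emotional dimension.
    return 1

# Password reset: easy, unemotional -> AI owns it.
print(route(Interaction(complexity="low", emotional=False, sensitive=False)))  # 1
# Frustrated customer with a simple issue -> human leads, AI augments.
print(route(Interaction(complexity="low", emotional=True, sensitive=False)))   # 2
# Fraud report -> human judgment drives the outcome.
print(route(Interaction(complexity="high", emotional=False, sensitive=True)))  # 3
```

The point of writing it down this flatly is that the sensitivity and emotion checks come *before* the complexity check: an interaction that carries emotional weight or business risk never falls through to full automation, no matter how simple it looks.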

The model sounds simple. And it is, conceptually. The hard part is the honest audit: mapping your actual interaction types against this framework and asking which layer you've been routing them to.

The most common mistake we see: Layer 2 interactions routed to Layer 1. Attempting full automation on interactions that carry emotional weight or business risk. This is what produces the chatbot frustration that makes customers demand a human immediately. High containment. Low resolution. Customers who don't come back.

The Klarna story is the most public version of this. They replaced 700 agents and looked great on efficiency metrics, but by 2025 the CEO had publicly walked it back and started rehiring. His quote: "From a brand perspective, a company perspective, I just think it's so critical that you are clear to your customer that there will always be a human if you want."

Klarna realized they failed because they removed the human layer instead of restructuring it.

One more thing the blog gets into that I think is genuinely underappreciated: as AI handles more routine work, the interactions left for human agents don't just become fewer. They become harder. More emotionally demanding. More consequential. The agents handling the remaining 20-40% of interactions are dealing with the most complex, highest-stakes conversations back to back, all day.

Which means the investment in human agents has to go up, not down. Better real-time support. Better coaching infrastructure. Better tools that actually reduce cognitive load rather than adding to it.

AI for the grind. Humans for the gold. And a system that knows the difference in real time.

Full post + interaction decision table: ujet.cx/blog/how-to-structure-human-ai-agent-teaming-in-your-contact-center

Happy to dig into any part of this. The Layer 2 underinvestment problem is where we see the most variance across orgs, so I'm curious what this community's experience has been.

u/ujet-cx — 7 days ago