u/Justgettingsmart

I’m a Head of Product, and I’m trying to understand how other PMs think about this.

With a normal SaaS product, I can usually look at clicks, funnels, activation events, drop-off, feature usage, and retention to understand what’s working.

With AI agents, that feels much harder. A run can complete successfully, but that alone doesn't tell me whether the user got what they needed, trusted the output, or would come back.

The signals I’ve seen teams use are things like thumbs up/down, support tickets, feedback forms, prompt rewrites, copy/export actions, tool calls, usage frequency, and user interviews. But those can be noisy. A rewritten prompt might mean the agent failed, or it might just mean the user was exploring. Low usage might mean the product is weak, or it might just mean the job does not happen often.
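To make this concrete, here's a rough sketch of the kind of event stream I mean, with the signals above logged per run (all the names are made up, and the print is a stand-in for whatever analytics sink you actually use):

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class AgentRunEvent:
    run_id: str
    user_id: str
    event: str   # e.g. "run_completed", "thumbs_down", "prompt_rewritten", "output_copied"
    ts: float

def log_event(run_id: str, user_id: str, event: str) -> None:
    # Stand-in for a real analytics pipeline (Segment, a warehouse table, etc.)
    print(json.dumps(asdict(AgentRunEvent(run_id, user_id, event, time.time()))))

# The ambiguous case from above: a "successful" run followed by a rewrite.
# Did the agent fail, or was the user just exploring?
log_event("run-123", "user-9", "run_completed")
log_event("run-123", "user-9", "prompt_rewritten")
```

Even with events like these in hand, I can't tell the failure case apart from the exploration case without more context.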

For PMs working on agentic products:

How do you tell when an agent actually created user value?

And how do you separate real product issues from normal user behavior/noise?
