
I read every customer chat on my AI chatbot for 30 days. 73% were the same 12 questions.

i run an ai chatbot for business websites. last month i did something i'd been putting off for almost a year. i exported every conversation from the past 30 days across {N} tenants and read all of them. by hand.

the goal was to answer one specific question: when people talk to a customer-facing chatbot, what do they actually ask?

i'd been telling tenants the standard pitch: ai handles the long tail of customer questions, your support team handles the rare edge cases. it sounded right. i wasn't sure it actually was.

here's what i did:

step 1: pulled every user message from the last 30 days. {N} conversations, around {N} user messages.

step 2: stripped out the throwaway stuff. greetings, "thanks", "ok bye", angry venting, accidental sends. left with the actual questions: around {N}.

step 3: categorized by intent, not by wording. "what's your refund policy" and "can i get my money back" go in the same bucket. "what time do you open" and "are you open today" same bucket.

step 4: counted.
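
(if you'd rather script the counting than do it in a spreadsheet, the shape is roughly this. minimal sketch only: the keyword map is made up for illustration, and i actually did the bucketing by hand because real wording varies too much for naive matching.)

```python
from collections import Counter

# hypothetical keyword -> intent map, purely for illustration.
# the counting logic is the same idea either way.
INTENT_KEYWORDS = {
    "refund": "refund / return policy",
    "money back": "refund / return policy",
    "are you open": "hours / availability / location",
    "what time do you open": "hours / availability / location",
    "how much": "pricing / cost / quotes",
    "price": "pricing / cost / quotes",
}

THROWAWAY = {"hi", "hello", "thanks", "ok", "ok bye"}  # step 2: drop these

def bucket(message: str) -> str | None:
    msg = message.lower().strip()
    if msg in THROWAWAY:
        return None
    for phrase, intent in INTENT_KEYWORDS.items():
        if phrase in msg:
            return intent  # step 3: same intent, different wording, same bucket
    return "uncategorized"

def print_top_intents(messages: list[str]) -> None:
    buckets = [b for m in messages if (b := bucket(m)) is not None]
    if not buckets:
        return
    total = len(buckets)
    # step 4: count, and show each bucket's share of the real questions
    for intent, count in Counter(buckets).most_common():
        print(f"{intent}: {count} ({count / total:.0%})")
```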

what i found surprised me even though it shouldn't have:

- the top 12 question types covered 73% of all messages

- the top 5 covered 51%

- the single top question type covered 19% on its own

- the long tail (everything outside the top 50) was 11%

the long tail everyone worries about is real but it's small. the head is way bigger than i'd assumed.

the 12 question types, in order, looked roughly like this:

  1. pricing / cost / quotes
  2. hours / availability / location
  3. shipping / delivery times
  4. product specs / does it do X
  5. refund / return policy
  6. account / login issues
  7. how to cancel / pause subscription
  8. how to contact a human
  9. discount / promo / coupon questions
  10. billing / charge questions
  11. integration questions ("does it work with X")
  12. trial / demo requests

what i think this means for anyone running a customer-facing business:

a chatbot trained on 12 well-written canonical answers covers most of your inbound. the rest can route to humans. you don't need a 200-page knowledge base for the bot to be useful. you need 12 short, confident, accurate answers and a fallback that doesn't lie.
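
concretely, the shape i mean is something like this. sketch only, not my product code; the intents and answer text are placeholders you'd swap for your own top 12:

```python
# sketch of the "12 canonical answers + honest fallback" shape.
CANONICAL_ANSWERS = {
    "pricing / cost / quotes": "plans start at $X/mo. full breakdown: /pricing",
    "hours / availability / location": "open mon-fri, 9am-5pm eastern.",
    "refund / return policy": "full refund within 30 days, no questions asked.",
    # ...one short, confident, accurate answer per top intent
}

HONEST_FALLBACK = (
    "i'm not sure about that one. what's the best email to follow up at? "
    "a human will reply, usually within a few hours."
)

def respond(intent: str | None) -> str:
    # top-12 intent -> canonical answer. anything else routes to a human
    # via a fallback that doesn't pretend to know.
    return CANONICAL_ANSWERS.get(intent, HONEST_FALLBACK)
```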

second thing, and this is the part i think about now: the questions in your top 12 are also your marketing problems. if 19% of incoming chats are asking about pricing, your pricing page is probably broken. if 8% are asking how to cancel, your cancellation flow is buried. the chatbot data is a product audit.

you don't need a chatbot to do this exercise. pull 100 emails from your support inbox or 100 messages from your contact form. categorize by intent. you'll find your top 12 too. it'll probably take less time than you think.

u/FinanceSenior9771 — 6 days ago

i run an ai chatbot product for business websites. one of the features customers pay for is "human handoff": when the bot wasn't sure or the user got frustrated, it would say "connecting you to a human" and they'd... wait. under the hood, the feature sent an email to the tenant's support inbox and that was it. no actual live chat. no agent appearing in the chat window. just a polite lie.

i knew this was the design from day one. the product positioning was "ai with smart escalation" not "ai with live chat". but users don't read product pages. they read the chat bubble that says "connecting you to a human". they reasonably assume they're about to talk to one.

i noticed because of support tickets from end users (not my customers, the people chatting with my customers' bots) saying things like "where did the human go?" and "i've been waiting 20 minutes for an agent". i was generating support load for my customers because my product was being deceptive.

two options:

  1. build actual live chat. real product work, weeks of effort, fundamentally changes positioning and pricing.

  2. stop lying.

i chose stop lying. three layers of defense:

layer 1, system prompt rule. the bot's instructions explicitly say "never tell the user a human is connecting now or coming online. offer to follow up via email but never imply live chat." this is the ai-side guardrail.
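
paraphrased (not my exact prompt, but close in spirit), the rule reads like:

```python
# paraphrase of the escalation rule appended to the bot's system prompt
ESCALATION_RULE = """
escalation:
- never tell the user a human is connecting now or coming online.
- never say "connecting you" or "transferring you" or imply live chat.
- you may offer a follow-up: ask for the user's email and say a human
  will reply there, usually within a few hours.
"""
```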

layer 2, tool name and description. the function the bot calls to escalate is named `request_human_followup`, not `connect_to_human`. the description literally says "this collects an email so a human can follow up later. not live chat." this matters because the model picks tools based on names and descriptions. a tool named `connect_to_human` was implicitly setting the model up to over-promise.
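
for the curious, a hypothetical version of that tool definition in openai-style function-calling format (exact schema depends on your provider; the name and description are the load-bearing parts):

```python
# hypothetical tool definition, openai-style function-calling schema.
# the name and description are what the model reads when deciding
# whether and how to call the tool.
ESCALATION_TOOL = {
    "type": "function",
    "function": {
        "name": "request_human_followup",
        "description": (
            "collects an email so a human can follow up later. "
            "not live chat: no agent joins this conversation. "
            "only call this after the user has provided an email address."
        ),
        "parameters": {
            "type": "object",
            "properties": {
                "email": {"type": "string", "description": "user's contact email"},
                "summary": {"type": "string", "description": "what the user needs and what the bot already tried"},
            },
            "required": ["email"],
        },
    },
}
```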

layer 3, handler gate. escalation now requires email capture before it completes. the bot asks "what's the best email to follow up at?" and only after a valid email comes in does the system send the notification. previously the bot would escalate on any frustration signal. now it doesn't escalate without contact info, because escalating without contact info means there's nothing to follow up on anyway.
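
the gate itself is small. a sketch with hypothetical names and a deliberately loose email check; the real handler also attaches the transcript and page url:

```python
import re

EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")  # loose, but enough to gate on

def notify_support_inbox(email: str, transcript: str) -> None:
    # placeholder for the real notification: emails the tenant's support
    # inbox with the contact address and full conversation context
    print(f"[support notification] follow up with {email}\n{transcript}")

def handle_escalation(email: str | None, transcript: str) -> str:
    # the gate: escalation does not complete without contact info,
    # because without it there's nothing to follow up on anyway.
    if not email or not EMAIL_RE.match(email):
        return "what's the best email to follow up at?"
    notify_support_inbox(email, transcript)
    return f"we'll follow up at {email} as soon as someone is available, usually within a few hours."
```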

i rewrote the user-facing message too. "connecting you to a human" became "we'll follow up at {email} as soon as someone is available, usually within {hours}". less exciting. more honest. sets the right expectation.

result: tenant-side support load from "where's my agent?" complaints dropped to basically zero. handoff completion rate (people actually leaving an email) went up because the gate forced it. follow-up-to-conversion rate went up too because leads now had context (full transcript, page url, what the bot tried, where it failed) instead of arriving cold.

the meta-lesson is the part i think about most: friction that's honest beats friction that's hidden. i added a step (email capture) and a slower message ("within hours" instead of "now") and the experience got better because users had accurate expectations. the previous "fast" path was actually slower in practice because users sat there waiting for nothing.

if you're building any kind of ai-with-escalation product, audit your escalation messaging. is your bot promising something the system doesn't deliver? "connecting you" implies a connection. "transferring you" implies a transfer. if the actual mechanism is an email notification, say that. users handle slow-and-honest fine. they don't handle fast-and-fake.

u/FinanceSenior9771 — 16 days ago