u/darterweb

▲ 4 r/AIWritingHub+1 crossposts

The 5th pattern I cut from my AI ebook post (and why I shouldn't have)

Last week I posted "4 prose patterns that betray an AI draft" here (https://www.reddit.com/r/WritingWithAI/comments/1t583ro/4_prose_patterns_that_betray_an_ai_draft_and_the/). It went further than I expected. Thanks to everyone who commented, especially for the conversation about "hollow buzzwords" (tapestry, delve, navigate, leverage).

That comment changed how I think about pattern detection. I had originally drafted FIVE patterns. I cut the 5th because the post was getting too long. After re-reading the comments, I think I cut the wrong one. Here it is:

**Pattern 5: The closing that summarizes instead of landing.**

AI drafts almost always end chapters (and books) with a recap. "In this chapter, we explored..." or "As we've seen..." It feels like the model is trying to prove it understood its own argument.

Real writing rarely does this. It trusts the reader to remember what they just read, and it ends on the strongest sentence, not on meta-commentary about what the chapter contained. (The main exception is very technical writing, like a research paper, where a closing summary earns its place.)

**Why AI defaults to it:** the model treats every chapter like an essay with a required conclusion paragraph. Most non-fiction writing in the training data has this structure (textbooks, blog posts, academic articles), so AI replicates it.

**The fix:** read the last paragraph of every chapter. If it starts with "in this chapter," "to summarize," "as we've discussed," or any reference to the chapter being a chapter — delete the entire paragraph. The chapter ends one paragraph earlier than you think.

I tested this on three drafts. In every case, the chapter was stronger without the recap. The reader doesn't need to be reminded what they just read 30 seconds ago.
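This check is mechanical enough to script. Here's a rough sketch in Python; the phrase list is just my starting set (extend it with whatever your drafts produce), and it assumes chapters are plain text with blank lines between paragraphs:

```python
# Openers that usually signal a recap paragraph; my own list, not exhaustive.
RECAP_OPENERS = (
    "in this chapter",
    "to summarize",
    "to sum up",
    "as we've discussed",
    "as we have seen",
    "as we've seen",
)

def flag_recap_closer(chapter_text: str) -> bool:
    """Return True if the chapter's last paragraph opens like a recap."""
    paragraphs = [p.strip() for p in chapter_text.split("\n\n") if p.strip()]
    if not paragraphs:
        return False
    last = paragraphs[-1].lower().lstrip('"\u201c')
    return last.startswith(RECAP_OPENERS)
```

If it returns True, the fix is the one above: delete that whole paragraph and let the chapter end one paragraph earlier.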

**Bonus pattern from the comments last week:**

u/Overall-Fishing-8598 pointed out the "hollow buzzwords" trap (tapestry, delve, navigate, leverage, etc.). I'd add: every time you use one of those words, ask yourself what concrete thing or action it's standing in for.

"Navigate the challenges of X" → "Decide what to do when X happens." "Leverage your skills" → "Use what you already know." Specificity kills AI tone faster than any other edit.
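You can grep for these by hand, but a tiny script makes the pass repeatable. A sketch; the first four words are from the thread, the last two are my own additions:

```python
import re

# Words that usually mark AI filler. First four from the thread;
# "landscape" and "realm" are my additions.
BUZZWORDS = {"tapestry", "delve", "navigate", "leverage", "landscape", "realm"}

def flag_buzzwords(text: str) -> list[tuple[int, str]]:
    """Return (line_number, word) pairs for every buzzword occurrence."""
    hits = []
    for i, line in enumerate(text.splitlines(), start=1):
        for word in re.findall(r"[a-z]+", line.lower()):
            if word in BUZZWORDS:
                hits.append((i, word))
    return hits
```

Each hit is a spot to ask the question above: what concrete thing is the word standing in for?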

If anyone wants the full editing checklist (now 5 patterns + the buzzword fix), I added it to the free guide.

What other patterns have you noticed in your own AI drafts? Curious what I'm still missing.

u/darterweb — 3 days ago

Hi! I've been building scribora.ai (notes → ebook tool) for the last month and a half.

Tuesday night I wrote a thread on X breaking down 4 patterns that make AI-written ebooks unreadable. I was actually pretty proud of it, and I posted it at 11pm. Result: 7 views.

Wednesday morning I turned a slightly extended version of the same content into a Reddit post on r/WritingWithAI. Different format (single long post instead of a thread), but basically the same words and content.

48 hours later:

- 16K views

- 26 upvotes (88% ratio)

- 102 shares

- 12 comments, including one from a writer who built their own framework around the same problem

- 26 people signed up for the free editing checklist

The wild part: I almost didn't post on Reddit at all, because "X is where indie hackers are," so that's where I felt I had to be. The indie hackers might well be on X, but my actual users, people who write, are on Reddit (and maybe somewhere else too), looking for solutions to specific writing problems. I had been talking to the wrong room for weeks.

A few things that I *think* made the Reddit post work:

→ Posted in a subreddit where my target user was already actively complaining about the problem I solve. r/WritingWithAI has people every week posting "why does my AI draft sound generic?" My post answered that question explicitly.

→ The post was 100% useful content. The product was mentioned in one comment, only when someone asked. The 26 signups came from people who clicked after that comment or from my profile to see who wrote it.

→ Replied to every comment in the first 6 hours. Not sure how much that mattered, but I like to believe it did 😃.

→ The shares (102) outperformed the upvotes (26) by 4x. That ratio surprised me. It suggests the post worked as something people sent to friends ("hey, this is what I was telling you about") more than something they wanted to publicly endorse with an upvote.

One of the best things we do in this subreddit is distill lessons learned and take-home messages, so here's mine: if you're building in public and your X numbers are flat, don't conclude that your content is bad. Try the same content somewhere your users actually live. Could be Reddit, could be a Discord, could be a niche forum. The mismatch between "where founders hang out" and "where my customers hang out" was my lesson of the week.

Anyone else seen huge gaps between platforms for the same content? Curious if this is a pattern or a one-off.

u/darterweb — 7 days ago
▲ 35 r/BookWritingAI+2 crossposts

Been writing/editing AI-assisted ebooks for the past year, both my own and ones made through a tool I built. After enough drafts I started noticing the same 4 patterns that scream "this was generated" even when the source material is solid. Curious if any of these match what you're seeing.

1. Fake-insight lists. "Here are 5 ways to..." or "There are 3 main reasons..." even when there's no real list to make. Items end up being parallel restatements of the same point. The fix that worked for me: if I can't argue why each list item matters separately, I kill the list and write a single paragraph.

2. Throat-clearing openings. Every chapter starts with "In today's fast-paced world..." or "Imagine a world where..." Just warm-up. Almost universally the 3rd paragraph is where the actual point lands, while the first two are scaffolding the AI built and forgot to remove. I cut them by default now.

3. No tension. This is the subtlest one. AI presents arguments without setting up the counter-position first. It just states things. Reads like a Wikipedia summary. Real writing opens a section with what the reader currently believes that's about to be challenged, and AI rarely does this unless you explicitly prompt for it (and even then it does it weakly).

4. Editing pass that adds instead of cuts. This one bit me hard. "Polish this draft" prompts make Claude/GPT add qualifiers, transitions, softening words. Overall, the prose comes back longer, not better. So I made a rule: my editing prompts can only ask for cuts or substitutions, never additions. The drafts got 30-40% shorter and dramatically tighter.
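The cuts-only rule in #4 can even be enforced mechanically: reject any "polished" draft that comes back longer than what went in. A minimal sketch (word-count comparison is my own crude proxy for "cut, don't add"):

```python
def edit_report(original: str, edited: str) -> tuple[bool, float]:
    """Accept an editing pass only if it did not grow the draft.

    Returns (accepted, percent_shorter). An edit that adds words
    is rejected outright, per the cuts-only rule.
    """
    before = len(original.split())
    after = len(edited.split())
    shorter = 100.0 * (before - after) / before if before else 0.0
    return after <= before, round(shorter, 1)
```

Wire this into your pipeline and any pass that comes back with `accepted == False` gets re-prompted instead of merged.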

The interesting one is #3: hardest to teach the AI to do, easiest to fix in editing once you know what you're looking for. I have a 1-sentence positioning brief I write before drafting that pre-loads the tension into every chapter; that fixed maybe 60-70% of the issue upstream.

I wrote up the full version of this as a 30-page free guide if anyone is interested, btw.

Two things I'm curious about and would love takes on:

  1. AI metaphors: keep or always cut? I've gone back and forth. Right now I cut maybe 80%, but every once in a while one is actually good, and I can't find a rule for which ones to keep.
  2. What's the failure pattern you keep seeing that isn't on my list? The four above are mine, but I'm pretty sure I'm missing some.
u/darterweb — 7 days ago