
If your monthly software bill is over $1,500, you're probably wasting ~$500 of it. Checklist before your next renewal.

Quick disclosure up top: I build custom tools for a living. Not pitching anything.
Sharing the checklist I run on my own subscriptions because it works the same whether you hire someone or do it yourself.

If your monthly software bill is creeping past $1,500/mo, you're probably wasting roughly a third of it. That's what every audit I've sat through keeps showing. Three things stack:

  • 38-44% of paid seats haven't been used in 30+ days (Zylo, Productiv)
  • 30-40% of spend pays for overlapping features (BetterCloud)
  • Roughly two-thirds of small business apps got bought by individual employees on personal cards (Productiv)

Order I'd run it in:

Step 1. Pull every recurring software charge off your card statements. The card statements specifically, because the "tools we use" list is fiction. There are almost always 30-50% more tools on the card than in the doc.
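If you want to make step 1 less tedious, a rough filter over a card CSV export gets you most of the way. A minimal sketch in TypeScript; the column names are my assumption, not any bank's real export format:

```ts
// Sketch: flag likely subscriptions in a card export.
// Assumed columns: date, merchant, amount (not a real bank format).
interface Txn {
  date: string;
  merchant: string;
  amount: number;
}

function likelySubscriptions(txns: Txn[]): string[] {
  const byMerchant = new Map<string, number[]>();
  for (const t of txns) {
    byMerchant.set(t.merchant, [...(byMerchant.get(t.merchant) ?? []), t.amount]);
  }
  // Same merchant billing the same amount 3+ times is almost always a subscription
  return [...byMerchant]
    .filter(([, amts]) => amts.length >= 3 && new Set(amts).size === 1)
    .map(([merchant]) => merchant);
}
```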

Step 2. For each tool, six questions:

  1. Does someone actually use this every week, or is it the "we might need it" subscription?
  2. How many features do we genuinely touch vs. what we're paying for?
  3. Is it wired into 1-3 other tools, or 5+?
  4. Could a competent dev rebuild the workflow in 4-6 weeks?
  5. Do we genuinely need SOC 2 / HIPAA on this specific tool? (Most small businesses don't.)
  6. Have we negotiated the price at renewal in the last 12 months?

Answer "no, few, 1-3, yes, no, no" and the subscription is either replaceable or renegotiable.

Step 3. Before doing anything else, call the vendor and threaten to cancel. The dirty secret of SaaS: most vendors will quietly drop the price 40-50% at renewal once you mention you're evaluating alternatives.
Cheapest "replacement" you'll ever do!

Step 4. If you still want to replace, pick the highest-spend tool that flunked the test. One project. Don't try to consolidate five tools at once or you'll be paying for both versions six months in.

Real numbers from things I've replaced:

→ $300/mo outreach tool

Build: $6,500 · Runs at: $74-104/mo · Break-even: month 29-33

→ $400/mo content + LinkedIn stack

Build: $5,000 · Runs at: $46/mo · Break-even: month 14

→ $1,200/mo CRM + booking + follow-up stack

Build: $14,000 · Runs at: $100/mo · Break-even: month 12-13
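For anyone checking my math, break-even is just build cost divided by monthly savings:

```ts
// Break-even month = build cost / (old monthly cost - new run cost)
const breakEven = (build: number, oldMo: number, runMo: number) =>
  build / (oldMo - runMo);

breakEven(6500, 300, 74);    // ~29 months (best-case run cost)
breakEven(6500, 300, 104);   // ~33 months (worst case)
breakEven(5000, 400, 46);    // ~14 months
breakEven(14000, 1200, 100); // ~13 months
```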

Categories where the math almost always works: scheduling, outbound, social posting, starter CRMs, scraping subs, simple dashboards.

Categories where it almost never works: Stripe, email sending, Slack, Notion, anything payroll. Don't fight those, you'll lose.

Worst mistake I keep watching owners make: see the math, get excited, try to kill five tools at once with nobody actually running the project. Six months in they're paying for old and new. One tool. One project. Then move on.

Anyone here actually run this on their own stack lately? What did your waste percentage land at?

u/No_Cryptographer7800 — 3 days ago

Quick context: I comment on LinkedIn daily for inbound, and every tool I tried was either (a) $40-100/mo SaaS with generic AI slop, (b) an engagement pod that nukes your reach the moment it gets detected, or (c) a Chrome extension that publishes whatever GPT spits out with zero review.

I wanted something that drafts in MY voice, lets me approve every comment before it goes out, and doesn't charge me a per-seat fee on top of the Claude subscription I already pay for. So I built it as a Claude Code skill and put it on GitHub:

https://github.com/dancolta/linkedin-commenter

How it works:

- npm run scan -> scrapes your feed, filters posts (age, length, comment count, dupe authors in last 14 days, keyword/author allow/blocklists), drafts comments via Claude using a voice profile you generate from a 15-question wizard, queues drafts to a Notion DB

- You read drafts in Notion, edit if needed, flip status to approved

- npm run publish -> likes the post, types the comment at human speed via Playwright (35-90ms/keystroke, jitter, scroll delays), archives the row
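The human-speed typing is roughly this; a sketch of the idea, not copied from the repo, and the function names are mine:

```ts
// Sketch of the human-speed typing step (names are mine, not the repo's)
import { Page } from 'playwright';

const jitter = (min: number, max: number) =>
  min + Math.random() * (max - min);

async function typeLikeHuman(page: Page, selector: string, text: string) {
  await page.locator(selector).click();
  for (const ch of text) {
    await page.keyboard.type(ch);
    // 35-90ms per keystroke, per the range above
    await page.waitForTimeout(jitter(35, 90));
  }
}
```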

Guardrails because I don't want my account nuked:

- Daily caps that ramp (5/day first 3 days, 10/day days 4-7, 15+ after)

- 14-day cooldown on the same author

- 3-layer validation: rejects drafts with hashtags, emoji, em-dashes, "great post", "leverage", "synergy", repetitive openers

- Auto-pause if it detects captchas, login redirects, or 3 failures in an hour

- No daemon mode. Only runs when I explicitly call it.
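The cap ramp and the validation layer are both dead simple. Again a sketch of the logic, not the repo's actual code:

```ts
// Ramping daily cap (thresholds from the list above)
function dailyCap(daysSinceStart: number): number {
  if (daysSinceStart <= 3) return 5;
  if (daysSinceStart <= 7) return 10;
  return 15;
}

// One of the validation layers: reject drafts with banned patterns
const BANNED = [
  /#\w+/,                       // hashtags
  /\p{Extended_Pictographic}/u, // emoji
  /\u2014/,                     // em-dashes
  /great post/i,
  /leverage/i,
  /synergy/i,
];

const passesValidation = (draft: string) =>
  !BANNED.some((p) => p.test(draft));
```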

What it's NOT: no headless mode, no DM outreach, no cloud, no managing other people's accounts. Deliberately. Manual approval is the whole point.

Stack: Node 20, TypeScript, Playwright, Notion API, SQLite, Claude CLI.

Honest caveats:

- This violates LinkedIn ToS section 8.2 like every other tool in this sub. Use at your own risk, on a throwaway first if you care.

- If LinkedIn changes selectors, the scraper breaks until I (or you, it's open source) patch feed-scraper.ts.

- You need a Claude subscription (Pro is fine) for the drafting.

Free, MIT licensed, PRs welcome. Happy to answer setup questions in the comments.

u/No_Cryptographer7800 — 3 days ago

Built a tool to test a hypothesis: most cold email 'personalization' is fake (job title, company size, recent funding) and that's why reply rates are dogshit. What if you grounded the email in the prospect's actual customer pain?

The setup

Pulled the last 6 months of negative Trustpilot reviews for each target company. Fed the review text to Gemini 2.5 Flash with instructions to cite specific complaints (timing, delivery, service issues). Generated 3 variants per lead (Direct Value / Curiosity Gap / Peer Comparison). Sent via Gmail.
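The drafting step is one model call per lead. Something like this, with SDK usage per the @google/generative-ai docs; the prompt wording and function name are mine:

```ts
import { GoogleGenerativeAI } from '@google/generative-ai';

const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY!);
const model = genAI.getGenerativeModel({ model: 'gemini-2.5-flash' });

// Ground the email in real complaints instead of fake personalization
async function draftEmail(company: string, negativeReviews: string[]) {
  const prompt = [
    `Write a short cold email to ${company}.`,
    'Cite specific complaints (timing, delivery, service issues).',
    'Complaints from their Trustpilot reviews, last 6 months:',
    ...negativeReviews.slice(0, 20),
  ].join('\n');
  const res = await model.generateContent(prompt);
  return res.response.text();
}
```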

The numbers

- 70 emails sent (single inbox, 30-40/day cap for deliverability)

- 14 replies (20% reply rate)

- 7 booked meetings (10% of sends, half of replies)

- Cost: ~$0.003/lead (Gemini Flash + skip-path on high-rated companies)

What worked

References to actual review patterns ('I noticed 12 complaints about delivery delays in the last 60 days') landed better than generic angles. People clearly knew this wasn't a templated send.

What didn't

Companies with <5 negative reviews in 6 months → AI cherry-picks weak signals → email loses punch. Adding a min-review threshold next iteration to auto-skip those at $0.

The skip-path

This is the part I'm proud of: before any model call, the tool checks rating + review count and bails for free if it's not a good fit. No more burning API budget on 4.5★+ companies that don't have real pain to reference.
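In code the skip-path is one cheap check before any API call. A sketch; the thresholds are the ones mentioned in this post, the names are mine:

```ts
// Bail for free before any model call when there's no real pain
interface CompanySignal {
  avgRating: number;        // Trustpilot average
  negativeCount6mo: number; // negative reviews, last 6 months
}

const shouldSkip = (c: CompanySignal) =>
  c.avgRating >= 4.5 ||    // happy customers, nothing to reference
  c.negativeCount6mo < 5;  // too few signals, model cherry-picks
```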

What reply rates are you seeing with your current personalization layer? Is grounding in customer pain a real lift or just a clever trick?

P.S. the tool I used is open-sourced and free on GitHub, but subreddit rules don't allow me to post the link

u/No_Cryptographer7800 — 17 days ago