u/BronsonDunbar

We kept shipping the loudest feature requests until we started scoring them automatically

A painful lesson from product work: the feature request that gets the most Slack messages is not always the one that should get built first.

We spent too long treating feedback like a pile of random notes. Some came from in-app widgets, some from Slack, some from sales calls, and every team had its own favorite request. The result was predictable. Roadmap discussions turned into opinion contests instead of decisions.

What finally helped was forcing everything into one place and giving each request some structure. Instead of just collecting requests, we organized them, let people vote, and added a simple scoring layer so the same input was measured the same way no matter where it came from.
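The post does not say how the scoring layer works, but a minimal sketch of the idea, with invented field names and weights, might look like this: every request, whatever its source, gets reduced to the same few fields and scored by one function.

```python
# Hypothetical sketch of a "scoring layer" for feature requests.
# Field names and weights are illustrative, not the actual system.
from dataclasses import dataclass

@dataclass
class Request:
    title: str
    votes: int              # in-app upvotes
    customers: int          # distinct customers asking
    revenue_at_risk: float  # ARR tied to the accounts asking

def score(req: Request, w_votes=1.0, w_customers=3.0, w_revenue=0.001):
    # Breadth (distinct customers) is weighted above raw vote volume,
    # so one loud account cannot outrank a quieter, wider problem.
    return (w_votes * req.votes
            + w_customers * req.customers
            + w_revenue * req.revenue_at_risk)

backlog = [
    Request("SSO", votes=4, customers=9, revenue_at_risk=42000),
    Request("Dark mode", votes=30, customers=3, revenue_at_risk=1500),
]
ranked = sorted(backlog, key=score, reverse=True)
```

The useful property is that every channel's input lands in the same shape, so "most Slack messages" stops being the de facto ranking.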

The biggest change was not the tooling, it was the conversation. Once requests were ranked consistently, it became much easier to explain why something was delayed, why another item moved up, and what actually mattered to customers versus internal noise.

I am curious how other founders handle this. Do you trust votes, sales urgency, churn risk, or gut feel when deciding what to build next? And if you have tried to systematize it, what ended up being useful versus just extra process?

reddit.com
u/BronsonDunbar — 20 hours ago

I stopped replying to every mention and started scoring them first

I used to waste a lot of time jumping into every X, Facebook, and Reddit mention that looked remotely relevant. A lot of those replies were just noise, and a few were actually good opportunities I almost missed.

The thing that changed my process was forcing myself to judge conversations against product context before writing anything. If the thread is not a real fit, I skip it. If it is a fit, I draft a reply that is actually useful instead of trying to squeeze in a pitch.

That sounds obvious, but for a small team it made a big difference. I spent less time on random outreach, found better-fit leads faster, and stopped caring so much about vanity engagement.

I built a small side tool around that workflow for myself and a few friends. It helps score conversations first, then you can manually post the reply if it feels worth it. The manual step turned out to matter a lot, because it keeps the replies human and way less spammy.
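A toy version of that "score first, reply manually" triage could look like the sketch below. The keyword sets and threshold are made-up stand-ins for real product context, not the tool described here.

```python
# Illustrative mention triage: score a conversation against product
# context before deciding whether a reply is worth writing.
PRODUCT_TERMS = {"feedback", "roadmap", "feature", "prioritize", "requests"}
INTENT_TERMS = {"recommend", "looking", "alternative", "tool", "how"}

def mention_score(text: str) -> int:
    words = set(text.lower().split())
    # Topical fit counts once per matched product term;
    # asking/buying intent counts double.
    return len(words & PRODUCT_TERMS) + 2 * len(words & INTENT_TERMS)

def worth_replying(text: str, threshold: int = 3) -> bool:
    return mention_score(text) >= threshold

print(worth_replying("looking for a tool to prioritize feature requests"))
print(worth_replying("lol this thread again"))
```

The human stays in the loop: the score only decides whether a draft gets written, not what gets posted.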

Curious if anyone else here has a process for deciding which mentions are worth responding to. Do you triage by channel, by intent, or just by gut feel?

reddit.com
u/BronsonDunbar — 20 hours ago

For a while I was opening analytics, Search Console, revenue charts, uptime, and database stats separately every morning, and I still felt behind. The annoying part was not the lack of data, it was the time wasted figuring out which number mattered first.

The biggest lesson for me was that most small SaaS problems are not hidden, they are just spread across too many tools. Traffic drops, SEO slips, a page gets slow, conversions change, or a deploy affects the DB, but by the time you piece it together the day is already gone.

I ended up pulling the basics into one place so I could answer a simpler question: what changed, and what should I do next? That cut out a lot of random tab hopping and made it easier to spot issues before they turned into bigger problems.
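The "what changed, and what should I do next" question can be sketched as a diff against a baseline: pull each metric into one dict and surface the biggest relative movers first. Metric names and numbers here are examples, not real data.

```python
# Minimal "what changed" check: rank metrics by relative change
# against a trailing baseline, worst offenders first.
baseline = {"visits": 1200, "signups": 40, "p95_ms": 480, "mrr": 2150}
today    = {"visits": 1150, "signups": 22, "p95_ms": 910, "mrr": 2140}

def biggest_changes(baseline, today, top=3):
    deltas = {k: (today[k] - baseline[k]) / baseline[k] for k in baseline}
    # Sort by magnitude of relative change so a latency spike can
    # outrank a small traffic dip.
    return sorted(deltas.items(), key=lambda kv: abs(kv[1]), reverse=True)[:top]

for metric, change in biggest_changes(baseline, today):
    print(f"{metric}: {change:+.0%}")
```

In this toy data the page slowdown surfaces first, which matches the point below: what looks like a marketing problem is often a bottleneck.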

What surprised me was how often the answer was not "grow more" but "fix the bottleneck." A landing page slowdown, a bad keyword page, or a query issue can look like a marketing problem at first.

Curious how other micro SaaS founders handle this. Do you prefer one dashboard with a few signals, or do you still like separate tools for each part of the stack?

reddit.com
u/BronsonDunbar — 7 days ago

I kept losing the thread between a planning note, the GitHub issue it turned into, and the final deployment. By the time something shipped, the original context was usually buried across a few apps and a couple of forgotten tabs.

The biggest lesson for me was that productivity breaks down less from a lack of tools and more from a lack of connection between them. If a note, task, and outcome do not stay linked, I spend more time reconstructing context than actually moving work forward.

I started testing a workspace approach where project notes, brainstorms, issue tracking, and reporting all live in one thread. It sounds simple, but the useful part is being able to trace an idea from the first rough note all the way to the shipped result without switching systems.

I am curious how other people handle this. Do you keep planning notes separate from execution on purpose, or have you found a setup that keeps everything connected without becoming too heavy to maintain?

For smaller projects especially, I feel like the real win is not more features. It is being able to answer, quickly, what was decided, what changed, and what actually shipped.

reddit.com
u/BronsonDunbar — 7 days ago

The biggest mistake I keep seeing in agentic marketing workflows: the ideas never stay connected to the work

I kept losing the thread between a marketing idea, the execution notes, the GitHub issue, and the final reporting. By the time something shipped, nobody remembered why we started it or which experiment actually mattered.

That turned into a useful lesson for me: in agentic marketing, the hard part is not generating more output, it's preserving context as the work moves from brainstorm to action. If the system breaks there, the agent can be fast but still feel disconnected.

What helped most was keeping notes, tasks, imports, and status updates in one continuous thread instead of scattering them across tools. That way the handoff from idea to execution is visible, and the reasoning behind a decision does not disappear halfway through.

I am curious how others here are handling this. Are you keeping campaign planning, AI outputs, and reporting in one place, or are you still stitching it together manually across multiple tools?

For us, the win was less about automation and more about continuity. Once the context stayed attached to the work, it became much easier to decide what to ship, what to kill, and what to iterate on.

Would love to hear what has actually worked for other founders and marketers trying to build agentic workflows without losing the original intent.

reddit.com
u/BronsonDunbar — 8 days ago

I learned the hard way that being busy with product work can hide a simple pricing problem.

For months, I assumed our subscription price was "reasonable" because customers were still signing up. Then I compared our plan structure against a few direct competitors and realized we were undercharging in a way that was easy to miss. We were not just a little cheaper, we were leaving a lot of room on the table.

What surprised me most was that the issue was not only the monthly number. The real gap was in how we bundled features, free limits, and upgrade paths. Once I mapped those side by side, it became obvious that our pricing looked outdated compared to the market.
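That side-by-side map is easy to approximate in a few lines. All the plan numbers below are invented; the point is only the shape of the comparison.

```python
# Hypothetical side-by-side pricing map: flatten each vendor's plan
# into comparable fields, then measure where your price sits
# relative to the market average.
plans = [
    {"vendor": "us",     "monthly": 19, "seats": 5, "free_limit": 100},
    {"vendor": "comp_a", "monthly": 49, "seats": 5, "free_limit": 25},
    {"vendor": "comp_b", "monthly": 39, "seats": 3, "free_limit": 50},
]

def underpricing_gap(plans, vendor="us"):
    ours = next(p for p in plans if p["vendor"] == vendor)
    others = [p["monthly"] for p in plans if p["vendor"] != vendor]
    market = sum(others) / len(others)
    # Positive gap = we charge less than the market average.
    return market - ours["monthly"]

print(underpricing_gap(plans))
```

Extending the same table with free limits and upgrade paths is where the packaging gaps, not just the headline price, become visible.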

I am now treating pricing like something that needs regular audits, not a one-time launch decision. Even small changes in packaging can make a bigger difference than adding more features, especially if you are bootstrapping and every rupee of MRR matters.

Curious how others here handle this - do you review competitor pricing manually, use a tool, or mostly go with gut feeling until customers complain?

reddit.com
u/BronsonDunbar — 9 days ago
Crossposted to r/SystemsAndSignals and 1 other community

A lesson we learned the hard way: if customer feedback lives in five different places, your roadmap will drift fast.

For a while, we were treating feature requests like loose notes from calls, support messages, random Slack threads, and a few emails. Every request sounded important in the moment, but when it came time to choose what to build next, we had no real way to compare them.

The biggest issue was not lack of ideas. It was that we had no consistent signal. A loud request from one customer could look bigger than a quieter problem shared by many others. That made prioritization feel more like politics than product work.

Once we started collecting feedback in one place and scoring it by pattern instead of volume, the roadmap got a lot clearer. We stopped debating every request from scratch and started asking better questions like: how many customers need this, how often does it come up, and what business problem does it actually solve?
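Those three questions (how many customers, how often, what problem) can be sketched as a pattern score. This is an illustration of the idea, not the actual system; the weights and sample feedback are invented.

```python
# Sketch of "score by pattern instead of volume": breadth (distinct
# customers) dominates recurrence (raw mention count).
from collections import Counter

# (customer, request) pairs pulled from calls, support, Slack, email
raw_feedback = [
    ("acme", "bulk export"), ("acme", "bulk export"), ("acme", "bulk export"),
    ("globex", "sso"), ("initech", "sso"), ("umbrella", "sso"),
]

def pattern_scores(feedback):
    mentions = Counter(req for _, req in feedback)
    breadth = {req: len({c for c, r in feedback if r == req}) for req in mentions}
    # Three customers asking once beats one customer asking three times.
    return {req: 3 * breadth[req] + mentions[req] for req in mentions}

scores = pattern_scores(raw_feedback)
print(sorted(scores, key=scores.get, reverse=True))
```

Here one loud account repeating itself loses to a quieter problem shared across several customers, which is exactly the politics-versus-product trap described above.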

I am curious how other founders handle this. Do you have a system for turning scattered feedback into something actionable, or do you still rely mostly on intuition and a few strong customer voices?

reddit.com
u/BronsonDunbar — 10 days ago