This is an expensive pattern startups follow.
Year one: founder does outreach manually. LinkedIn messages, cold emails, a spreadsheet of prospects. They tell themselves they'll build a proper system when things pick up. Things pick up, but the system never gets built.
Year two: the spreadsheet is a mess, follow-ups are slipping, good leads are going cold because nobody caught them at the right moment. They hire an SDR to fix it.
Year three: the SDR is busy but pipeline quality is still inconsistent. Some leads convert, most don't, and nobody really knows why. So they hire a sales consultant or a head of sales to figure out what's broken.
That person spends the first few months not doing what they were hired for. They're trying to understand why outreach isn't converting, when the real problem was never the outreach itself.
There's a difference between someone who is vaguely aware of your product and someone who is actively feeling the pain you solve right now. Most founders spend all their time and money on the first group and wonder why nothing is closing.
The sequence matters.
You need to know who has intent before you spend anything on reaching them. An SDR without intent signals is just sending volume into a void. A sales consultant without clean targeting data is just expensive opinions.
Most early stage founders don't need a bigger sales team. They need to know which people in their market are already raising their hand right now.
So, genuinely curious: what are people actually using to figure out who has real buying intent before they reach out?
Because the tool that changes everything isn't the impressive-sounding one. It's the boring one that just makes sure you're talking to the right person at the right moment.
Has no one figured out a tool for this?
I'm a dev and they asked me to cover QA for a few weeks because our QA quit and apparently I have a habit of finding bugs.
I said sure, bring it on, how hard can it be?
Features are coming in with no error handling, no input validation, not even close to the design specs. I write up detailed feedback cards, screenshots, screen recordings, the works. They come back "fixed". Half the issues are still there and there are three new ones.
And somehow I'm the reason tickets aren't shipping.
I've been a dev for years and I genuinely cannot explain how you look at a design, build something that doesn't match it at all, and then send it for testing with full confidence.
But the part that really upsets me is the social engineering: publicly framing me as the bottleneck because I keep failing their tickets, as if the tickets are failing because I'm being difficult and not because the features aren't finished.
I thought this team was solid
QA people, I owe you an apology. I had no idea it was like this.
On paper it was a great role: good company, good team, a product I actually believed in, a manager who genuinely cared. I got promoted after about a year, which felt like validation that I was doing something right.
Two months after the promotion I handed in my notice.
My manager asked me why in the exit interview, and I told him the truth even though it was uncomfortable.
I didn't get into QA to spend my days fixing broken test scripts.
That's what the job had become. A developer pushes a change, something in the selector-based automation breaks, and I spend the morning figuring out which failures are real bugs and which are failing because someone renamed a CSS class or moved a component two pixels to the left. In the afternoon I'm updating scripts. The next day, same thing.
Somewhere in between I was supposed to be actually thinking about quality: finding the edge cases nobody thought of, understanding the product deeply enough to know when something felt off before it became a bug report from a customer.
The automation was supposed to free me up to think. Instead it had become the entire job, and the promotion just meant I was now responsible for a larger version of the same problem with less time to do anything else.
I'd fix twenty broken selectors on Monday and by Wednesday half of them would be broken again because someone had done a perfectly reasonable refactor. Nothing I was doing was making the product better. I was just keeping a system alive that existed to keep itself alive.
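For anyone who hasn't lived this, the failure mode is that the scripts key off presentation details instead of stable hooks. A toy sketch of the difference (plain Kotlin, not a real test framework; the `Element` model and all the names here are made up for illustration):

```kotlin
// A page element as a test script sees it: a styling hook vs. a stable test hook.
data class Element(val cssClass: String, val testId: String?)

// Brittle: keyed to a styling detail that refactors routinely rename.
fun findByClass(dom: List<Element>, cls: String): Element? =
    dom.find { it.cssClass == cls }

// Resilient: keyed to an attribute that exists only for tests.
fun findByTestId(dom: List<Element>, id: String): Element? =
    dom.find { it.testId == id }

fun main() {
    val beforeRefactor = listOf(Element("btn-primary", "checkout"))
    // A perfectly reasonable refactor: class renamed, behavior unchanged.
    val afterRefactor = listOf(Element("button--primary", "checkout"))

    check(findByClass(beforeRefactor, "btn-primary") != null) // script passes
    check(findByClass(afterRefactor, "btn-primary") == null)  // script "breaks"
    check(findByTestId(afterRefactor, "checkout") != null)    // still found
}
```

None of this removes maintenance, but keying scripts to dedicated test ids at least means a Monday CSS rename doesn't eat Wednesday.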
I sat down with a friend who also does QA and asked her how much of her week was spent maintaining tests versus actually testing things.
She laughed at me, lol.
I became a QA engineer because I genuinely care about quality. I like finding the thing nobody thought to look for. I like understanding a product well enough to know where it's likely to break before it does. That's the job I signed up for.
What I was actually doing was being a janitor for a suite of brittle scripts that nobody had time to rethink, because everyone was too busy keeping it running.
I don't regret leaving.
I think a lot of founders are accidentally treating LinkedIn like a scoreboard.
They post, get likes, feel like distribution is working, and move on. But the whole point of the post was never the likes. It was the people behind them.
If the right person engages with your content and nobody follows up, that’s not a content win. That’s a missed lead.
What makes this worse is that the signal is already there. You don’t need a new channel. You don’t need a bigger budget. You just need a better process for noticing who actually engaged and deciding whether they belong in pipeline.
Most teams are very good at measuring impressions and very bad at measuring intent. So they end up celebrating visibility while ignoring the people who were closest to converting.
I get why it happens. It’s easier to report reach than to operationalize follow-up. But if your content is getting engagement from the right accounts and none of that turns into conversations, you’re not really building pipeline. You’re collecting applause.
QA got domesticated. And most dev teams did it deliberately.
A QA engineer pushes back on a design decision in standup (“this flow doesn’t make sense for the user”) and the PM gives them the look that means not your job. A tester flags a bug two hours before release and, instead of a thank you, gets a twenty-minute conversation about why it’s coming up now. A regression gets caught and the post-mortem asks why QA didn’t find it sooner, in a tone that makes clear they shouldn’t have been looking that hard in the first place.
QA adapts. They learn. They stop challenging. They stop asking why. They run the scripts, log the tickets, retest the fixes. They become extremely efficient at a job that stopped being QA about eighteen months ago.
Then developers complain that QA doesn’t add value anymore.
The audacity is staggering.
You want a QA partner who catches design flaws before the code is written, challenges product assumptions, tells you your edge case handling is a disaster waiting to happen? That person exists. You just spent two years making it professionally unsafe for them to do any of those things and now you’re surprised they stopped.
Real QA is uncomfortable. It’s supposed to be. Someone whose job is to find every way your work is wrong, before users find it for you, is not supposed to feel like a smooth part of the process. The moment QA stops creating friction is the moment it becomes decoration.
Most teams don’t have a QA problem. They have a culture that selected against good QA so gradually nobody noticed until it was gone.
Going to say something that will annoy a lot of people in this thread.
Most early stage founders don’t have a lead gen problem. They have a clarity problem they’re using lead gen to avoid. And the entire GTM tooling industry is built to help them avoid it more efficiently.
Apollo, Clay, Instantly, Smartlead, Traxy are all great tools. They’re also the best way ever invented to scale a message nobody wants to hear to thousands of people who don’t have the problem badly enough to care.
I’ve watched teams run sequences to 3,000 contacts and get 4 calls. Then the same team manually identified 40 companies with a very specific trigger (new funding, a relevant new hire, a job post revealing an active pain) and got 11 calls from those 40. The difference wasn’t the tool. It wasn’t the copy. It was that they actually knew why those 40 people should care right now.
“Do things that don’t scale” is repeated constantly and implemented almost universally wrong. It doesn’t mean send cold emails manually. It means get so specific about who you’re targeting that you couldn’t automate it even if you wanted to.
Most founders skip that step entirely and go straight to automation because specificity is hard and sequences feel like progress.
The 40-50 leads you have with rough conversion: do you actually know why they didn’t buy? Not your assumptions. Did you ask them directly and sit with the answer? Because that conversation is worth more than your next 500 contacts. Most founders don’t do it because the answer is uncomfortable.
If conversion is rough across every channel, every message variant, every persona, that is not a funnel problem. That is the market telling you something you don’t want to hear. And no sequence optimisation fixes that. It just delays the moment you have to hear it.
The teams that hit 100 customers fastest were embarrassingly specific embarrassingly early. They talked to 20 of those people before touching any tooling.
Most early GTM is just expensive avoidance dressed up as execution.
The deal was basically closed. Procurement was involved. We’d done three calls.
The final demo was in their boardroom. Their IT guy pulled out a Samsung Galaxy Tab A. Nobody on our team owned one. Nobody had tested on one. Our UI was built on an iPad-first assumption and on their tablet the primary action button was sitting behind the system navigation bar.
You couldn’t tap it. It was just visually gone, half an inch cut off at the bottom.
The client was polite about it. Said they’d circle back. They didn’t.
We found out later they went with a competitor. I don’t know if that was the reason but I know it was in the room when they made the decision.
The fix was 3 lines of code: proper inset handling for Android navigation bars. We just never tested on a device where it mattered, because everyone on the team had flagships with gesture navigation.
Their tablet had 3-button nav. Different inset entirely.
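The post doesn’t include the actual code, but that class of fix on Android usually looks something like this (a sketch assuming a classic View-based layout; `bottomActionBar` is a hypothetical reference to whatever container holds the primary button):

```kotlin
import android.view.View
import androidx.core.view.ViewCompat
import androidx.core.view.WindowInsetsCompat
import androidx.core.view.updatePadding

// Pad the action container by the navigation bar inset the system reports:
// a thin strip under gesture nav, the full bar height under 3-button nav.
fun keepActionAboveNavBar(bottomActionBar: View) {
    ViewCompat.setOnApplyWindowInsetsListener(bottomActionBar) { view, insets ->
        val navBars = insets.getInsets(WindowInsetsCompat.Type.navigationBars())
        view.updatePadding(bottom = navBars.bottom)
        insets
    }
}
```

The reason to route this through the insets API instead of hardcoding padding is exactly the bug in the story: gesture nav and 3-button nav report different bottom insets, so only the device can tell you the right number.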
I’m not even angry about it anymore. I’m just precise about device testing now in a way I wasn’t before that meeting.
here's the hiring sequence i've watched play out at probably a dozen startups:
year one: founder does the books in a spreadsheet. it's fine. they tell themselves they'll clean it up later.
year two: the spreadsheet is a disaster. they hire an accountant to file taxes once a year. feel like they've solved it.
year three: investor asks for monthly financials. founder realizes they don't actually know their burn rate, their gross margin, or what's in accounts receivable. they panic-hire a fractional CFO.
the fractional CFO spends the first three months not doing CFO work. they're cleaning up two years of bad bookkeeping because none of the foundational work was done right. you're paying CFO rates for bookkeeper work because you skipped the bookkeeper.
the roles are actually distinct and the order matters:
a bookkeeper keeps the records clean in real time. categorizes transactions, reconciles accounts, makes sure the data going in is accurate. this is not glamorous. this is also the foundation everything else depends on.
an accountant interprets the records. taxes, compliance, financial statements, strategic advice on structure. they need clean books to do this well. if your books are a mess your accountant is spending half their time being a bookkeeper and billing you accordingly.
a CFO makes decisions with the financial data. fundraising strategy, runway modeling, unit economics, investor relations. they need accurate books and competent accounting under them. a CFO with no clean data is just an expensive person with opinions.
the controversial part: most early stage startups do not need an accountant yet. they definitely don't need a CFO. they need someone to keep their books clean for $300-500 a month so that when they do need an accountant, the accountant can actually do accountant work.
we actually just switched to traxy for this at our company. it's not a full accounting solution but that's kind of the point it keeps the day to day stuff clean so when our accountant shows up there's actually something coherent to work with. been cheaper than the part-time bookkeeper we had before and the books are in better shape.
the irony is the tool that's helped most isn't the impressive-sounding one. it's the boring one that just makes sure the data going in is correct.
the reason founders skip the bookkeeper: it feels too small. too administrative. not strategic enough. so they either do it themselves badly or jump straight to hiring someone impressive-sounding who ends up doing the unglamorous work anyway at five times the cost.
your books are either clean or they're not. no amount of CFO credibility fixes data that was never recorded correctly in the first place.
found a critical bug two hours before a release last month. payment flow, specific device and OS combo, would have hit maybe 15% of users. caught it. nothing shipped broken.
the response was not a thank you. it was "why didn't we know about this sooner?"
i've thought about this a lot. the bug was written by a developer. it lived in the codebase for two weeks. QA found it in the last two hours before it went to users. and somehow the person in trouble is QA.
this is completely backwards and teams don't notice because it's so normalized.
here's what actually happens when a bug ships to production: it's called a "production issue." not a developer issue. not a code quality issue. a production issue. the language is passive. stuff just happens in production sometimes. bad luck. hot fix incoming. post-mortem scheduled.
but when QA holds a release: someone missed this. why wasn't this in earlier testing. who signed off on this.
the accountability only becomes personal when QA is involved. when it slips through to users it becomes a systems problem. the asymmetry is wild once you see it.
The accountability asymmetry is the part that messes with me most. A bug ships to production and it becomes a ‘production issue’: passive, systemic, nobody’s fault. QA flags something before release and suddenly it’s personal. ‘Why didn’t we catch this sooner?’ gets directed at a person, not a process.
The incentive this creates is genuinely dangerous. It quietly trains QA teams to underreport confidence so they’re never caught holding a win that gets reframed as a near-miss.
I’ve been thinking about this more since we started running pre-release flow checks with drizz. Having an audit trail of what was actually tested and when changes how the conversation goes. The bug was introduced on day 8. It was caught on day 14. That’s not a QA failure; that’s the system working. The language around it needs to match.
i don't think most developers are doing this consciously. i think the incentives are just completely broken. shipping feels like progress. holding feels like failure. and QA is the one calling for the hold so QA gets the association.
the "why didn't we catch this sooner" response to a pre-release catch is honestly one of the most demoralizing things you can say to a QA team. you caught it. that's the job. that's a win. treating it as a near-miss rather than a save is how you train people to stop looking hard.
half my traffic is mobile. these are people holding their phone one-handed, probably doing something else at the same time, with a thumb that can reach about 60% of the screen comfortably.
they land on a product page. they’re interested. they hit one question.
the store says “contact us” or opens a chat widget that requires them to type a paragraph and wait for a reply.
they close the tab.
the patience window on mobile is genuinely tiny. people aren’t sitting at a desk carefully evaluating options. they’re in a moment of interest and that moment passes fast.
the stores that convert well on mobile tend to have figured out how to eliminate friction at the exact point where a buyer needs reassurance. not through better design. through actual responsiveness.
i think the next meaningful unlock in mobile conversion isn’t another ui improvement. it’s stores that can actually respond to what someone needs in the moment they need it.
we put enormous effort into product pages. copy, photography, layout, social proof. and it all works great right up until the buyer has a question.
then it just stops. they either fill out a contact form and wait, or they leave.
the thing that gets me is we know conversion happens in a window. there’s a moment when someone is genuinely considering buying and they need one thing resolved. if you’re there for that moment, they buy. if you’re not, they close the tab and move on with their day.
agentic commerce is interesting to me specifically because it’s not about adding another feature. it’s about whether the store can actually participate in that moment or just sit there looking nice while the buyer decides alone.
the original idea was to reduce support tickets. answer the repetitive stuff automatically. free up time.
that part worked fine.
what i didn’t expect was that a chunk of those “support” conversations were happening before the purchase. people asking questions they needed answered to feel confident buying. shipping timelines. compatibility. whether something would work for their specific situation.
those aren’t support conversations. those are the last 10 seconds of a sale.
we’ve been using superU for this. it started as a support thing and ended up becoming the part of the funnel that was previously just silence.
i wouldn’t have predicted that when i started. the product is the same. what changed is the store can now actually respond.
we tweaked copy. tested new layouts. a/b tested the button color. the whole thing. spent real time on it.
barely moved.
it took me an embarrassingly long time to realize the problem wasn’t the page. the problem was that people had questions and the page couldn’t answer them. they weren’t leaving because the design was bad. they were leaving because nobody was there.
that’s a fundamentally different problem than a conversion rate problem. you can’t fix it with copy.
we started thinking about it as a response problem instead. put an AI voice layer on the store using superU to catch those “wait, one more thing before i buy” moments. it’s messy and not fully figured out but the conversations it’s having are exactly the ones we were losing before.
the thing that still sits with me is how long we optimized for the wrong thing. the page was fine. the store just couldn’t talk back.
this is embarrassing to admit, but i added chat to our store thinking it would fix support.
it didn’t.
people on mobile with one quick question? they type half a sentence, get distracted, and bounce. chat just adds another step.
voice actually feels faster for that moment — like talking to someone who can answer right away.
we’ve been playing with superU for agentic commerce stuff like this. it’s not perfect but it cuts the hesitation better than typing.
SoundHound acquired LivePerson for $250 million.
xAI launched standalone voice APIs with pricing so aggressive it went directly at ElevenLabs and Deepgram.
Google shipped Gemini 3.1 Flash TTS and topped the entire Artificial Analysis leaderboard at 1,211 ELO.
Phonely raised $16M Series A.
Cloudflare shipped voice on Workers moving toward production.
All of this in one week.
When Google, xAI and Cloudflare all move on the same layer in the same week, that layer is not the opportunity anymore. That is the hyperscalers announcing that the infrastructure is now a commodity. Cheap, fast, available everywhere, margin compressed to zero.
This is exactly what happened to cloud storage. To compute. To databases. The moment AWS made S3 cheap, the companies whose only product was "we store your files" ceased to exist. The value moved up the stack to the companies that did something meaningful with the files.
Voice just had its S3 moment.
The API wrapper companies, the ones whose pitch is essentially "we make STT and TTS slightly easier to use," are not going to say this out loud. But their Series A decks just got a lot harder to write.
What this actually creates is a wide open lane.
Not for more infrastructure. For use cases. Workflows. Industries. The specific problems that raw voice infrastructure cannot solve by itself.
Think about what that means in practice.
A D2C brand does not have a voice infrastructure problem. They have a cart abandonment problem. A COD confirmation problem. A post-purchase retention problem. The infrastructure to solve those problems just became cheap and available to anyone.
The companies that win from here are not the ones with the best latency benchmarks.
They are the ones who understood a specific customer's problem deeply enough to build a workflow that actually solves it.
That is the playbook. Every time a layer commoditises, the value moves up. Every time hyperscalers enter, the indie companies that survive are the ones who went vertical instead of trying to compete horizontally.
The infrastructure wars are for Google and xAI.
The use case wars just opened up.
And most people are still arguing about latency benchmarks.