u/Ashamed_Listen_1170

🔍 Where does Salesforce work become hardest to trust: docs, config, automation, or ownership?

I’m curious where Salesforce teams still struggle most with confidence when making changes in a real org.

Not asking about certifications or favorite tools. I mean the actual day-to-day work of changing something and not being fully sure what it will break, who owns it, or whether the documentation is even reliable.

Examples:

  • docs say one thing, org behavior says another
  • flows / Apex / permissions / automations interact in ways that are hard to trace
  • multiple teams or consultants touch the same objects
  • a request sounds simple but has hidden dependencies
  • AI helps you write faster, but doesn’t always help you understand the system better

For people working in real Salesforce environments:

  • What part of Salesforce work still feels hardest to trust?
  • Where does the uncertainty usually come from?
  • What causes more pain in practice: weak documentation, hidden dependencies, unclear ownership, or release/change risk?
  • What have you seen go wrong because the team thought they understood the system, but didn’t?

More interested in real org experience than product marketing answers.

reddit.com
u/Ashamed_Listen_1170 — 2 days ago

⚠️ 👀 🧠 What part of consulting work still depends most on judgment rather than analysis?

I’m curious where consulting work still relies more on human judgment than most people admit.

Not the obvious “do the math” part — more the moments where the data is incomplete, the client context is messy, and the team still has to decide what story to tell or what to recommend.

Examples might be:

  • deciding which insight actually matters
  • knowing whether a recommendation will land with the client
  • choosing what to leave out of the deck
  • figuring out whether the issue is bad analysis, weak narrative, or political reality
  • knowing when more analysis will help vs just delay the decision

Curious from people actually doing the work:

  • What part of consulting still feels most judgment-heavy?
  • What makes it hard to do well?
  • Where do teams still rely on experience / instinct more than they’d like?
  • Has AI changed this at all, or just sped up the parts around it?

Not asking about case interviews or prep — more about real project work.

u/Ashamed_Listen_1170 — 2 days ago

🤔🔁⚖️ What product decision gets re-litigated the most in your team?

One thing I keep noticing is that some product decisions don’t really stay decided.

A team agrees on something, time passes, context gets scattered, and then the same debate comes back:

  • why wasn’t this prioritized?
  • why did we build it this way?
  • who agreed to this tradeoff?
  • what evidence did we have at the time?
  • was this a user problem, a stakeholder push, or just a guess?

Then the PM ends up digging through old Slack threads, notes, tickets, dashboards, or people’s memory just to reconstruct the logic.

I’m curious how this shows up in real teams:

  • What kind of product decision gets re-litigated most often?
  • Why does it keep coming back?
  • Where does the original context usually get lost?
  • How do you handle it today?
  • Is this mostly a documentation problem, a trust problem, or an alignment problem?

Not asking about favorite tools. More interested in the real workflow and where it breaks.

u/Ashamed_Listen_1170 — 2 days ago

⏰🚨 What product truth does PMM usually learn too late?

I’m trying to understand what PMM teams still discover later than they’d like.

Not the obvious things you measure after launch, but the truths that were already there earlier and only became clear once the launch, messaging, or sales motion started missing the mark.

Examples might be:

  • the buyer didn’t understand the category/problem the way we assumed
  • the message was true, but aimed at the wrong moment in the journey
  • sales needed a different story than marketing thought
  • the product experience didn’t fully support the GTM promise
  • the “real” objection wasn’t the one we planned for
  • enablement looked complete, but the field still wasn’t truly ready

I’m curious about the real workflow behind that:

  • What kind of truth does PMM usually learn too late?
  • Where was the signal hiding earlier?
  • What did the team mistake it for at first?
  • How do you try to catch that earlier now?
  • What still feels hardest to know before going live?

Not looking for tool recommendations. More interested in the messy reality of what PMM teams still only learn after the fact.

u/Ashamed_Listen_1170 — 2 days ago

🤔 What part of Customer Success still breaks down even when the team is doing “all the right things”?

I’m trying to understand where CS workflows still fail even when the basics are already in place.

For example, even with things like:

  • onboarding plans
  • weekly check-ins
  • health scores
  • CRM notes
  • support tickets
  • Slack/email follow-up
  • QBRs
  • escalation paths

…it still seems like some problems show up too late, get misread, or keep bouncing between teams.

I’m curious about the real situations behind that.

A few questions:

  • What kind of problem still slips through in your org even when the process looks solid on paper?
  • Where does it usually break first: handoff, context, prioritization, follow-up, customer alignment, internal ownership, or something else?
  • What is the hardest part to detect early: churn risk, onboarding failure, stakeholder misalignment, hidden blockers, product confusion, weak adoption, or something else?
  • What problem takes the most manual effort to understand properly?
  • What’s one example where your team did “everything right” and it still went sideways?

Not asking for tool recommendations.
More interested in the messy reality: what still feels unresolved, and why.

u/Ashamed_Listen_1170 — 2 days ago

👀 After a launch underperforms, how does your team actually figure out why?

I keep hearing that the answer usually doesn’t live in one place.

Some of it shows up in:

  • sales calls
  • win/loss notes
  • support tickets
  • customer success conversations
  • product usage
  • scattered internal feedback

Curious what the actual workflow looks like for PMM teams:

  • who owns connecting those signals?
  • which signals do you trust most?
  • where does the truth usually get lost?
  • how long does it take before you feel confident about the real reason?

More interested in the real process than tool recommendations.

u/Ashamed_Listen_1170 — 4 days ago

🚀 What do PMMs still have to guess before a launch?

I’m curious where product marketing teams still rely more on intuition, scattered feedback, or post-launch learning than they’d like. Examples:

  • whether the message will actually land
  • whether buyers will understand the product quickly
  • whether a launch will create clarity or confusion
  • whether the product experience matches the story told in GTM
  • whether the “real” objections will show up only after launch

Not asking about favorite tools.

More interested in:

  1. what still feels uncertain before launch
  2. how your team tries to reduce that uncertainty today
  3. what usually gets missed until after launch

What’s the hardest thing to know before you go live?

u/Ashamed_Listen_1170 — 4 days ago

🧩 When a funnel drops, how do you get from “where” to “why”?

I’m curious how teams actually handle this in practice.

A lot of teams can identify where drop-off happens in a funnel, but the harder part seems to be figuring out why with enough confidence to act.

I’m not asking about favorite tools. I’m more interested in the real workflow after a drop shows up.

For example:

  • What’s the first step after someone notices the drop?
  • Do you first verify whether the signal is even real (tracking, segment changes, traffic shifts, release issues)?
  • Who usually owns the investigation: analytics, PM, growth, marketing, or whoever has bandwidth?
  • What signals do you trust most after the dashboard: session replays, support tickets, surveys, interviews, experiments, something else?
  • How long does it usually take before your team feels confident enough to act?
  • Where does the process usually break: lack of data, scattered data, unclear ownership, or weak follow-through?
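
For what it’s worth, the “is the signal even real” step above can be sketched as a quick sanity check before anyone starts an investigation. This is a plain one-sided two-proportion z-test with made-up numbers, not anything tied to a specific analytics stack:

```python
from math import sqrt, erf

def drop_is_real(conv_before, n_before, conv_after, n_after, alpha=0.05):
    """One-sided two-proportion z-test: is the observed conversion
    drop bigger than what sampling noise alone would produce?"""
    p1, p2 = conv_before / n_before, conv_after / n_after
    pooled = (conv_before + conv_after) / (n_before + n_after)
    se = sqrt(pooled * (1 - pooled) * (1 / n_before + 1 / n_after))
    z = (p1 - p2) / se  # positive z means conversion went down
    p_value = 0.5 * (1 - erf(z / sqrt(2)))  # P(Z >= z) under the null
    return z, p_value, p_value < alpha

# Hypothetical week-over-week numbers: 5.0% -> 4.0% on 10k sessions each
z, p, significant = drop_is_real(500, 10_000, 400, 10_000)
```

Even when a drop survives a check like this, the tracking/segment/traffic questions above still apply: significance only says the number really moved, not why.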

I keep seeing teams that can find the leaky step, but still struggle to turn scattered signals into a decision people trust.

Would love concrete examples, including messy ones.

u/Ashamed_Listen_1170 — 4 days ago