u/Aware-Tap-8838

I've been using Claude Code for Apex/LWC work for about 6 months, and I kept seeing the same Salesforce-specific mistakes in generated code. After catching them repeatedly in review, I started building a guardrail layer around it: pre/post-generation agents, required domain knowledge checks, and a few hard rules. Before I overfit this to my own team, I’d really like a sanity check from people who actually ship Salesforce code.

The 5 failure modes I kept seeing most often:

  1. Missing `with sharing`
  2. Business logic directly inside triggers
  3. No FLS / CRUD checks
  4. Test classes with few or no meaningful assertions
  5. No awareness of Order of Execution conflicts with existing Flow / Validation Rule automation
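
For concreteness, this is roughly the shape I'm trying to get generated code to default to for #1–#3: a thin trigger that only delegates, and a `with sharing` handler that checks FLS before writing. Object, field, and class names here are just illustrative, not from any real org:

```apex
// AccountTrigger.trigger — no business logic in the trigger itself (#2)
trigger AccountTrigger on Account (before insert, before update) {
    AccountTriggerHandler.handle(Trigger.new);
}

// AccountTriggerHandler.cls — runs with sharing (#1), checks FLS first (#3)
public with sharing class AccountTriggerHandler {
    public static void handle(List<Account> newRecords) {
        // FLS check before touching the field (#3)
        if (!Schema.sObjectType.Account.fields.Rating.isUpdateable()) {
            throw new NoAccessException();
        }
        // Bulkified: one pass over all records, no SOQL/DML inside the loop
        for (Account acc : newRecords) {
            acc.Rating = 'Warm';
        }
    }
}
```

(For read paths I'd lean on `Security.stripInaccessible` instead of per-field describes, but the describe check is the pattern that's easiest for a pre-generation agent to verify mechanically.)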

What I’m trying to figure out:

- Are these genuinely common failure modes, or am I overfitting to my team?

- Is bulkification / recursion control the next thing worth enforcing, or do most teams already handle that via trigger framework conventions?

- How strict do you make AI guardrails before they start hurting iteration speed too much?

- If you had to assume a default trigger pattern, what would you pick today: fflib, kevinohara80-style TriggerHandler, homegrown, or no framework?
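
On the bulkification/recursion question, the minimal guard I'm currently leaning toward looks roughly like this: an Id-set guard rather than a single boolean flag, so later 200-record chunks of a bulk load aren't silently skipped when the flag is already set. Again, names are illustrative:

```apex
// Hypothetical recursion guard: remembers which record Ids this
// transaction has already processed, so a Flow/workflow field update
// that re-fires the trigger doesn't run the logic twice.
public with sharing class TriggerRecursionGuard {
    private static Set<Id> processedIds = new Set<Id>();

    // Returns only the records not yet handled in this transaction,
    // and marks them as handled.
    public static List<SObject> firstRunOnly(List<SObject> records) {
        List<SObject> result = new List<SObject>();
        for (SObject rec : records) {
            if (rec.Id == null || !processedIds.contains(rec.Id)) {
                if (rec.Id != null) {
                    processedIds.add(rec.Id);
                }
                result.add(rec);
            }
        }
        return result;
    }
}
```

Curious whether people consider something like this table stakes or whether the trigger framework already owns it for most teams.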

If helpful, I can share the repo / guardrail setup in the comments, but I left it out here because I’m mainly trying to validate whether the problem framing is right.
