If your automation needs babysitting, it isn't automation
A workflow that "works" but still needs you to check on it every 20 minutes isn't really automated.
It's just a new job where the human role is: babysit the system.
I think this is where a lot of automation projects quietly go sideways. People measure success by:
- did it run
- did it produce output
- did it avoid crashing
But the real question is: can I stop thinking about it long enough that it actually removes work?
The automations I trust most tend to be the least fancy. They have:
- tight scope
- predictable inputs
- clear fallback rules when something's off
- an obvious kill switch
- logs that make debugging fast
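As a rough illustration of those properties, here's a minimal sketch of what that shape can look like in Python. The job name, kill-switch path, and record fields are all hypothetical, not from any particular system:

```python
import logging
import os

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("sync_job")  # hypothetical job name

# Obvious kill switch: touch this (hypothetical) file and the job stops acting.
KILL_SWITCH = "/tmp/sync_job.disabled"

def run_once(record: dict) -> str:
    # Check the kill switch before every action, not just at startup.
    if os.path.exists(KILL_SWITCH):
        log.warning("kill switch present, skipping run")
        return "skipped"

    # Predictable inputs: validate up front and fall back instead of guessing.
    if "id" not in record or "amount" not in record:
        log.error("malformed record %r, routing to manual review", record)
        return "needs_review"

    # Tight scope: one record, one action, logged either way.
    log.info("processing record %s", record["id"])
    return "done"
```

Nothing clever, but every branch logs what happened, so debugging is reading the log rather than reconstructing state.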
The ones that create the most stress are usually the "smartest" ones. They handle a lot... until they hit a weird edge case, and suddenly you're watching them the way you'd watch a nervous intern on their first day.
I've also noticed people blame the model or the logic when the real problem is the environment: expired sessions, missing fields, API timeouts, duplicate submissions, weird input formats. That stuff doesn't show up in demos, but it shows up constantly in production.
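Two of those environment problems, timeouts and duplicate submissions, can be handled with the same small pattern: retry with backoff, and send the same idempotency key on every attempt so the remote side can deduplicate. A sketch, assuming the target API accepts such a key (the function and parameter names here are made up for illustration):

```python
import logging
import time

log = logging.getLogger("submit")

def submit_with_retry(submit, payload, *, idempotency_key,
                      attempts=3, base_delay=1.0):
    # Retries reuse the SAME idempotency key, so a request that timed out
    # after actually landing server-side won't be applied twice (assuming
    # the API deduplicates on that key).
    for attempt in range(1, attempts + 1):
        try:
            return submit(payload, idempotency_key=idempotency_key)
        except TimeoutError:
            log.warning("timeout on attempt %d/%d", attempt, attempts)
            time.sleep(base_delay * 2 ** attempt)  # simple exponential backoff
    raise RuntimeError(f"gave up after {attempts} attempts ({idempotency_key})")
```

The point isn't the retry loop itself; it's that duplicates and timeouts are designed for before the first demo, not patched in after the first incident.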
For me, an automation earns trust when:
- the cost of a bad action is bounded
- it fails safely, not silently
- exceptions route somewhere useful instead of disappearing
- I'm not forced to babysit it to catch mistakes
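Those criteria can be sketched as a guard around the action loop. This is a hedged example, not a prescription; the per-run cap and the `notify` callback (e.g. posting to a queue or chat channel) are assumptions:

```python
import logging

log = logging.getLogger("guard")

MAX_ACTIONS_PER_RUN = 50  # hypothetical cap: bounds the cost of a bad run

def guarded_run(items, act, notify):
    # Bounded blast radius: an unexpectedly huge batch is itself a signal
    # that something upstream is wrong, so refuse and escalate.
    if len(items) > MAX_ACTIONS_PER_RUN:
        notify(f"refusing to process {len(items)} items (cap {MAX_ACTIONS_PER_RUN})")
        return []
    results = []
    for item in items:
        try:
            results.append(act(item))
        except Exception as exc:
            # Fail safely, not silently: log it AND route it somewhere a
            # human will actually see, then keep going.
            log.exception("action failed for %r", item)
            notify(f"action failed for {item!r}: {exc}")
    return results
```

With that shape, a bad day means a few notifications in a queue, not an unbounded stream of wrong actions you only discover by babysitting.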
The "boring but reliable" build almost always outlasts the "impressive but fragile" one.
Curious where other people draw the line. At what point does an automation go from "cool demo" to something you'd actually trust running unsupervised?