We just wrapped a $1.2M enterprise AI deployment — here's what nobody tells you about sizing and phasing these builds
Spent the last 14 months on a $1.2M enterprise AI program for a US mid-market client: workflow automation, predictive analytics, and decision support layered into their core ops.
Wanted to share what I learned because most of the "how to scope an AI project" advice out there is either fluff or written by people who've never actually shipped one.
Some context first: the buy-side massively underestimates how much of a $1M+ build is NOT the model.
People hear "AI deployment" and picture data scientists tuning hyperparameters. The actual cost breakdown on our $1.2M program looked like this:
- Data engineering and integration: ~35%
- Compliance, security review, audit prep: ~20%
- Model dev and MLOps: ~20%
- Change management + user training: ~15%
- Contingency and rework: ~10%
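For anyone sanity-checking their own budget, here's the breakdown above as a quick script (shares are from our program; swap in your own total):

```python
budget = 1_200_000
breakdown = {
    "data engineering & integration": 0.35,
    "compliance, security, audit prep": 0.20,
    "model dev & MLOps": 0.20,
    "change management & training": 0.15,
    "contingency & rework": 0.10,
}
# Translate percentage shares into dollar line items
for item, share in breakdown.items():
    print(f"{item}: ${budget * share:,.0f}")
```

At $1.2M that puts data engineering alone at $420K, which is the number that tends to shock first-time buyers.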
The compliance line is what surprises everyone. If you're touching anything regulated in the US (financial services, healthcare, employment decisions, anything that trips the state-level AI disclosure laws now rolling out), expect 15-25% of your budget to go to legal review, model documentation, bias testing, and audit-trail tooling.
Skip it and your go-live gets pushed by months when InfoSec finally reviews the system.
On phasing: the single biggest mistake I see is trying to ship workflow automation + predictive analytics + decision support in one big bang.
Don't. Here's the phasing that actually worked:
Phase 1 (months 1-4): Workflow automation only. RPA-style stuff, no ML. Get the data flowing through clean pipelines. This phase pays for itself and builds organizational trust.
Phase 2 (months 5-9): Layer in predictive analytics. Now you have clean data because Phase 1 forced you to fix it. Models train faster and perform better.
Phase 3 (months 10-14): Decision-support / recommendations. Highest-risk piece because it puts AI output in front of humans making real decisions.
By now you have user trust, model performance data, and the compliance scaffolding to back it.
Trying to do all three at once means your data quality issues in Phase 1 cascade into bad model performance in Phase 2, which torpedoes user adoption in Phase 3. We've done that once. Won't again.
Sizing rule of thumb that's held up across our last few engagements: if a vendor or internal team quotes you under $400K for a real enterprise-wide AI deployment (not a chatbot pilot), they're either undercounting compliance/integration or you're not actually deploying enterprise-wide.
Real $1M+ builds almost always end up between $1.1M and $1.8M when you include the things vendors leave out of the SOW.
A few other things nobody talks about:
The "we'll figure out monitoring later" trap. Model drift will hit you in month 8 of production and if you don't have monitoring infra from day one you're flying blind. Budget $50-100K for observability tooling and don't cut it.
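One concrete thing to put in that observability budget is a drift check on input features. A common starting point (not what our stack was, just a standard technique) is the Population Stability Index, comparing a live feature distribution against the training baseline. Minimal sketch:

```python
import numpy as np

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline (training-time)
    and a live feature distribution. Rough convention: < 0.1 stable,
    0.1-0.25 drifting, > 0.25 significant drift."""
    # Bin edges come from the baseline distribution's quantiles
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range live values
    e_pct = np.histogram(expected, edges)[0] / len(expected)
    a_pct = np.histogram(actual, edges)[0] / len(actual)
    # Floor the percentages to avoid log(0) on empty bins
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))
```

Run it per feature on a schedule and alert on the threshold; that's a weekend of work in month one versus a fire drill in month eight.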
Buy vs build for the orchestration layer. We built ours. Wouldn't again. Off-the-shelf tools have caught up enough that custom orchestration only makes sense if you have very specific compliance requirements vendors can't meet.
The CFO question. Every program over $1M will have a moment around month 6-8 where someone in finance asks "are we sure this is worth it?" Have your ROI tracking baked in from day one: cycle time reductions, error rate drops, hours saved per role.
We tracked from week 2 and had a clean story when the question came.
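The math behind that story doesn't need to be fancy. A back-of-envelope payback calculation (illustrative numbers below, not our client's actuals) is usually enough for the finance conversation:

```python
def payback_months(program_cost, hours_saved_per_month, loaded_hourly_rate,
                   error_cost_avoided_per_month=0.0):
    """Months until cumulative savings cover the program cost,
    assuming a constant monthly run rate (a simplification)."""
    monthly_savings = (hours_saved_per_month * loaded_hourly_rate
                       + error_cost_avoided_per_month)
    return program_cost / monthly_savings

# Hypothetical: a $1.2M program saving 2,000 hours/month at a $60
# loaded rate, plus $30K/month in avoided error cost
print(payback_months(1_200_000, 2_000, 60, 30_000))  # -> 8.0 months
```

The point isn't precision, it's that you're tracking the inputs (hours, error rates) from week 2 so the number is defensible when the question lands.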
The other underrated piece: change management.
We spent $180K on training, comms, and adoption work. Felt expensive at the time. In hindsight it's the only reason the decision-support module didn't get quietly ignored by the people who were supposed to use it.
AI that nobody actually uses is the most expensive failure mode in this space and it's almost always preventable with proper rollout work.
Happy to answer questions if anyone's mid-build or scoping one. The biggest favor I can do for anyone reading this: budget for the unsexy stuff.
The data work, the compliance work, the change management. That's where $1M+ programs succeed or die. The model itself is almost always the easy part.