u/heizen_91

▲ 2 r/procurement+1 crossposts

"AI-driven sustainability" is in every supply chain deck right now. The math is quietly falling apart.

For the last 18 months, "sustainable AI" has shown up in nearly every supply chain pitch deck circulating in the enterprise market. The argument is clean: AI ingests supplier data, models emissions, surfaces hot spots, automates decarbonization. The chart goes up and to the right. The CSO sleeps better. Procurement gets a dashboard.

That argument is coming apart in operations. Worth being honest about before the next budget cycle.

A few numbers that don't reconcile:

  • Scope 3 emissions account for ~80% of the typical company's footprint. Only ~10% of companies measure them with audit-grade accuracy (MIT Sloan; EcoVadis 2026).
  • AI-focused operations are projected to draw close to 90 TWh of electricity in 2026 — nearly a 10x jump from 2022 (WEF, Feb 2026).
  • A February 2026 industry review found 74% of AI-climate benefit claims could not be substantiated.

Supply chain leaders are caught between two opposing pressures. The board wants AI-led decarbonization. The data infrastructure underneath isn't built to support the claims being made on top of it.

What's actually happening on the ground

The pattern is consistent across enterprise CPG and industrial operators:

  1. A sustainability mandate lands from the board, often well ahead of CSRD or CBAM deadlines.
  2. Teams build a Scope 3 baseline from supplier surveys, industry-average emission factors, and a thin layer of actually-measured data. Confidence intervals are quietly enormous.
  3. An AI platform — sometimes a startup, sometimes a Tier 1 module — gets layered on top to "improve data quality."

A year in, three things are usually true:

  • Supplier survey response rates plateau well below 50%, so the model is still feeding on industry averages dressed up as primary data.
  • The AI's measurable value concentrates in two narrow places — route optimization and energy anomaly detection at owned facilities. These were already the easiest emissions to attack.
  • The harder questions — raw material substitution, supplier mix shifts, packaging redesign — are still being decided by humans in a meeting room. The AI doesn't help much because the data underneath isn't trustworthy enough.

The regulatory clock has shifted underneath all of this. CBAM left its transitional phase on January 1, 2026 — importers of covered goods now pay for actual certificates. CSRD is live for first-wave companies. Gartner expects 70% of technology sourcing leaders to carry sustainability-aligned performance objectives by 2026. The pressure has moved from the CSO down to procurement and operations, just as the data infrastructure is being asked to do real work for the first time.

Why this is structural, not incidental

This is a sequencing problem, not an execution problem.

Most enterprise supply chains weren't built to emit auditable carbon data. They were built to emit auditable cost and service data. ERP fields, master data hierarchies, supplier onboarding flows — all exist to answer "what did we pay, when did we receive it, did we hit the SLA." Carbon is a derivative metric, calculated downstream by a different team, using different system extracts, against emission factors maintained in a fourth place. Errors compound at every join.

AI is good at modeling on top of a clean substrate. It is bad at fixing the substrate. When the input is a supplier-reported figure that mixes plant-level allocations across three product families, the most sophisticated model produces a confident-looking number that does not survive an audit.

There's a second-order issue almost nobody is pricing in. The compute behind enterprise sustainability AI is non-trivial, and the embodied emissions of the model — training, hosting, inference — sit inside Scope 3 of the vendor, which becomes Scope 3 of the customer. Recent Nature Sustainability work on net-zero pathways for AI servers makes this concrete: data center electricity, water for cooling, hardware refresh cycles all show up in someone's value chain. The accounting standards aren't yet harmonized, so it just disappears for now. That won't last.

What the industry isn't saying out loud

Two things.

First, the most credible AI-driven sustainability work in supply chains today is narrow on purpose. The teams producing real, defensible reductions have stopped trying to model an entire enterprise's Scope 3 footprint with one tool. They pick one or two emissions categories — typically inbound freight or specific raw material flows — instrument those properly, and let AI do the optimization work only where the data is trustworthy. The grand "end-to-end emissions intelligence" pitches haven't held up under audit. The narrow ones have.

Second, the industry is not yet pricing the carbon cost of the AI itself into the cost-benefit case. Vendors quote avoided emissions; almost none quote the embodied emissions of the platform delivering them. As CBAM widens its product scope and CSRD audit pressure increases, "what is the net carbon position of running this AI?" will start showing up in procurement reviews. Most current vendor disclosures are not ready for that question.

Where this leaves operators

The interesting work in 2026 isn't picking an AI-driven sustainability platform. It's deciding which two or three emissions decisions in a given supply chain are worth instrumenting properly first, what data infrastructure those decisions actually require, and where AI genuinely improves the decision over a human with a well-built dashboard.

The mandate shifted. The substrate didn't. Whichever supply chains close that gap first will hold a meaningful advantage when the next regulatory wave lands.

Genuinely curious what people here are seeing:

  • For anyone running a Scope 3 program — what's your supplier survey response rate honestly looking like, and how are you handling the gap?
  • For anyone who's deployed an AI sustainability platform — has it produced an emissions reduction that survived audit, or is it still mostly dashboards?
  • For procurement folks — are sustainability KPIs actually showing up in your performance objectives yet, or is that still a 2027 problem?
  • And the uncomfortable one: is anyone tracking the embodied emissions of their AI stack as part of their Scope 3, or is that just being ignored until regulators force it?

Not selling anything. Just trying to compare notes because the marketing on this category is making it harder, not easier, to figure out what's real.

reddit.com
u/heizen_91 — 9 hours ago
▲ 2 r/procurement+1 crossposts

Loop just raised $95M Series C, and the real story isn't the money. It's where SC AI capital is no longer flowing.

A logistics AI company raising a $95M Series C in this market is itself news. But the more interesting question is what the round isn't, and what that tells you about where supply chain AI is heading.

This round isn't going to a copilot. It isn't going to an "AI-powered visibility platform." It isn't going to a forecasting startup. It's going to a company that started in freight audit/payment workflows and is openly positioning toward autonomous replenishment. That positioning shift is the signal, not the dollar number.

Reading the tea leaves on what the smart money is now buying in SC AI:

1. The copilot wave is functionally over as a fundable category. The 2023–2024 vintage of "AI for supply chain" was almost entirely copilots. Chat-with-your-data, GenAI-on-top-of-the-TMS, conversational planning assistants. A lot of them shipped, some got real revenue, but very few crossed the chasm into mission-critical workflows. VCs have basically stopped writing growth checks into that category. The market made its decision: copilots are a feature, not a company.

2. Capital is flowing to the system-of-action layer. The companies raising real money now are the ones that don't just show you a recommendation — they do the work. Execute the rebook. Run the replenishment cycle. Trigger the supplier order. Close the invoice mismatch. The product is the action. This is the pattern across the last few SC AI rounds, not just Loop.

3. The land-and-expand vector is changing. Old playbook: start with visibility/observability, expand into recommendations, eventually try to get to decisions. That motion is dead for new entrants because incumbents already own visibility. New playbook: start in a narrow, high-frequency execution workflow (freight audit, invoice matching, expedite booking, tail-spend sourcing), prove autonomous execution there, then expand upstream into the decisions that drive those workflows. Loop's freight-audit → autonomous-replenishment trajectory is a textbook version of this.

4. The "boring back-office" is suddenly the prize. Five years ago, AP/AR automation, freight audit, claims processing, invoice reconciliation were unsexy back-office categories with mid-cap private equity buyers, not venture money. Now they're hot because they're (a) high-volume, (b) high-frequency, (c) rules-heavy with enough exceptions to be hard, and (d) directly adjacent to working capital. That's exactly where agents create disproportionate value. Capital follows.

5. Multi-workflow ambition is back in fashion. For a while, vertical SaaS orthodoxy said pick one workflow and dominate it. The current round of SC AI fundraising rewards companies that have a credible path from one workflow into adjacent ones — because the underlying agent infrastructure is reusable across them. A freight audit company moving into replenishment isn't doing scope creep; it's doing the obvious thing once you have the data and the action layer.

What this should change in enterprise SC leaders' roadmaps:

  • If your 2026 RFP for supply chain AI is still scored on "forecast accuracy" and "dashboard quality," you're going to buy yesterday's category at tomorrow's prices.
  • The new RFP scoring criteria worth borrowing: % of decisions executed autonomously, time-to-action, exception rate, override rate, dollars of working capital actually moved.
  • Build vs. buy on autonomous execution is genuinely hard right now. The platforms aren't mature enough to buy off the shelf for every workflow, but they're too capital-intensive to build internally for most enterprises. The middle path most large companies are landing on: buy autonomy for high-frequency execution workflows, build orchestration in-house, keep strategic decisions human-owned.
  • Watch for the incumbent response. The big SCM/TMS vendors are going to acquire their way into this. Anyone with $200M+ in ARR and an "autonomous" angle is now an acquisition target.

The losers in this shift, roughly in order:

  • Pure-play forecasting and visibility startups still trying to raise at 2022 multiples.
  • Legacy planning suites that took five years to bolt on "AI" as a marketing layer and didn't change the underlying architecture.
  • Internal data science teams that spent three years building beautiful predictive models nobody operationalized.

The winners:

  • Companies that started in a narrow execution workflow and are credibly expanding.
  • Enterprises that move early on agent-led workflows in the back office and free up working capital before their competitors.
  • Operators (mid-career SC and procurement professionals) who learn to design agent guardrails and supervise autonomous workflows. This is going to be the most valuable skill in the function over the next 36 months.

Genuinely curious what folks here read into the round:

  • For anyone in SC AI venture / corp dev — what's the deal flow look like right now? Is the autonomous-execution thesis as concentrated as it looks from the outside, or am I seeing a pattern that isn't there?
  • For practitioners — are you actually seeing the pitch evolve from "copilot for your team" to "agent that runs the workflow"? Or is it still mostly rebranded copilots?
  • For anyone at one of the incumbents — what's the internal urgency level on this? Is this a "we'll acquire our way in" conversation or a "we need to rebuild" one?

Not commenting on Loop specifically — they're one data point. The category shift is the actual story.

u/heizen_91 — 9 hours ago

Predictive AI in supply chain peaked in 2024. Agentic is eating it, and most vendors won't say it out loud.

Bit of a hot take, but the more time I spend in supply chain rooms the more confident I am: predictive AI as a standalone category in supply chain has roughly 18–24 months left as a buying motion. It's already losing to agentic, and the transition is going to be brutal for a lot of vendors.

Quick definitions because everyone uses these terms interchangeably and it makes conversations useless:

Predictive AI = looks at data, produces a number or a flag. Demand forecast, lead time prediction, anomaly score, supplier risk rating, ETA prediction. Output is information. A human or another system decides what to do with it.

Agentic AI = takes goals and constraints, makes decisions, executes actions, and adapts. Runs the replenishment cycle, negotiates with suppliers within guardrails, reroutes shipments, raises POs, resolves invoice mismatches. Output is action, not information.

The reason predictive is getting eaten isn't that the predictions got worse. They got better. The reason is that prediction-without-action was always the worse half of the value chain, and we collectively spent five years pretending it wasn't.

Here's the pattern I keep seeing:

The forecast was never the bottleneck. Companies that deployed best-in-class ML forecasting in 2022–2024 got their MAPE down meaningfully and then... didn't capture most of the value. Why? Because the downstream planners still overrode the model, the buyers still used their gut, the S&OP meeting still ran the same way. The forecast got better. The decisions didn't. Agents close that loop by actually executing on the prediction.

The exception queue ate the savings. Predictive systems generate alerts. Risk alerts, anomaly alerts, deviation alerts. In production, the exception queue at most enterprise SC teams runs into the thousands per week. Humans triage maybe 10%. The other 90% are noise or get ignored. Agents don't generate alerts for humans — they handle exceptions themselves and escalate only the truly novel ones. Same prediction quality, 10x the realized value.
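To make the triage idea concrete, here's a toy sketch of that policy in Python. The exception categories, thresholds, and queue are all invented for illustration — not any vendor's actual logic — but the shape is the point: the agent resolves the routine cases itself and escalates only what it can't classify.

```python
# Toy triage policy: auto-handle routine exceptions, escalate novel ones.
# Categories and thresholds are invented for illustration.
def triage(exception):
    kind = exception["kind"]
    if kind == "invoice_mismatch" and exception["delta_usd"] < 50:
        return "auto_resolve"   # write off within tolerance
    if kind == "eta_slip" and exception["days"] <= 2:
        return "auto_rebook"    # agent reroutes within guardrails
    return "escalate"           # novel or high-impact -> human

queue = [
    {"kind": "invoice_mismatch", "delta_usd": 12},
    {"kind": "eta_slip", "days": 1},
    {"kind": "eta_slip", "days": 9},
    {"kind": "supplier_bankruptcy"},
]
actions = [triage(e) for e in queue]
escalated = actions.count("escalate") / len(actions)
print(actions, f"escalated: {escalated:.0%}")
```

In this toy queue, half the exceptions never reach a human — which is the whole argument: same prediction quality upstream, far fewer alerts downstream.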

Predictions degrade in volatile environments. Agents adapt. A demand forecast trained on 2019–2023 data is in trouble right now. Tariff whiplash, geopolitical reshuffling, channel mix shifts — the world doesn't look like the training distribution. Predictive systems quietly get worse and the org doesn't notice until inventory blows up. Agentic systems can re-plan in real time against current state, not historical patterns.

The buying motion changed. CFOs and COOs are no longer impressed by "we improved forecast accuracy by 15%." They've heard it. They want to hear "we removed 40% of manual touches from the procure-to-pay cycle" or "we cut expedite freight by $8M because the agent reroutes autonomously." Predictive value props don't land in 2026 budget conversations. Agentic ones do.

What this actually looks like on the ground:

  • Demand planning teams that used to be 30 people running a forecasting platform are becoming 8 people overseeing an agentic planning system that uses a forecast internally but isn't sold to leadership as a forecasting tool.
  • Procurement category teams that used to run sourcing events on a digital platform are letting agents run the events end-to-end on category tail spend, with humans only on strategic categories.
  • Logistics control towers that used to be visualization dashboards are becoming decision engines — the agent reroutes, the dashboard just shows you what it did.
  • Supplier risk platforms that used to push alerts to procurement are now triggering auto-mitigation flows (dual-source activation, contract clause invocation, inventory rebalancing) before the human even sees the risk.

In every case: the prediction is still happening underneath. But the prediction is no longer the product. The action is the product.

The vendors most at risk are the ones who built pure prediction platforms with a thin "recommendation" layer on top. Those are about to look like reporting tools. The vendors that win will be the ones whose product is the agent — and prediction is just a service inside it.

A few uncomfortable implications:

  • If your supply chain AI roadmap for 2026 still has "improve forecast accuracy" as a top-three initiative, you're solving last decade's problem.
  • The skills gap is widening fast. Demand planners and category managers need to learn to design agent guardrails, not tune forecasts.
  • The vendor consolidation is going to be wild. Half the "AI supply chain" companies funded between 2021 and 2024 are sitting on predictive-only architectures.

Counter-arguments I'd expect, because I keep hearing them:

"Agentic isn't ready for production." For some workflows, true. For tail-spend procurement, invoice matching, replenishment of A/B class SKUs, transportation rebooking — it's already in production at scale at multiple Fortune 500s.

"You still need predictions inside the agent." Yes, obviously. The point isn't that prediction goes away. It's that prediction stops being the product you buy or the team you build.

"Humans need to stay in the loop." For strategic decisions, absolutely. But "human in the loop" is becoming "human on the loop" — supervising, setting policy, handling exceptions. Not approving every PO.

Genuinely curious what folks here think:

  • For practitioners — is your org actively moving budget from predictive projects to agentic ones, or is it still being sold as additive?
  • For anyone at a forecasting/predictive vendor — what's the internal conversation about this? Are you repositioning, or doubling down?
  • For consultants — what percentage of your current SC AI engagements are predictive vs. agentic vs. mixed? Curious how fast the mix is shifting.

And the meta-question: am I overcalling this? Is there a scenario where predictive holds its ground as a standalone category, or is the writing on the wall?

u/heizen_91 — 9 hours ago
▲ 0 r/CFO+1 crossposts

CFOs are quietly panicking about tariff whiplash, and supply chain is the only function that can actually answer their questions

Spent the last few weeks in rooms with three different CFOs at mid-to-large industrials. Different sectors, different geographies. Same conversation, almost word for word:

"I cannot tell my board what our margin looks like next quarter, because I don't know what the tariff schedule will be next month. And nobody in my organization can model it fast enough for me to make a decision before it changes again."

That's the actual problem right now. Not tariffs themselves — companies have dealt with tariffs forever. It's the cadence. Policy is changing on weekly timescales, but enterprise planning still runs on quarterly cycles. The gap is where margin goes to die.

Some numbers that have been making the rounds in finance circles:

  • A 10% shift in landed cost on a single major input can swing operating margin 200–400bps for industrial manufacturers. That's a board-reportable event.
  • The average S&OP cycle is 4–6 weeks. Tariff announcements are now landing inside that window, sometimes twice.
  • Working capital tied up in pre-tariff buffer inventory has become a real line item in finance reviews. I've seen it called "policy hedge inventory" in one company's internal docs.
  • The cost of being wrong on a single sourcing decision has gone up 5–10x compared to pre-2024 baselines because reversals are slow and expensive.
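The first bullet is simple arithmetic worth spelling out. A sketch, with all figures invented for illustration (a manufacturer at 75% COGS intensity, one input at 30% of COGS):

```python
# Illustrative arithmetic only -- every figure here is made up.
def margin_swing_bps(revenue, cogs, input_share, landed_cost_shift):
    """Swing in operating margin (basis points) from a landed-cost shift
    on one input. input_share is the input's fraction of COGS;
    landed_cost_shift is the fractional cost change (e.g. 0.10 for 10%)."""
    extra_cost = cogs * input_share * landed_cost_shift
    baseline_margin = (revenue - cogs) / revenue
    shocked_margin = (revenue - cogs - extra_cost) / revenue
    return (baseline_margin - shocked_margin) * 10_000

# 10% landed-cost shift on an input that is 30% of a 75%-COGS structure:
print(margin_swing_bps(revenue=1_000, cogs=750, input_share=0.30,
                       landed_cost_shift=0.10))  # ~225 bps
```

Under those assumptions the swing lands squarely in the 200–400 bps range — a board-reportable move from a single input.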

So CFOs are asking questions supply chain has never been built to answer in real time:

  • If Mexico tariffs go to 25% next month, what happens to gross margin by product line?
  • If China steel duties drop and Vietnam stays flat, where should we shift volume, and how fast can we actually do it?
  • What's our exposure on contracts signed at current landed cost if duties move 15%?
  • How much working capital is locked up in tariff-driven buffer stock, and what's the carrying cost?
  • If we lose our Canadian supplier overnight, what's the 30/60/90-day P&L impact?

The honest answer in most companies right now is: we don't know, and we'll get back to you in three weeks with a deck. By then the tariff has changed twice.

This is what's driving the quiet rise of scenario-simulating supply chains. The idea isn't new — Monte Carlo, digital twins, agent-based modeling have all existed for years. What's changed is the urgency and who's funding it. It used to be a supply chain VP's pet project. Now it's a CFO line item.

A few things I'm seeing companies actually do:

1. Tariff exposure dashboards owned by FP&A, not supply chain. The data lives in supply chain systems, but the surface where the CFO interacts with it is owned by finance. This sounds like a small org change. It isn't. It's the only way the answers get used.

2. Pre-built scenario libraries. Instead of building a custom model when a tariff announcement hits, companies are pre-modeling 20–50 plausible policy scenarios in advance. When news drops, you're picking from a library, not building from scratch. Cuts response time from weeks to hours.

3. Probabilistic sourcing decisions. Instead of "we will dual-source from Vietnam," it's "we will hold optionality on three regions and shift volume dynamically based on landed cost and lead time, re-evaluated monthly." This requires contracts that didn't exist five years ago.

4. Margin-at-risk reporting alongside VaR. Treasury has been doing Value-at-Risk on FX and rates forever. Supply chain is starting to produce the equivalent for input costs. CFOs love it because it speaks their language.

5. Quarterly board reporting that includes scenario fan charts. Not point forecasts. A spread. "Here's our base case operating margin, and here's the P5–P95 band given tariff volatility." Some boards are starting to require this.
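Items 4 and 5 are less exotic than they sound. A toy Monte Carlo margin-at-risk run — the tariff distribution and cost structure below are entirely invented, and a real model would simulate far more than one input — but this is the shape of the P5–P95 band some boards are starting to ask for:

```python
# Hedged sketch of a margin-at-risk fan: all distributions invented.
import random
import statistics

def simulate_margin(n_trials=10_000, seed=42):
    """Monte Carlo distribution of operating margin under tariff uncertainty."""
    rng = random.Random(seed)
    revenue, base_cogs = 1_000.0, 750.0
    tariffed_input_share = 0.30        # share of COGS exposed to the tariff
    margins = []
    for _ in range(n_trials):
        # Assumed tariff-rate distribution: centered on 10%, floored at zero.
        tariff = max(0.0, rng.gauss(0.10, 0.08))
        cogs = base_cogs * (1 + tariffed_input_share * tariff)
        margins.append((revenue - cogs) / revenue)
    return margins

margins = simulate_margin()
q = statistics.quantiles(margins, n=20)        # cut points in 5% steps
p5, p50, p95 = q[0], statistics.median(margins), q[-1]
print(f"P5 {p5:.1%}  median {p50:.1%}  P95 {p95:.1%}")
```

The deliverable to the board is the spread, not the point estimate — "base case X%, P5–P95 band of Y to Z given tariff volatility."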

The companies that figure this out get a real edge. The ones that don't keep getting blindsided every six weeks and burning working capital on reactive buffer inventory.

Curious what folks here are seeing. A few specific questions:

  • For anyone in FP&A or supply chain finance — is your CFO asking these questions, and who in the org actually owns the answer?
  • Has anyone built a scenario library that actually got used in a real decision, or is it shelfware?
  • For consultants / vendors — what's the realistic build vs. buy on this? Every major SCM platform claims scenario simulation now and most of it seems thin.
  • And the uncomfortable one: how much of the "AI scenario planning" being sold right now is just a Monte Carlo wrapper on a forecast?

Not pitching anything, just trying to compare notes. The vendor marketing on this is so loud right now that the actual practitioner reality is hard to find.

u/heizen_91 — 9 hours ago
▲ 1 r/Procurement_HI_AI+1 crossposts

AI demand forecasting actually works — but 80% of enterprise rollouts fail before they prove it. Here's what I keep seeing.

I've now sat through enough "we tried AI forecasting and it didn't work" conversations to notice a pattern. The technology isn't the problem. The rollout is.

Some context: modern ML forecasting (gradient boosting ensembles, temporal fusion transformers, hierarchical models with reconciliation) consistently beats classical methods by 15–40% on MAPE for the SKUs where it should — mid-velocity, seasonal, promotion-driven, multi-channel. That's been settled science for years now. The M5 competition basically closed the debate.
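For anyone who wants the comparison mechanics spelled out, here's a minimal MAPE comparison between a classical and an ML forecast. The demand numbers are invented to land inside the 15–40% improvement band, purely to show how the claim is computed:

```python
# Invented numbers, illustrating the MAPE-improvement calculation only.
def mape(actuals, forecasts):
    """Mean absolute percentage error over nonzero actuals."""
    errs = [abs(a - f) / abs(a) for a, f in zip(actuals, forecasts) if a != 0]
    return sum(errs) / len(errs)

actuals   = [100, 120,  80, 150, 110]
classical = [ 90, 140, 100, 120, 130]   # e.g. exponential smoothing
ml_model  = [ 92, 135,  92, 130, 122]   # e.g. gradient boosting ensemble

m_classical = mape(actuals, classical)
m_ml = mape(actuals, ml_model)
improvement = 1 - m_ml / m_classical
print(f"classical {m_classical:.1%}, ML {m_ml:.1%}, improvement {improvement:.0%}")
```

Note this only measures accuracy, not value — which is exactly the trap the rest of this post is about.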

And yet, most enterprise pilots stall. Not because the models are bad. Because of stuff like this:

1. Starting with the wrong SKUs. Teams almost always pilot on either their top 20 runners (where classical forecasting is already 95% accurate and AI has no room to win) or their long-tail intermittent items (where no model wins because there's no signal). The sweet spot — the 60% of SKUs in the middle that drive most of your working capital pain — is where you should pilot. Nobody does this.

2. No causal features, just history. I've seen six-figure ML platforms deployed where the only input is shipment history. That's expensive ARIMA. The whole point of ML forecasting is that you can shove in promotions, pricing, weather, web traffic, competitor stockouts, macroeconomic signals, and let the model figure out the interactions. If your data team can't get promo calendars and price changes into the feature store, don't bother starting.

3. Forecasting at the wrong grain. Forecasting daily SKU-store when the business plans weekly SKU-DC is a guaranteed disappointment. The model can be technically more accurate and still useless because nothing downstream can consume it. Hierarchical reconciliation is the unsexy part of the stack that actually decides whether the forecast ships.
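A toy version of the bottom-up roll-up, to show what "reconciliation" means in the simplest case. The store-to-DC mapping and forecasts are invented; real systems also reconcile top-down and middle-out, which is where it gets hard:

```python
# Bottom-up reconciliation sketch: daily SKU-store forecasts rolled up
# to the weekly SKU-DC grain the planners actually consume.
from collections import defaultdict

store_to_dc = {"S1": "DC-East", "S2": "DC-East", "S3": "DC-West"}

# (sku, store, day) -> daily forecast units (all figures invented)
daily = {}
for store, units in {"S1": 10, "S2": 5, "S3": 8}.items():
    for d in range(7):
        daily[("SKU-A", store, d)] = units

weekly_sku_dc = defaultdict(float)
for (sku, store, _day), units in daily.items():
    weekly_sku_dc[(sku, store_to_dc[store])] += units

# Bottom-up guarantees coherence: each DC total is exactly the sum of
# the store forecasts beneath it, so planning and execution agree.
print(dict(weekly_sku_dc))  # DC-East: 105.0, DC-West: 56.0
```

The point isn't the aggregation itself — it's that if the model's native grain can't be rolled up coherently to the grain of the buy decision, nothing downstream can consume it.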

4. No one owns the override behavior. Demand planners will override the model. That's fine — sometimes they should. But if you don't measure override accuracy vs. model accuracy and feed that back, you end up with a forecasting system that's actually just a planner's gut with extra steps. Half the orgs I've seen don't track this at all.
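The override-tracking loop is genuinely a few lines of code, which makes it striking how few orgs run it. A sketch, with an invented log format — log the model forecast, the override, and the eventual actual, then score which one won:

```python
# Override-tracking sketch; record fields and figures are invented.
def override_win_rate(records):
    """Share of overrides that beat the model (lower absolute error wins)."""
    wins = sum(
        1 for r in records
        if abs(r["override"] - r["actual"]) < abs(r["model"] - r["actual"])
    )
    return wins / len(records)

log = [
    {"model": 100, "override": 120, "actual": 118},  # planner knew about a promo
    {"model": 200, "override": 150, "actual": 195},  # gut feel; model was right
    {"model": 80,  "override": 95,  "actual": 82},   # bias-high habit
    {"model": 60,  "override": 70,  "actual": 71},   # planner context won again
]
print(f"override win rate: {override_win_rate(log):.0%}")
```

Feed that number back to planners monthly and override behavior changes on its own. Without it, you can't tell planner insight from planner bias.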

5. Treating it as an IT project, not a planning transformation. This is the big one. If the S&OP process, KPIs, and incentive structures don't change, the forecast doesn't matter. Planners still get yelled at for stockouts and forgiven for overstock, so they still bias high. Sales still sandbags. The model produces a beautiful unbiased forecast that nobody uses.

A rough playbook that actually seems to work:

  • Pick 1 category, mid-velocity SKUs, 12-week pilot. Not the whole portfolio.
  • Get clean POS + promo + price data into a feature store before picking a model.
  • Run the new model in shadow mode for 6 weeks against the incumbent forecast. Measure MAPE, bias, and forecast value-add (FVA) at the grain that drives the actual buy decision.
  • Define what "good" looks like upfront: e.g., "10pp MAPE improvement on B-class SKUs, with no degradation on A-class." Without this, every result is debatable.
  • Tie forecast accuracy to a working-capital outcome (inventory turns, stockout rate, expedite freight spend). Forecast accuracy alone is not a business case.
  • Build the override-tracking loop on day one, not month six.
  • Plan the S&OP process change in parallel with the model build, not after.

The orgs I've seen do this well aren't the ones with the biggest data science teams. They're the ones where the VP of Supply Chain personally owns the rollout and isn't afraid to change the planning process.

Genuinely curious what folks here have seen. Anyone running ML forecasting in production at scale? What was the thing that almost killed your rollout — was it data, change management, or the model itself? And for anyone earlier in the journey — what's the biggest blocker right now?

Not pitching anything. Just trying to compare notes because the public case studies are all sanitized to uselessness.

u/heizen_91 — 9 hours ago
▲ 1 r/Procurement_HI_AI+1 crossposts

The workforce question no one wants to answer: what happens when AI agents run 60% of procurement?

I've been having a lot of conversations with procurement leaders lately, and there's a topic everyone dances around in public but talks about openly over coffee: agentic AI is about to eat a huge chunk of procurement work, and nobody has a real plan for the people.

Not "AI will augment your team." Not "humans + AI partnership." I'm talking about agents that autonomously run RFQs, negotiate with suppliers within set guardrails, raise POs, chase invoices, flag contract risk, and reconcile three-way matches — end to end, with a human only stepping in for exceptions.

The math gets uncomfortable fast. A typical mid-market procurement org has 60–70% of its headcount on transactional and tactical work: sourcing execution, supplier onboarding, PO management, invoice matching, expediting, basic category analytics. That's exactly the work agents are now demonstrably good at. The remaining 30–40% — strategic sourcing, supplier relationship management, risk, ESG, complex negotiations — still needs humans, but it doesn't need that many humans.

So the honest question: if agents credibly take 60% of the workload in the next 3–5 years, what actually happens?

A few scenarios I keep going back and forth on:

  1. The "everyone moves up the value chain" story. Tactical buyers become category strategists. Sounds great. But not every tactical buyer wants to be — or can be — a strategist. And the math doesn't work: you don't need 50 strategists where you had 50 buyers.
  2. The quiet attrition path. No layoffs, no announcements. Just don't backfill. Hiring freezes for 2–3 years and the org shrinks by 40% through natural turnover. This is probably what most companies will actually do.
  3. The CFO-led contraction. Procurement becomes a 5-person team running 50 agents, reporting into finance. The function as we know it basically disappears.
  4. The supplier-side mirror. This one nobody talks about. If buyers deploy agents, suppliers will too. We end up with bot-on-bot negotiation, and procurement value shifts entirely to whoever designs the better guardrails and incentive structures.

What I haven't seen anywhere yet:

  • A serious workforce transition plan from any major company
  • Honest conversations with procurement teams about what their job looks like in 2028
  • Universities adjusting supply chain curricula for this
  • Any union or professional body (ISM, CIPS) staking out a clear position

Genuinely curious what folks here think — especially anyone working in procurement right now. Are you seeing agentic pilots in your org? Is leadership talking about the headcount implications, or is it all "productivity gains" framing? And if you're early-career in procurement, are you re-thinking your path?

Not trying to be doomer about it. I actually think the work that's left is more interesting. But pretending the workforce shift isn't coming feels like the same mistake retail and customer service made five years ago.

u/heizen_91 — 9 hours ago
▲ 1 r/procurement+1 crossposts

Pilots work, rollouts die — three reasons enterprise AI forecasting programs keep stalling

I've spent the last two years close to enterprise S&OP teams working on AI forecasting rollouts. Pilots usually look great. Rollouts die.

The data is now public on this. Gartner has fewer than 30% of supply chain AI pilots reaching production. MIT's NANDA study in July put 95% of enterprise AI pilots at zero measurable ROI. BCG has 74% of companies failing to extract value from AI investments at scale.

So why does this keep happening?

After enough rollouts, the failure modes are pretty boring and pretty consistent. Posting here because I want to know if others are seeing the same thing.

1. The data pipeline isn't budgeted for.

POS, ERP, weather, macro signals, promo calendars — all in different systems with different cadences and identifiers. Reconciling them is genuinely 40–60% of the real project cost.

Nobody scopes for this. The CFO funds licenses because licenses are easy to approve. They don't fund the integration layer, because no vendor sells "data plumbing redesign" as a SKU. The project ends up underfunded on the one layer that determines whether the model ever sees clean inputs.

2. The planner workflow doesn't change.

You drop an AI forecast into a planning process designed in 2003 and watch it get overridden the first time it disagrees with the planner's gut. I've seen 40%+ override rates at production-stage rollouts.

Here's the part nobody likes to admit. Across 15 years of academic Forecast Value Added research, only about half of manual planner overrides actually improve accuracy. The other half degrade it or are net-neutral.

The standard reaction is to call this a "change management" problem. It isn't. Planners override because they hold context the model doesn't see — promo calls that aren't logged, quality holds, competitor stockouts, customer noise that hasn't propagated. The honest question isn't "how do we reduce overrides" — it's "what context are planners encoding manually that we've failed to encode in the system?"

That's a feature engineering problem. Not a behavioral one.
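
The FVA check itself is almost trivially simple, which is part of why it's damning that so few programs run it. A minimal sketch with invented numbers: for each overridden item, compare the absolute error of the model forecast against the planner's override.

```python
# Minimal Forecast Value Added check. Numbers are illustrative, not real.
records = [
    # (model_forecast, planner_override, actual)
    (100, 130, 125),   # override helped: planner knew about an unlogged promo
    (200, 160, 205),   # override hurt: gut feel pulled the number the wrong way
    (80,  90,  85),    # roughly neutral
]

helped = hurt = neutral = 0
for model, override, actual in records:
    model_err = abs(model - actual)
    override_err = abs(override - actual)
    if override_err < model_err:
        helped += 1
    elif override_err > model_err:
        hurt += 1
    else:
        neutral += 1

print(f"overrides that added value: {helped}/{len(records)}")
print(f"overrides that destroyed value: {hurt}/{len(records)}")
```

Run this weekly against your override log and you know which planners are encoding real context and which are just fighting the model.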

3. It's sold as a platform, not an outcome.

Two-year implementation, seat-based pricing, multi-edition product. Deloitte has enterprise AI payback periods stretching to 2–4 years versus the historical analytics norm of 7–12 months.

By month nine your exec sponsor has rotated, the vendor's roadmap has drifted, and the original business case isn't the case anymore. The contract length is optimal for the vendor's recurring revenue model. It is structurally wrong for a CSCO trying to move inventory dollars in the current planning cycle.

The bigger structural read

These aren't separate problems. They're the predictable output of how enterprise forecasting is bought, built, and governed.

Data lives in IT. The model lives in analytics. The planner sits in supply chain. Inventory accountability sits in ops. The CFO funds the program against a payback case that doesn't include any of the layers that actually determine whether the model reaches the order book.

The metric mismatch is the cleanest tell. Most published AI forecasting case studies report MAPE or WAPE at the SKU-week level. Boards don't fund SKU-week MAPE. They fund inventory turns, service level, working capital, write-down avoidance. With a 40% override rate, the published model accuracy isn't the accuracy that reaches the order book. The number CFOs would actually care about — post-override accuracy — almost no program reports.
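
The gap is easy to show. A sketch with made-up numbers: compute WAPE twice, once on the raw model output (what the case study publishes) and once on what actually reached the order book after overrides.

```python
# Published model WAPE vs post-override WAPE. Illustrative numbers only.
actuals = [120, 80, 200, 150]
model   = [110, 95, 190, 140]   # what the case study reports
final   = [110, 95, 240, 100]   # post-override: the last two were overridden

def wape(forecast, actual):
    """Weighted absolute percentage error across the portfolio."""
    return sum(abs(f - a) for f, a in zip(forecast, actual)) / sum(actual)

print(f"published model WAPE: {wape(model, actuals):.1%}")
print(f"post-override WAPE:   {wape(final, actuals):.1%}")
```

Two overrides are enough to more than double the error in this toy example, and only the first number ever makes it into the vendor deck.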

TL;DR

Enterprise AI forecasting programs don't fail because the models are bad. They fail because (1) the data layer is underfunded, (2) the planner workflow isn't redesigned, and (3) the contract is structured for vendor revenue rather than operating outcomes. The disillusionment showing up in 2026 isn't an AI failure — it's an operating-model failure.

Curious if others are seeing the same three modes, or if there's a fourth I'm missing. Also: has anyone actually cracked the post-override accuracy reporting problem at scale? That feels like the metric the whole industry should be using and almost no one is.

reddit.com
u/heizen_91 — 10 hours ago
▲ 14 r/Procurement_HI_AI+2 crossposts

Bain says agentic AI delivers 60% procurement productivity gains, but only 5% of orgs have it deployed. The gap isn't a tool problem.

I've been working through Bain's new report, "The Rise of Autonomous, Intelligent Procurement," and a few stats stuck out:

- 60%+ procurement productivity gain where AI is effectively deployed

- 3–7% incremental savings on spend

- $180M projected from a single scaled agentic deployment

- ROI up to 5x

The part I keep circling back to: only ~5% of procurement orgs have AI fully deployed. ~60% are in planning or pilot.

The default read I'm seeing on LinkedIn this week is basically "pick the right agentic source-to-pay vendor and capture the upside." I don't think that's what the report actually says.

A sourcing tool waits for a buyer to specify the category, suppliers, criteria, timing. A sourcing agent monitors the category continuously, decides when an event is warranted, prepares the tender, qualifies suppliers, and surfaces a buyer only when a strategic trade-off needs human judgment.

That's not a software upgrade. That's a change in who initiates action — and most enterprise S2P stacks weren't built to host autonomous agents alongside human buyers in the same category.
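
The "who initiates action" shift is easier to see in code than in prose. A hedged sketch, everything here invented (the thresholds, the signal fields, the decision labels), of what an agent's monitoring step looks like versus a tool that waits for a buyer:

```python
# Toy sketch of the tool-vs-agent distinction: the agent owns the trigger.
# All fields and thresholds are illustrative; no real S2P API is assumed.
from dataclasses import dataclass

@dataclass
class CategorySignal:
    category: str
    price_drift_pct: float      # market price vs contracted price
    contract_days_left: int

def agent_step(signal: CategorySignal) -> str:
    """Decide, without a buyer asking, whether a sourcing event is warranted."""
    if signal.contract_days_left < 60 or signal.price_drift_pct > 5.0:
        if signal.price_drift_pct > 15.0:
            # Strategic trade-off: surface a human buyer for judgment
            return "escalate-to-buyer"
        # Routine: the agent prepares and runs the event itself
        return "launch-sourcing-event"
    return "keep-monitoring"

print(agent_step(CategorySignal("packaging", price_drift_pct=7.2,
                                contract_days_left=120)))
```

Notice the human only appears in one branch. In a tool, the human is the entry point of the whole function. That inversion is what most S2P stacks, with their buyer-initiated workflows and approval chains, weren't designed to host.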

McKinsey's recent work points the same way — they cite a chemicals company piloting autonomous sourcing in consumables that lifted staff efficiency 20–30% and pushed value capture up 1–3% on the spend in scope. The wins all come from workflow redesign, not vendor swap.

Curious what people on the inside are actually seeing:

- For those piloting AI agents in procurement — what's the actual blocker? Data? Governance? Change management? Vendor immaturity?

- Has anyone seen a deployment where the workflow was redesigned first vs. agents bolted onto existing source-to-pay?

- Are your suppliers deploying agents on their side yet? (My read is the buyer-with-tools / supplier-with-agents asymmetry is going to bite first.)

reddit.com
u/heizen_91 — 9 hours ago