u/mangoman40114

Bootstrap on my first 421 picks: 88% confidence of long-run +ROI, but I'm 42.8% straight up. What am I missing?

Spent the last few months building a probabilistic prediction model for NBA and MLB game outcomes. Standard hobbyist stack: Elo + recent form + injury drag + pitcher-level priors for MLB + line-movement signal + per-sport calibration shrink. Outputs a calibrated p(side wins) for each market.

Yesterday I finally ran proper validation on the 421 settled picks, and the result is interesting enough that I want to ask for methodology critique.

**The headline tension:**

* Raw hit rate: 42.8% (n=421, Wilson 95% CI [38.1%, 47.5%])

* Sounds bad. The standard -110 breakeven is 52.4%, so the naive read is "the model is losing."

* But the mean decimal odds taken is 2.94 (the model picks a lot of dogs and small parlays), so the actual breakeven for this bet mix is 42.4%.

* Bootstrap on actual P/L (1000 resamples, 1u stakes): mean ROI +8.6%, 95% CI [-5.4%, +22.4%], P(ROI > 0) = 0.885.

Per sport:

* MLB n=322: hit_rate 44.7%, breakeven 43.9%, bootstrap mean ROI +6.65%, P(>0) = 0.798

* NBA n=94: hit_rate 38.3%, breakeven 37.9%, bootstrap mean ROI +19.94%, P(>0) = 0.851

So the bootstrap is saying long-run +EV is more likely than not, but I'm at a sample size where the confidence interval on ROI still crosses zero. The naive read of "I'm losing because the hit rate is below 50%" is misleading because this bet mix has a much lower breakeven than a standard -110 slate.
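
For anyone who wants to poke at the numbers, this is roughly the computation (a minimal sketch, not my actual pipeline; `odds` is the decimal odds taken and `won` is 0/1 per settled pick, and I'm treating the mix breakeven as the average of per-pick breakevens, 1/odds):

```python
import numpy as np

rng = np.random.default_rng(42)

def flat_stake_pnl(odds, won, stake=1.0):
    """Per-pick P/L at flat 1u stakes: +stake*(odds - 1) on a win, -stake on a loss."""
    return np.where(won == 1, stake * (odds - 1.0), -stake)

def bootstrap_roi(odds, won, n_resamples=1000):
    """Resample picks with replacement; ROI = total P/L / total staked."""
    pnl = flat_stake_pnl(odds, won)
    n = len(pnl)
    rois = np.array([pnl[rng.integers(0, n, n)].sum() / n for _ in range(n_resamples)])
    return {
        "mean_roi": rois.mean(),
        "ci_95": np.percentile(rois, [2.5, 97.5]),
        "p_roi_positive": (rois > 0).mean(),
    }

def mix_breakeven(odds):
    """Average of per-pick breakeven probabilities (1 / decimal odds).
    Note: this is not the same as the flat-stake aggregate breakeven, n / sum(odds)."""
    return np.mean(1.0 / odds)
```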

**The validation finding (the actual question):**

I bucket every pick into confidence tiers based on (model_p, fanduel_edge). The CLV-aware data on the top tier surprised me:

* Top tier (n=108 settled, 5 with closing-line data): 5/5 beat the closing line, +21.27pt avg CLV, +24.56% bucket ROI

* Middle tier (n=199, 19 with CLV): 73.7% beat-close, +1.46pt avg CLV, +8.06% ROI

* Auto-parlay tier (n=86): 25% hit, -18.81% ROI. This is broken. Generation thresholds were too loose.

The high-confidence tier is doing real work: 5/5 beat-close (tiny sample, but consistent direction) plus +21pt average CLV suggests the model is picking the sharper side of the market on its strongest signals. The auto-parlay tier is hemorrhaging because parlay miscalibration compounds multiplicatively, while my per-sport calibration shrink is tuned for singles.
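
For anyone asking how the CLV numbers are computed: implied-probability points, the price I took vs the closing price on the same side (sketch; column names are illustrative, and rows without a closing line are simply left out of the CLV averages):

```python
import numpy as np
import pandas as pd

def add_clv(picks: pd.DataFrame) -> pd.DataFrame:
    """CLV in probability points: closing implied prob minus taken implied prob.
    Positive means I got a better number than the close on the same side."""
    out = picks.copy()
    out["clv_pts"] = 100.0 * (1.0 / out["closing_odds"] - 1.0 / out["taken_odds"])
    out["beat_close"] = np.where(out["clv_pts"].notna(), out["clv_pts"] > 0, np.nan)
    return out

def tier_summary(picks: pd.DataFrame) -> pd.DataFrame:
    """Per-tier hit rate, flat-stake ROI, and CLV stats (rows with no closing line are
    ignored in the CLV columns because their clv_pts is NaN)."""
    g = picks.groupby("tier")
    return pd.DataFrame({
        "n": g.size(),
        "hit_rate": g["won"].mean(),
        "roi": g["pnl"].sum() / g.size(),
        "n_clv": g["clv_pts"].count(),
        "avg_clv_pts": g["clv_pts"].mean(),
        "beat_close_rate": g["beat_close"].mean(),
    })
```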

**What I'd love methodology feedback on:**

  1. **Single-leg vs. parlay calibration.** I shrink model_p toward 0.5 based on per-(sport, market_type) historical hit-rate gaps. Singles are well calibrated, but when I multiply N calibrated leg probabilities to get a parlay probability, the residual miscalibration compounds and the parlay probability is consistently overstated. Has anyone solved this cleanly: leg-level Platt scaling tuned specifically for parlay use, hierarchical Bayesian per-leg priors, something else? (A rough sketch of the Platt option is below the list.)
  2. **CLV stamping coverage.** I currently have closing-line data on only 24 of the 421 settled picks because the snapshot loop wasn't running reliably for the early months. Going forward, every new pick gets stamped automatically. Should I weight calibration adjustments toward the CLV-validated rows even at small n, or wait for more data?
  3. **Bootstrap interpretation.** With P(ROI > 0) = 0.885 and 95% CI crossing zero, what's the responsible way to communicate this externally? "Probably profitable" feels honest but is harder to falsify than a Sharpe-style number. Curious how people working on similar discrete-outcome prediction systems frame their confidence.
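
To make question 1 concrete, this is the rough shape of the leg-level Platt option (a sketch of the idea, not something I've validated: fit one recalibration per (sport, market_type) bucket on settled singles, apply it to each leg, multiply, and optionally haircut the product):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_leg_calibrator(raw_p, won):
    """Platt-style recalibration: logistic regression of the outcome on the model's logit,
    fit on settled single-leg picks for one (sport, market_type) bucket."""
    logits = np.log(raw_p / (1.0 - raw_p)).reshape(-1, 1)
    return LogisticRegression(C=1e6).fit(logits, won)  # large C ~ unregularized Platt fit

def recalibrate(calibrator, raw_p):
    """Apply the fitted calibrator to raw model probabilities."""
    logits = np.log(raw_p / (1.0 - raw_p)).reshape(-1, 1)
    return calibrator.predict_proba(logits)[:, 1]

def parlay_prob(calibrated_leg_probs, haircut=0.95):
    """Independent product of recalibrated legs, with a crude multiplicative haircut for
    residual per-leg optimism / correlation. haircut=1.0 is the plain product."""
    return haircut * float(np.prod(calibrated_leg_probs))
```

The haircut is obviously a hack; the hierarchical-Bayes version I'm imagining would shrink each leg's logit toward a per-bucket prior instead, which is exactly the part I don't know how to do well.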

Open-book journal where every pick is logged before game time and graded automatically against ESPN's scoreboard. Happy to share the link in a comment if it's useful for context; it's not the point of the post.

reddit.com
u/mangoman40114 — 1 day ago

Building a sports outcome predictor that uses Polymarket as a calibration anchor. Open-book journal, 421 settled picks, looking for feedback.

Been building a probability model for NBA and MLB outcomes for the last few months. One of the more interesting things I've ended up with is a side-by-side comparison of my model's prediction vs the corresponding Polymarket implied probability for each game.

The thinking: PM is the closest thing to a calibrated, real-money truth source for sports outcomes. FanDuel/DraftKings lines are anchored to vig and book risk, not raw probability. So when I want to ask "is my model overconfident on a specific bucket," PM is the best second opinion I have.

What I've actually seen across 421 settled picks:

- Hit rate 42.8% on a bet mix where actual breakeven (given the odds taken) is 42.4%

- Bootstrap on the P/L: mean ROI +8.6%, 95% CI [-5.4%, +22.4%], P(ROI > 0) = 0.885

- Top confidence tier (n=108): 51% hit, +24.56% bucket ROI, and on the small subset where I have closing-line data (5 picks), all of them beat the close

The model uses PM as a soft anchor on its blended probability when PM volume is high enough to trust. For low-volume markets, PM is too noisy, so I let the model run on its own intrinsic features.
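
Concretely, the "soft anchor" is just a volume-gated blend in probability space, roughly like this (sketch; the volume floor and anchor weight below are placeholders, and picking them less arbitrarily is part of what I'm asking about):

```python
def blend_with_polymarket(model_p: float, pm_p: float | None, pm_volume: float | None,
                          min_volume: float = 10_000.0, anchor_weight: float = 0.3) -> float:
    """Pull the model probability toward the Polymarket implied probability when the market
    has enough volume to trust; otherwise run purely on the model's own features.
    min_volume and anchor_weight are illustrative placeholders, not tuned values."""
    if pm_p is None or pm_volume is None or pm_volume < min_volume:
        return model_p
    return (1.0 - anchor_weight) * model_p + anchor_weight * pm_p
```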

Two questions I'd love feedback on from this sub specifically:

  1. How are people thinking about PM volume thresholds for sports markets? My current cutoff is fairly arbitrary, and I'm interested in whether anyone has actually done the work to find the volume floor below which PM stops being informative. (A rough sketch of the check I have in mind is below the list.)

  2. For the sports markets that DO have good PM volume, are people building models that train against PM's prices as ground-truth probability, or just using them as a calibration check after the fact?
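
For question 1, the check I have in mind (but haven't run yet) is binning markets by PM volume and scoring PM's implied probability against outcomes per bin; the floor is wherever PM stops beating a naive constant-baseline Brier score. Rough sketch with illustrative column names:

```python
import pandas as pd

def pm_brier_by_volume(df: pd.DataFrame, n_bins: int = 5) -> pd.DataFrame:
    """Brier score of Polymarket implied probability vs outcome, per volume-quantile bin.
    Expects columns pm_prob (0-1), won (0/1), and pm_volume."""
    out = df.copy()
    out["vol_bin"] = pd.qcut(out["pm_volume"], q=n_bins, duplicates="drop")
    out["brier"] = (out["pm_prob"] - out["won"]) ** 2
    summary = out.groupby("vol_bin", observed=True).agg(
        n=("brier", "size"),
        pm_brier=("brier", "mean"),
        base_rate=("won", "mean"),
    )
    # Brier of always predicting each bin's base rate, as a naive reference point
    summary["baseline_brier"] = summary["base_rate"] * (1.0 - summary["base_rate"])
    return summary
```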

The site has an open-book journal of every pick, logged before game time and graded automatically vs ESPN's final score. Happy to share the link in a comment if helpful for context.

reddit.com
u/mangoman40114 — 1 day ago
