
There is "art" in machine learning
A lot of people talk about ML as if there is one correct way to build a model.
Everyone treats log loss, Brier score, and backtesting as gospel. This is my attempt to challenge that a bit.
These Substacks are a free read; I have no intention of making money from them, touting, or grifting. I write them mostly for myself, as I find it helps me think. But selfishly I also learn a lot from writing them and from the respectful pushback, not to mention the connections and DMs I get from these.
Thought some of you might enjoy it.
https://open.substack.com/pub/thequantativegambler/p/the-art-of-player-strength-models
At a high level, I'm trying to communicate that two models can have the same MAE, R², or whatever metric you judge them on, yet be completely different models in practice, depending on communication and other objectives.
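A toy sketch of that point, with made-up numbers (these are hypothetical ratings, not from the article): two models score an identical MAE, but one has a constant, correctable bias while the other's errors are lumpy and concentrated on a few players, which matters very differently in practice.

```python
def mae(actual, pred):
    """Mean absolute error."""
    return sum(abs(a - p) for a, p in zip(actual, pred)) / len(actual)

# Hypothetical player-strength ratings (illustrative only)
actual  = [10, 12, 14, 16]
model_a = [11, 13, 15, 17]  # every prediction is +1: a constant, easily corrected bias
model_b = [12, 12, 14, 18]  # same total absolute error, but concentrated on two players

print(mae(actual, model_a))  # 1.0
print(mae(actual, model_b))  # 1.0 -- identical metric, very different model
```

Any summary metric collapses the error distribution down to one number, so it can't tell you which of these failure modes you're buying.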