Why do so many applied papers still report p-values without effect sizes, and does anyone actually find p-values alone useful?
I review a fair amount of applied quantitative work and I keep running into the same pattern: tables full of p-values and significance stars, but no standardized effect sizes, no confidence intervals around the estimates, nothing that tells you whether the effect actually matters in practice. A regression coefficient of 0.002 with p < 0.001 tells me the sample is large, not that the effect is interesting.
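To make that last point concrete, here's a quick simulation sketch (all numbers hypothetical: a true slope of 0.002, noise SD of 1, and ten million observations) showing how a practically negligible effect comes out "highly significant" purely because of sample size:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 10_000_000  # hypothetical: a very large sample

x = rng.normal(size=n)
y = 0.002 * x + rng.normal(size=n)  # true slope 0.002, trivial in practice

res = stats.linregress(x, y)

# p-value is tiny, but the predictor explains almost none of the variance
print(f"slope = {res.slope:.4f}")
print(f"p     = {res.pvalue:.2e}")
print(f"R^2   = {res.rvalue**2:.2e}")
```

The p-value clears any conventional threshold, while R² is on the order of 10⁻⁶: the significance star is telling you about n, not about the size of the effect. A confidence interval on the slope, or any standardized effect-size measure, would make that immediately visible in a results table.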
I know the ASA (American Statistical Association) put out a statement on this years ago, and I've seen plenty of arguments for reporting effect sizes. But the practice hasn't really changed in a lot of fields. Is there a reason people genuinely find p-values alone informative? Or is it just institutional inertia at this point - reviewers expect stars, so authors provide stars?
u/PLogacev — 4 days ago