
> Stopping a trial early and its statistical implications should, by the way, be somewhat familiar to web developers: it's commonly done with A/B tests

AFAIK stopping an A/B test early is the way to go if you want to convince yourself (or your customer) that something has an effect, even when it doesn't.




Yes, if you "peek" at the results of an A/B test before it's done in order to decide whether to stop early, the numbers you "peeked" at carry much weaker statistical guarantees (the false-positive rate is inflated) than if you had forsworn stopping early. Obviously, failing to take that into account when drawing conclusions about the effectiveness of the treatment is a colossal mistake.
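To see why, here's a quick simulation sketch (not anyone's production code; the parameters and function names are made up for illustration): run an A/A test where there is no real effect, analyze it with a plain two-proportion z-test, and compare stopping at the first "significant" peek against only testing once at the end.

    # Hypothetical simulation: an A/A test (no real difference between arms),
    # analyzed naively at |z| > 1.96. Peeking inflates the false-positive rate.
    import math
    import random

    def z_score(conv_a, n_a, conv_b, n_b):
        # Pooled two-proportion z statistic.
        p_a, p_b = conv_a / n_a, conv_b / n_b
        p_pool = (conv_a + conv_b) / (n_a + n_b)
        se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
        return 0.0 if se == 0 else (p_b - p_a) / se

    def one_trial(peek, n_per_arm=2000, peek_every=100, base_rate=0.05):
        # Returns True if this no-effect trial was (wrongly) declared significant.
        a = b = 0
        for i in range(1, n_per_arm + 1):
            a += random.random() < base_rate
            b += random.random() < base_rate
            if peek and i % peek_every == 0 and abs(z_score(a, i, b, i)) > 1.96:
                return True  # stopped early at a "significant" peek
        return abs(z_score(a, n_per_arm, b, n_per_arm)) > 1.96

    trials = 1000
    for peek in (False, True):
        fp = sum(one_trial(peek) for _ in range(trials))
        print(f"peek={peek}: false positive rate ~ {fp / trials:.3f}")

Without peeking the false-positive rate hovers around the nominal 5%; with 20 peeks per trial it comes out several times higher, even though neither arm is actually better.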

However, the loss is still quantifiable, and with the right math you can still calculate an accurate 95% confidence interval on the effect size (it will be much wider than the one from the wrong math that naively ignores the "peeking"). And of course, it's entirely possible that the treatment is so effective that even the corrected calculation puts the lower bound of that interval well above zero, i.e. the treatment clearly beats the control, in which case the responsible thing to do is to stop the trial early.
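For a rough sense of what "the right math" can look like, here's a sketch of one crude correction (a Bonferroni-style split of the 5% error budget across the planned looks; real trials more often use group-sequential boundaries, and the numbers below are invented for illustration):

    # Illustrative only: widen the interval by budgeting alpha over the number
    # of planned looks. All the data values here are made up.
    import math
    from statistics import NormalDist

    def diff_ci(conv_a, n_a, conv_b, n_b, looks=1, alpha=0.05):
        # Normal-approximation CI for the difference in conversion rates,
        # with alpha spread over `looks` interim analyses (Bonferroni).
        p_a, p_b = conv_a / n_a, conv_b / n_b
        se = math.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
        z = NormalDist().inv_cdf(1 - (alpha / looks) / 2)
        diff = p_b - p_a
        return diff - z * se, diff + z * se

    # Same data, naive single-look interval vs. one that budgets for 10 peeks:
    print(diff_ci(480, 10_000, 560, 10_000, looks=1))   # narrower, naive
    print(diff_ci(480, 10_000, 560, 10_000, looks=10))  # wider, accounts for peeking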

Here's some of the math on how much weaker your statistics get if you "peek": https://www.evanmiller.org/bayesian-ab-testing.html
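The linked page frames it in terms of Beta posteriors; a Monte Carlo version of that idea (illustrative, not Evan Miller's exact closed-form formula) is just a few lines: sample each arm's posterior conversion rate and estimate P(rate_B > rate_A) directly, instead of reading a p-value.

    # Illustrative Monte Carlo sketch of the Beta-posterior comparison.
    import random

    def prob_b_beats_a(conv_a, n_a, conv_b, n_b, draws=100_000):
        # Beta(1, 1) priors; sample each arm's posterior conversion rate.
        wins = 0
        for _ in range(draws):
            rate_a = random.betavariate(1 + conv_a, 1 + n_a - conv_a)
            rate_b = random.betavariate(1 + conv_b, 1 + n_b - conv_b)
            wins += rate_b > rate_a
        return wins / draws

    print(prob_b_beats_a(480, 10_000, 560, 10_000))  # roughly 0.99 for these made-up numbers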



