Hacker News

So am I getting this right?

(Edit) So the two steps to run are:

1. Run a traditional A/B test until 95% confidence is reached. This is full exploration.

2. Then switch to the MAB, showing the better-performing variant most of the time. As time goes on, the worse-performing variants are shown less and less.
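Step 2, as described, resembles an epsilon-decreasing policy. Here is a minimal sketch of that idea (assumptions not from the thread: Bernoulli conversions, and an arbitrary 50/t decay schedule chosen purely for illustration):

```python
import random

def epsilon_decreasing(rates, horizon, seed=0):
    """Sketch of an epsilon-decreasing policy (illustrative, not the
    article's exact method). With probability eps_t = min(1, 50 / t) a
    random variant is explored; otherwise the empirically best variant
    is exploited, so worse variants get less traffic as t grows."""
    rng = random.Random(seed)
    k = len(rates)
    counts = [0] * k   # pulls per variant
    succ = [0] * k     # conversions per variant
    for t in range(1, horizon + 1):
        eps = min(1.0, 50.0 / t)
        if rng.random() < eps or 0 in counts:
            arm = rng.randrange(k)  # explore a random variant
        else:
            # exploit: variant with the highest observed conversion rate
            arm = max(range(k), key=lambda i: succ[i] / counts[i])
        counts[arm] += 1
        succ[arm] += rng.random() < rates[arm]  # simulate a conversion
    return counts
```

With made-up conversion rates such as 0.1 and 0.5, `epsilon_decreasing([0.1, 0.5], 2000)` sends most of the later traffic to the better variant.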

Option 1 will NOT give you the correct answer. You CANNOT use a confidence interval as a stopping criterion. Each interim check is an additional hypothesis test, so you end up running many tests and need a multiple-testing correction to account for them. Otherwise you run a VERY HIGH risk of picking the wrong result.
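A quick A/A simulation makes the point concrete (the numbers here — identical 50% conversion on both arms, 20 interim looks of 200 visitors per arm — are arbitrary demo choices): stopping the first time a z-test crosses 1.96 fires on pure noise far more often than the nominal 5%.

```python
import math
import random

def peeking_false_positive_rate(n_experiments=200, peeks=20, batch=200, seed=0):
    """Simulate A/A tests (both arms identical, p = 0.5) and stop the
    first time the two-proportion z-statistic exceeds 1.96 at any interim
    look. Returns the fraction of runs declared 'significant' — the
    realized false-positive rate, well above the nominal 5%."""
    rng = random.Random(seed)
    false_positives = 0
    for _ in range(n_experiments):
        a_succ = b_succ = n = 0
        for _ in range(peeks):
            for _ in range(batch):           # collect one batch per arm
                a_succ += rng.random() < 0.5
                b_succ += rng.random() < 0.5
            n += batch
            p_pool = (a_succ + b_succ) / (2 * n)
            se = math.sqrt(p_pool * (1 - p_pool) * 2 / n)
            if se > 0 and abs(a_succ / n - b_succ / n) / se > 1.96:
                false_positives += 1         # stopped early on noise
                break
    return false_positives / n_experiments
```

Even though the two arms are identical, repeatedly peeking inflates the error rate several-fold over the 5% you'd expect from a single look.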

I emphasize this because it is a common mistake made by A/B-test practitioners. For a fuller discussion of the problems, check out the papers by Armitage (frequentist) and Anscombe (Bayesian) on the topic. Or see my summary of the issue here:


Sorry I wasn't clear. I meant run #1 first, then run #2. I didn't mean them as different options.

This article suggests another option: run UCB1 (a specific MAB variant) from day one, and you get the benefits of a MAB without the limitations of A/B testing or the mathematical fallacies of the earlier "20 lines of code" MAB for n-cell tests.
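For reference, UCB1 itself fits in a few lines. A minimal sketch (assuming Bernoulli conversions; the arm rates and horizon in the usage note are made-up demo numbers, not from the article):

```python
import math
import random

def ucb1(rates, horizon, seed=0):
    """UCB1 sketch for Bernoulli arms: after one pull of each arm,
    always play the arm maximizing mean_i + sqrt(2 * ln(t) / n_i)."""
    rng = random.Random(seed)
    k = len(rates)
    counts = [0] * k   # n_i: pulls per arm
    sums = [0.0] * k   # total reward per arm
    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1  # initialization: play each arm once
        else:
            arm = max(range(k),
                      key=lambda i: sums[i] / counts[i]
                      + math.sqrt(2.0 * math.log(t) / counts[i]))
        counts[arm] += 1
        sums[arm] += rng.random() < rates[arm]  # simulated conversion
    return counts
```

For example, `ucb1([0.1, 0.5], 2000)` concentrates the vast majority of pulls on the better arm — exploration and exploitation happen together, with no separate 95%-confidence phase.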

I thought the article was suggesting that step #1 is part of UCB1?
