
Using the techniques described on my blog [0], which are well suited to KPIs like conversion rates and to small sample sizes (since no Gaussian approximation is made), I get a p-value of 0.177, which is not significant. The observed treatment effect is a 36.8% lift in conversion rate, but a confidence interval on this effect runs from -10.7% to +97.4%. Any true lift in that range is consistent with the observed data at a 0.05 significance threshold.
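The blog describes exact (non-Gaussian) methods; as one illustration of that family, here is a sketch of Fisher's exact test in pure Python. The conversion counts are hypothetical, chosen only to be consistent with the stated 36.8% lift (26/19 - 1 ≈ 36.8%); the actual data and the blog's exact method may give a somewhat different p-value.

```python
from math import comb

def fisher_exact_two_sided(a, n1, b, n2):
    """Two-sided Fisher's exact test comparing a/n1 vs b/n2 conversions.
    Sums hypergeometric probabilities of all tables that are no more
    probable than the observed one."""
    k = a + b                      # total conversions
    N = n1 + n2                    # total impressions
    denom = comb(N, k)
    def p_table(x):
        return comb(n1, x) * comb(n2, k - x) / denom
    p_obs = p_table(a)
    return sum(p_table(x) for x in range(k + 1)
               if p_table(x) <= p_obs + 1e-12)

# Hypothetical counts consistent with the stated 36.8% lift:
# 19/3000 conversions in control, 26/3000 in treatment.
p = fisher_exact_two_sided(19, 3000, 26, 3000)
print(f"lift = {26/19 - 1:.1%}, p = {p:.3f}")
```

With counts this small, the exact test comfortably fails to reject at 0.05, matching the parent comment's conclusion even though the exact p-value depends on the true counts.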

With 6000 total impressions and a 50/50 split, the experiment can only reliably detect a 74% lift in conversion rate (at power = 80%).
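A minimum-detectable-effect figure like this can be sketched with the standard two-sample Gaussian approximation (the exact, non-Gaussian method the parent comment uses will give somewhat different numbers at small counts). The 0.7% baseline conversion rate below is an assumption for illustration; the source does not state the baseline.

```python
from math import sqrt
from statistics import NormalDist

def minimum_detectable_lift(n_per_arm, base_rate, alpha=0.05, power=0.80):
    """Smallest relative lift detectable with the given power, using
    the two-sample Gaussian (normal-approximation) formula."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided critical value
    z_b = NormalDist().inv_cdf(power)           # power quantile
    se = sqrt(2 * base_rate * (1 - base_rate) / n_per_arm)
    return (z_a + z_b) * se / base_rate

# Hypothetical 0.7% baseline conversion rate, 3000 impressions per arm:
mde = minimum_detectable_lift(3000, 0.007)
print(f"minimum detectable lift ≈ {mde:.0%}")
```

At rates this low and samples this small, only very large lifts are reliably detectable, which is the parent comment's point.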

If you want to rigorously determine the impact, first decide what effect size you hope to see. Use a power calculator to determine the sample size needed to detect that effect. Run the test until the planned sample size is reached. When analyzing, be sure to compute both a p-value and a confidence interval on the treatment effect.
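The power-calculator step can be sketched with the standard two-proportion formula. The 0.7% baseline and 20% target lift are assumptions for illustration, not figures from the source.

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_arm(base_rate, rel_lift, alpha=0.05, power=0.80):
    """Impressions needed per arm to detect a given relative lift,
    via the two-proportion Gaussian approximation."""
    p1 = base_rate
    p2 = base_rate * (1 + rel_lift)
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    var_sum = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_a + z_b) ** 2 * var_sum / (p2 - p1) ** 2)

# Hypothetical: 0.7% baseline, aiming to detect a 20% relative lift:
n = sample_size_per_arm(0.007, 0.20)
print(f"need ≈ {n:,} impressions per arm")
```

Note how much larger this is than the 3000 per arm actually collected: detecting modest lifts on rare conversions takes tens of thousands of impressions per arm.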

[0] https://www.adventuresinwhy.com/post/ab-testing-random-sampl...





