
Start here: Statistics for A/B testing - guico
https://medium.com/@iamguico/start-here-statistics-for-a-b-testing-5f5c7e02ce1e#.ih7a9mt6z
======
yummyfajitas
I like this article a lot. But there is one thing that it gets a bit wrong.

The article discusses the standard textbook Z-test, but then talks
a lot about Optimizely. However, Optimizely doesn't actually use the Z-test -
they use a sequential testing method instead, and the details are a bit
different.
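
For concreteness, here's a minimal sketch of the textbook two-proportion Z-test the article is describing (the conversion counts are made up):

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Textbook two-proportion Z-test: the null hypothesis is that
    both variants share the same underlying conversion rate."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis.
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Example: 200/10000 conversions on A vs 260/10000 on B.
z, p = two_proportion_z_test(200, 10_000, 260, 10_000)
print(f"z = {z:.2f}, p = {p:.4f}")
```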

The article also suggests "start by serving variant B to only 10% of the users
to ensure there are no implementation problems". This is a good idea, but once
you've ensured there are no implementation problems you need to throw away the
data and restart. Since conversion rates change during the week (e.g., Saturday
!= Tuesday), keeping the data from the ramp-up period is a great way to get
wrong results due to Simpson's Paradox.
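
Here's a toy illustration of that bias (all numbers hypothetical): B behaves identically to A within every period, yet pooling the 10% ramp-up data makes B look like a big winner, purely because B's traffic is concentrated on the higher-converting day.

```python
# Hypothetical scenario: conversion rates differ by day, and
# variant B gets 10% of traffic during the Saturday ramp-up,
# then 50% once fully rolled out on Tuesday.
periods = {
    # period: (users_a, conv_a, users_b, conv_b)
    "saturday (ramp-up, B at 10%)": (9000, 180, 1000, 20),   # both convert at 2%
    "tuesday  (full 50/50 split)":  (5000, 250, 5000, 250),  # both convert at 5%
}

for name, (ua, ca, ub, cb) in periods.items():
    print(f"{name}: A = {ca/ua:.1%}, B = {cb/ub:.1%}")

# Pooling the ramp-up data with the 50/50 data:
users_a = sum(p[0] for p in periods.values())
convs_a = sum(p[1] for p in periods.values())
users_b = sum(p[2] for p in periods.values())
convs_b = sum(p[3] for p in periods.values())
print(f"pooled: A = {convs_a/users_a:.2%}, B = {convs_b/users_b:.2%}")
# A and B convert identically on each day, yet the pooled numbers
# show B "winning" (~4.5% vs ~3.1%), because most of B's traffic
# arrived on the higher-converting weekday.
```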

~~~
guico
Hi Chris, thanks for the remarks - I'll add them to the article as an edit.
Thumbs up!

------
ep103
Would love a non-Optimizely guide to building proper A/B tests, if anyone
knows of one.

~~~
yummyfajitas
I've got some documents up describing the Bayesian approach to testing:

https://www.chrisstucchio.com/pubs/slides/gilt_bayesian_ab_2015/slides.html

https://cdn2.hubspot.net/hubfs/310840/VWO_SmartStats_technical_whitepaper.pdf

In my view this is the most comprehensible approach to A/B testing, and it
answers the questions people really want answered.
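
For a flavor of the basic idea (this is just the simple Beta-Binomial model, not the exact method in the documents above): put a Beta(1, 1) prior on each variant's conversion rate, then estimate P(B > A) by sampling the posteriors.

```python
import random

def prob_b_beats_a(conv_a, n_a, conv_b, n_b, samples=100_000, seed=0):
    """Estimate P(rate_B > rate_A) by Monte Carlo sampling from the
    Beta posteriors implied by a uniform Beta(1, 1) prior on each rate."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(samples):
        rate_a = rng.betavariate(1 + conv_a, 1 + n_a - conv_a)
        rate_b = rng.betavariate(1 + conv_b, 1 + n_b - conv_b)
        wins += rate_b > rate_a
    return wins / samples

# Example: 200/10000 conversions on A vs 260/10000 on B.
print(f"P(B > A) ~ {prob_b_beats_a(200, 10_000, 260, 10_000):.3f}")
```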

Disclaimer: I'm the director of data science at VWO and this is the approach
our new A/B testing engine uses.

