
A/B Testing: How Much Data Do You Need?: Blog: Fuel Interactive - RexDixon
http://www.fuelinteractive.com/blog/2008/10/ab-testing-how-much-data-do-yo.cfm
======
aresant
If you're interested in this post: the real key to knowing how much data you
need to call a test a valid winner is standard deviation.

There's a detailed explanation of this in the posts below:

[http://www.conversionvoodoo.com/blog/what-is-ab-and-multivar...](http://www.conversionvoodoo.com/blog/what-is-ab-and-multivariable-testing/)

[http://blog.joshbaker.com/2009/01/21/standard-deviation-and-...](http://blog.joshbaker.com/2009/01/21/standard-deviation-and-marketing-how-why/)

[http://snaphawk.blogspot.com/2009/07/how-does-google-website...](http://snaphawk.blogspot.com/2009/07/how-does-google-website-optimizer.html)

------
jfarmer
Not much content in the article, and they don't even answer the question
(which involves math).

How much data do you need? That depends on two things: the expected effect
size and how tight you need your confidence interval to be.

With enough data you will ALWAYS get statistical significance. The tighter
your confidence interval, the more data you need; and the smaller your
expected effect size, the more data you need.
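To see how effect size and confidence drive the data requirement, here's a minimal sketch using the standard two-proportion z-test sample-size formula. The baseline conversion rate (5%) and the lifts are made-up numbers purely for illustration:

```python
from statistics import NormalDist  # stdlib, Python 3.8+

def samples_per_arm(p_base, rel_lift, alpha=0.05, power=0.80):
    """Approximate visitors needed per arm to detect a relative lift
    over a baseline conversion rate (two-sided two-proportion z-test)."""
    p_test = p_base * (1 + rel_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for 95% confidence
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    var = p_base * (1 - p_base) + p_test * (1 - p_test)
    return ((z_alpha + z_beta) ** 2 * var) / (p_base - p_test) ** 2

# A 20% relative lift on a 5% baseline needs roughly 8k visitors per arm;
# a 1% relative lift on the same baseline needs millions.
big_lift = samples_per_arm(0.05, 0.20)
tiny_lift = samples_per_arm(0.05, 0.01)
```

Halving the detectable effect roughly quadruples the required sample, which is why chasing 1% improvements is so expensive.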

For example, these are two different questions:

1. What is the likelihood that the observed difference between the test and
control candidates is real?

2. What is the likelihood that the test candidate is at least a 20%
improvement over the control candidate?

If you're swinging for the fences and need 10-50% improvements in your
metrics, you can shut down tests early that prove unlikely to generate those
kinds of returns.
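One rough way to operationalize that early shutdown is a futility check: if the upper end of the confidence interval on the observed lift already sits below your target improvement, the test is unlikely to get there. This is a sketch using a normal-approximation CI on the difference in conversion rates; the function name and all the numbers are hypothetical:

```python
from statistics import NormalDist  # stdlib, Python 3.8+

def unlikely_to_hit_target(conv_a, n_a, conv_b, n_b,
                           target_lift=0.20, conf=0.95):
    """True if the upper confidence bound on the test candidate's lift
    is already below the target relative improvement (futility check).
    Uses a normal approximation on the difference in proportions."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    se = (p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b) ** 0.5
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)  # ~1.96 for 95%
    upper = (p_b - p_a) + z * se        # upper bound on absolute lift
    return upper < target_lift * p_a    # can't plausibly reach target

# After 10k visitors per arm with nearly identical conversion, a 20%
# lift is already out of reach, so you can stop the test early.
stop = unlikely_to_hit_target(500, 10_000, 505, 10_000)
```

With only 1,000 visitors per arm the same rates would not trigger the check, because the interval is still wide enough to contain the target.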

The "usual" way of doing things is to let the A/B test run until you reach
statistical significance, regardless of effect size. But unless you're Google
or Facebook, spending 100,000 impressions to get your 1% improvement is
probably not worth it.

