

Ask HN: How many people needed for effective A/B testing? - gs7

Hi HN,<p>I run a review website (similar to Yelp) for a niche business market. Currently, business owners can manage their listings for free, but I would like to make some money now by offering additional features. I don't know if I should offer one premium tier for say $29 a month with all the features, or multiple tiers ranging from $19 to $39 a month depending on what features they want (think: premium, platinum, ultimate).<p>Before I create any of these features, I'm going to see if my current customers are even interested in paying for them. So I was thinking of doing A/B testing, where half of my customers would be informed about a one tier structure, and the other half about the multi tier structure. But here's my problem: I only have 170 people who are currently signed up to manage their business listing. Is it worth doing A/B testing with such a small population? Or should I just pick one option and go with it, hoping for the best?<p>TL;DR: Is having 170 people too small of a population to do effective A/B testing?<p>Thanks!<p>EDIT: changed the wording to use correct statistics terminology.
======
drnex
I would say it depends on the results. If from the 170 you get a conclusive
answer, then I'd say it's a go; if the results aren't conclusive enough, you
can keep your A/B test running until they are.

------
drnex
you should ask a statistician though

------
fleitz
Well if all you have is 170 people then you're not sampling but testing the
population.

I'd go with the A/B testing option just to get more experience with it. Having
data at a low confidence level is better than having no data at all. Maybe
it's a wash, maybe it's actually very important. Worst case is there's no
overwhelming winner and you're back where you started.

Use the following link to determine the sample size you need.
<http://www.surveysystem.com/sscalc.htm>
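For what it's worth, the math behind that kind of calculator is standard. A rough stdlib-only Python sketch (assuming a 95% confidence level, a 5% margin of error, and the worst-case 50/50 response split, with the finite population correction since there are only 170 customers):

```python
import math

def sample_size(population, z=1.96, margin=0.05, p=0.5):
    """Sample size needed to estimate a proportion, with the
    finite population correction applied for small populations."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2   # infinite-population size (~385)
    n = n0 / (1 + (n0 - 1) / population)        # finite population correction
    return math.ceil(n)

print(sample_size(170))  # ~119 responses needed out of 170
```

So even with the whole customer base, you'd want responses from well over half of them to get a tight estimate.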

~~~
gs7
Thanks for the link and the correct terminology (I updated my post).

~~~
fleitz
To expand, it really depends on what the results of the testing are:

If you split your 170 customers into two groups of 85, and 40 of 85 pick A
while 5 of 85 pick B, then it's extremely likely that A is the better choice.
If 39 pick A and 40 pick B, then you'd need a larger sample size to overcome
the margin of error. Of course, at a 95% confidence level, any result even in
excess of the MOE is pure chance 1 out of 20 times.
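To make that concrete, here's a quick two-proportion z-test sketch in Python (stdlib only; the inputs are the hypothetical 40-vs-5 and 39-vs-40 splits above, not real data):

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-proportion z-test: returns (z statistic, two-sided p-value)."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided, via the normal CDF
    return z, p_value

print(two_proportion_z(40, 85, 5, 85))   # p-value far below 0.05: A clearly wins
print(two_proportion_z(39, 85, 40, 85))  # p-value near 1: no detectable difference
```

The first split is significant even with only 85 per group; the second would need a much larger sample to say anything at all.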

What I'd do is split your customer base into three groups of 50%, 25%, and
25%; test each 25% group, then apply the winner to the 50%.

~~~
gs7
That's a great suggestion, thanks a lot!

