

Don't fool yourself with A/B testing - squiggy22
http://blog.whatclinic.com/2010/09/don%E2%80%99t-fool-yourself-with-ab-testing.html

======
RBr
This grain of salt article about A/B testing is smart. Specifically, this is
something that I learned too:

"Do user testing. Get people in and ask them to use your product. You’re going
to get a lot more information a lot faster and have a higher degree of
confidence in the results."

Expanding on this, I think that it's important not just to rally a group of
your friends or other webheads. The closer that you can get to assembling a
group of target users, the better.

On ecommerce sites with a checkout, one of my favorite tricks is to send a
manual e-mail to a handful of people who have become customers. I keep it
clear and simple: I'd like to talk to them about the checkout process, there
will be 3-5 other customers on the conference call, and they'll get a
significant coupon (25% off plus free shipping, or something similar) if they
participate. During the call, I talk to them about their shopping experience
and show them a revision or two of different designs. I ask them to (fake) buy
a product using the revisions. The feedback (especially with early designs) is
consistently some of the most valuable I receive and it's surprising how much
of a positive effect talking to people has. With one phone call asking for
feedback, you often gain a repeat customer.

------
patio11
If you don't understand how A/B testing works, and are unwilling to read the
clear explanation on the dashboard telling you that your results are not
significant, your problem is in your chair, not with A/B testing.

------
jyothi
The blog says don't do A/B tests for "small changes". On the contrary, I
actually think A/B tests over a large number of users work better for small
changes than user testing with a small set.

Many users cannot articulate the benefit of a small change; eye tracking or
building a mental map of what the user thinks is very hard, and a conclusion
drawn from such a small set of users can be badly skewed.

Additionally, the blog is misleading here: "After 11,000 tests and 400
conversions it clearly showed that the instructions made a 30% difference."

Below are the actual A/B test numbers from the blog:

Original: 193/5943 => 3.24%

Combination 1: 236/6058 => 3.89%

1\. Quoting this as a 30% improvement is wrong. Google only said that if you
use Combination 1 you would see about 20% higher leads.

2\. Since the difference between the conversion rates is not very large, one
has to run the test for a very long time, and early on Combination 1 was no
better than the original.

Given 1 & 2, Google did mark the results as inconclusive, telling us not to
be fooled.
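
For anyone who wants to check the arithmetic: Combination 1's 236/6058
against the original's 193/5943 is a relative lift of roughly 20%, not 30%,
and a plain two-proportion z-test on those counts comes out borderline
(z ≈ 1.9, two-sided p ≈ 0.06), which matches the "inconclusive" verdict. A
minimal sketch, standard library only; the counts are the ones quoted above,
and the choice of test is mine, not something the blog or Google's tool
specifies:

```python
from statistics import NormalDist

# Counts quoted from the blog post
conv_a, n_a = 193, 5943   # Original
conv_b, n_b = 236, 6058   # Combination 1

p_a, p_b = conv_a / n_a, conv_b / n_b
print(f"Original:      {p_a:.2%}")
print(f"Combination 1: {p_b:.2%}")
print(f"Relative lift: {p_b / p_a - 1:.1%}")   # roughly 20%, not 30%

# Two-proportion z-test, pooling the rate under H0: p_a == p_b
pooled = (conv_a + conv_b) / (n_a + n_b)
se = (pooled * (1 - pooled) * (1 / n_a + 1 / n_b)) ** 0.5
z = (p_b - p_a) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))
print(f"z = {z:.2f}, two-sided p = {p_value:.3f}")   # ~1.9 and ~0.06: borderline
```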

Edit: rephrased

------
aresant
A crucial element left out of the discussion of A/B testing is the
importance of user segmentation.

You'll see drastic swings by the day of the week, male / female users, traffic
sources, entrance paths, browser, region, etc.

A/B tests can be terribly misleading without awareness of such factors.

A simple illustration of this is email response rates, which show that
Sundays are far and away the best day to get people's attention (the highest
aggregate open/click rate, about 20% above the worst day of the week).

But conversely, we've found through the experience of managing thousands of
tests that while opens & clicks are high on Sundays, purchase conversion
rates are on the low end of the scale.

Just food for thought - it's absolutely, positively worth rolling up your
sleeves, diving deep into the data, and segmenting your tests (or at least
being aware of such external influences) before you take your results to the
bank.

<http://www.mailermailer.com/resources/metrics/daily-rates.rwp>

<http://conversionvoodoo.com/blog/2010/07/personalizing-your-email-subjects-can-drop-your-conversion-rate/>
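
To make the segmentation point concrete, here is a toy sketch of how an
aggregate A/B "winner" can be worse in every individual segment once you
split by traffic source. The counts are invented purely to illustrate the
reversal:

```python
# Hypothetical conversion counts: (conversions, visitors) per variant,
# split by traffic source. Variant B gets most of its traffic from the
# high-converting "search" segment.
data = {
    "search": {"A": (60, 200), "B": (224, 800)},   # high-converting source
    "social": {"A": (80, 800), "B": (18, 200)},    # low-converting source
}

totals = {"A": [0, 0], "B": [0, 0]}
for segment, variants in data.items():
    for variant, (conv, n) in variants.items():
        totals[variant][0] += conv
        totals[variant][1] += n
        print(f"{segment:>7} {variant}: {conv / n:.1%}")

for variant, (conv, n) in totals.items():
    print(f"overall {variant}: {conv / n:.1%}")

# Per segment, B converts worse than A (28% vs 30%, 9% vs 10%), yet in
# aggregate B looks better (24.2% vs 14.0%) because its traffic skews
# toward the high-converting segment -- exactly the kind of swing that
# segmenting by source, day of week, etc. would expose.
```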

~~~
dillydally
<http://en.wikipedia.org/wiki/Simpsons_paradox>

~~~
aresant
Had not seen that thank you!

