

Kayak's Most Interesting Mobile A/B Test - nancyhua
http://apptimize.com/blog/2014/03/kayaks-most-interesting-ab-test/

======
smackfu
Interesting. I wonder if the "SSL/TLS" part matters, or if you could get the
same result with "XRK/QNQ" or other random strings.

------
lalos
Makes sense; the interests of your target market differ. Some apps have
tech-savvy users who will be reassured by that message, while other apps have
users who simply won't care, or who will be confused by the "SSL/TLS" reminder
and discouraged from completing the purchase. Anyway, this is the beauty of
A/B testing: you try things and see what works better in your specific case
instead of generalizing about behaviour.

~~~
grrowl
It also hinges heavily on how you're tracking these A/B results. There's less
certainty and value in simply saying "Group A ended up booking more" than if
the metrics showed that a statistically significantly higher percentage of
users completed the purchase on that particular page with the single change on
it.

Now that we know it worked for Kayak and not another company, we can delve
into _why_ to further inform everyone's future decisions. IMO a padlock with
more pure language would do better, but only another A/B test by Kayak or
similar company would validate this.

------
kvee
I wonder if Vinayak works for Kayak because his name is so similar.

[http://andrewgelman.com/2005/08/05/dennis_the_denv/](http://andrewgelman.com/2005/08/05/dennis_the_denv/)

------
raldi
Can someone give a summary of what the A/B test was and why it was
interesting?

~~~
pwenzel
The second paragraph of the article is paired with screenshots that document
the test very clearly. Here's a summary:

1. Kayak added a small message saying "SSL/TLS encrypted payment" to the
bottom of the form.

2. In Kayak's test, people tended to book less when the message was not
included.

3. According to the interview, this contradicts what other people on the
internet say.

4. Conclusion: "Just because something worked for someone else doesn’t mean
it will work for all apps. You have to try it out yourself."

~~~
rralian
I think the crux of the question is why this was so interesting, and I would
say it's just not. It wasn't surprising or interesting.

------
a_bonobo
What's the point in running A/B tests if you don't report sample sizes,
assumptions, statistical test used and p-values/F/whatever received? For all I
know, these results are completely random and useless.

I mean, there are even Excel spreadsheets for that, so you can't say it's too
hard: [http://visualwebsiteoptimizer.com/split-testing-blog/ab-testing-significance-calculator-spreadsheet-in-excel/](http://visualwebsiteoptimizer.com/split-testing-blog/ab-testing-significance-calculator-spreadsheet-in-excel/)

