

Show HN: The Whicher: A/B test the Real World - jiaaro
https://www.thewhicher.com

======
burgeralarm
This is way cool, though I can't help but think that the results might differ
from actual A/B testing because the testing context is explicit. Have you done
comparisons to backend A/B testing to see how well the two align?

It would seem that traditional A/B testing allows you to see what actually
converts, while this framework would be biased towards user preference--which
doesn't necessarily imply conversion. For example, I think Amazon's site is
ugly and busy, and given the choice between that layout and a cleaner one, I'd
probably choose the cleaner. That said, there's no way they haven't tested the
hell out of the home page and discovered that a busy page, though uglier,
converts better.

~~~
jiaaro
That's true and that's part of the reason I built it. It's not intended to be
a replacement for A/B testing, but a complement.

A/B testing is great for lots of things, but it's not always the right tool
for the job.

I use the term A/B testing loosely since it's obviously not actual A/B
testing. It's a hybrid of an A/B test and a survey that lets you ask a
question and get much higher resolution, and test many more variations than a
typical survey question, by borrowing some techniques from A/B testing.

------
btilly
I know more than a bit about statistics and A/B testing. My first reaction is
that asking for more information from a limited sample does not actually
let you do more powerful statistics. Claiming otherwise is a bad
sign.
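To make that concrete, here's a minimal sketch (my own illustrative numbers, not anything from the product) of the standard two-proportion sample-size formula: detecting a modest conversion lift needs thousands of visitors per arm, which no amount of extra per-respondent questions changes.

```python
from math import ceil, sqrt
from statistics import NormalDist

def ab_sample_size(p1, p2, alpha=0.05, power=0.80):
    """Approximate visitors needed per arm to detect a shift from
    conversion rate p1 to p2 with a two-sided two-proportion z-test
    (normal approximation)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for alpha
    z_b = NormalDist().inv_cdf(power)          # critical value for power
    p_bar = (p1 + p2) / 2
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
           + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(num / (p1 - p2) ** 2)

# Detecting a 10% -> 12% conversion lift takes thousands of visitors
# per arm -- far beyond a couple hundred survey respondents.
print(ab_sample_size(0.10, 0.12))
```

Bigger effects need far fewer subjects, which is why surveys can still be useful for coarse questions -- but the sample size, not the question depth, sets the statistical power.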

Secondly, a lot of what A/B testing gets at is subconscious reactions, not
conscious ones. When we're consciously asked a question and are paying
attention, we are poor judges of how we're going to react to, say, a larger
button to click on. But when we're actually using the site, we notice and hit
the larger button.

And thirdly, if you're actually going to seek out a small sample of people for
detailed information capture, do it in person. I can't stress this enough. If
you think up questions in a vacuum it is hard to ask the right questions. That
is part of what makes A/B testing so frustrating. But if you have a
conversation and are observant, you'll quickly generate good ideas.

~~~
jiaaro
I agree that sample size is important and that's why you still need 200+
responses.

It's a hybrid between a survey and an A/B test. You ask a question, but you
get to find out about more than just the one variation that the person is
served up.

I think it's fair to say that there _is_ more info to be gathered from each
individual than time-on-page, scroll behavior, and converted-or-didn't.

If you like, you can think of this as a way to get more resolution from a
survey-type question.
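As a rough sketch of what that resolution looks like (my numbers, just the textbook margin-of-error formula, nothing specific to The Whicher): with a forced-choice preference question, ~200 responses resolve preference gaps down to roughly plus-or-minus 7 percentage points.

```python
from math import sqrt
from statistics import NormalDist

def preference_margin(n, p=0.5, conf=0.95):
    """Margin of error for the observed share preferring variant A
    out of n forced-choice responses (normal approximation;
    p=0.5 is the worst case)."""
    z = NormalDist().inv_cdf((1 + conf) / 2)
    return z * sqrt(p * (1 - p) / n)

# ~200 responses resolve preference gaps of roughly +/- 7 points;
# quadrupling-plus the sample to 1000 tightens it to about +/- 3.
print(round(preference_margin(200), 3))
print(round(preference_margin(1000), 3))
```

So a 60/40 split on 200 responses is a real signal, while 53/47 is inside the noise.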

------
marbemac
Interesting take on a/b testing. A small suggestion - it would be nice if
clicking on the buttons after the text "Which icon do you prefer?" actually
demonstrated how the service worked. It took me a while to realize that you
need to click on the 1-2-3-4-5-6-7-8 steps to go through the process (they
don't look clickable).

~~~
jiaaro
Thanks for the feedback!

I have it auto-advancing from 1 to 2, which I thought would demonstrate that
it was interactive, but I guess it still needs work :/

~~~
marbemac
Yeah I noticed that. My immediate reaction was to click the buttons below the
"Which icon do you prefer" question though, since that's how I assume the
service would work. I didn't think to click the steps until after I had tried
clicking everything else.

Could just be me though, maybe you should a/b test it ;)

------
simmuji
I like this hybrid of survey + A/B testing. Interesting approach. I agree
with marbemac; I was also confused by the demonstration's UI on your homepage.

------
mikpoz
Just checked out the site--love the survey-A/B concept. Is this a pivot for
RootBuzz or another project?

~~~
jiaaro
It's another project.

------
jiaaro
Hey guys, I created this tool and I'll be around answering questions
throughout the day!

------
ninos
A/B testing is very good.

