
This ignores my experience, which is that marketing teams - composed of the people who dictate which A/B tests the business should run - have little to no background in statistics, let alone any interest whatsoever in actually performing legitimate A/B tests.

It's often the case that the decision maker has already decided to move ahead with option A, but performs a minimal "fake" A/B test to put in their report as a way to justify their choice. I've seen A/B tests deployed at 10am and taken down at 1pm, with fewer than a dozen data points collected. The A/B test "owner" is happy to see that option A resulted in 7 conversions, with option B only getting 5. Not statistically significant whatsoever, but hey, let's waste developers' time and energy for two days implementing an A/B test to help someone else nab their quarterly marketing bonus.




Join us, comrade, in the fight against the statistical blight!

Move your decision process to a multi-armed bandit and you never have to decide when to end an A/B test -- math does it for you, in a provably optimal way.
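
If anyone wants to see what that looks like in practice, here's a rough sketch of Thompson sampling over two variants with binary conversions (a Beta-Bernoulli bandit). The variant names and counters are placeholders I made up, not anyone's real setup:

    import random

    # observed successes/failures per variant (hypothetical counts)
    stats = {"A": {"wins": 0, "losses": 0}, "B": {"wins": 0, "losses": 0}}

    def choose_variant():
        # sample a plausible conversion rate for each variant from its
        # Beta posterior, then serve whichever variant drew the highest
        draws = {
            name: random.betavariate(s["wins"] + 1, s["losses"] + 1)
            for name, s in stats.items()
        }
        return max(draws, key=draws.get)

    def record(name, converted):
        # update that variant's posterior with the observed outcome
        stats[name]["wins" if converted else "losses"] += 1

    # per request: pick a variant, show it, record whether it converted
    variant = choose_variant()
    record(variant, converted=True)

The point is there's no "stopping" step to argue about: traffic shifts toward the better variant on its own as evidence accumulates.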


I'm not sure this solves it, because you have to have a really strong sense of the loss function to pull it off. That's much easier to intuit and use to guide experiments than to actually build into the bandit algo.


> That's much easier to intuit and use to guide experiments than to actually build into the bandit algo.

IDK about your intuition, but for most other people, it gets in the way of statistics.

The "loss function" is just as easy to calculate for A/B tests as for multi-armed bandit. The value of user doing A is $X, the value of B is $Y, and the value of C is $Z.


You gave me some reading to do. :) Thank you.


But it's DATA SCIENCE. You know it's SCIENCE because they called it SCIENCE.



