
I can imagine a number of situations where the implementation is significantly more complex. Ideally, A/B tests should look at relatively small, independent changes, but in practice people are often making much larger ones.

If you are testing the conversion rate in shopping carts, and the change involves a drastic redesign of the flow through the checkout process, that can be a serious technological difference that requires substantial time to implement.

Not every test is as easy as changing the copy on an email.




Even if you're making larger and more complex changes, the overhead of your testing methodology remains the same. That is, how you measure things should be a fixed (and small) effort; the cost of building the thing being tested is whatever it is.

In other words, the choice between a multi-armed bandit and an A/B test isn't something you should decide based on the effort of the testing methodology.
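
As a rough sketch (the variant names and numbers are made up), either assignment strategy is only a handful of lines; the real build cost lives in the variants themselves:

    import random

    def ab_assign(variants):
        # Classic A/B: fixed 50/50 split, analyze results after the test ends.
        return random.choice(variants)

    def epsilon_greedy_assign(variants, conversions, impressions, epsilon=0.1):
        # Multi-armed bandit (epsilon-greedy): explore randomly with
        # probability epsilon, otherwise serve the best-performing arm so far.
        if random.random() < epsilon:
            return random.choice(variants)
        rates = {v: conversions[v] / max(impressions[v], 1) for v in variants}
        return max(rates, key=rates.get)

    # Hypothetical usage: the checkout variants are where the engineering time goes.
    variants = ["old_checkout", "new_checkout"]
    conversions = {"old_checkout": 30, "new_checkout": 42}
    impressions = {"old_checkout": 1000, "new_checkout": 1000}

    print(ab_assign(variants))
    print(epsilon_greedy_assign(variants, conversions, impressions))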


I don't think he was referring to the technology behind the A/B test itself, but rather the technology behind the change that was being made.

That's how I interpreted his statement. I agree with you that the actual A/B testing overhead should be minimal and fairly trivial to put into place.



