Hacker News

Re-reading, I realize that I was as clear as mud here about the two different kinds of interaction.

The first paragraph is talking about random interaction. For instance, suppose version A of test 1 is really good, and version B of test 2 happened to receive more test-1 A users than version A of test 2 did. That gives version B a random boost. As long as assignment is random, it is OK to completely ignore this type of random interaction, even though you are running multiple tests on the same traffic.
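A minimal simulation can show why independent randomization makes this safe to ignore on average. The sketch below (all names and numbers are hypothetical) assigns each user independently to both tests and checks that test 1's variants are balanced across test 2's arms, so any lift from test 1 spills into both arms of test 2 equally:

```python
import random

random.seed(0)

N = 100_000
# Each user is independently randomized into both tests (hypothetical setup).
users = [(random.choice("AB"), random.choice("AB")) for _ in range(N)]

def frac_t1_a(t2_arm):
    """Fraction of test-1 "A" users within the given arm of test 2."""
    arm = [t1 for t1, t2 in users if t2 == t2_arm]
    return sum(1 for v in arm if v == "A") / len(arm)

# Both arms of test 2 see roughly 50% test-1 "A" users, so test 1's
# effect, whatever it is, boosts both arms of test 2 about equally.
print(frac_t1_a("A"), frac_t1_a("B"))
```

Any residual imbalance is just sampling noise, which shrinks as traffic grows and is already accounted for by the usual significance test.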

The second paragraph is talking about non-random interactions. For example, people who are in version A of test 1 and also in version B of test 2 hit a horrible interaction that hurts both of those variants. If you have reason to believe you have causal interactions like this, you can't ignore them; you have to think things through carefully.
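To see why a causal interaction can't be ignored, here is a hypothetical sketch (the conversion rates and variant names are made up): only the (A, B) combination converts badly, yet a naive per-test analysis makes one variant in *each* test look worse, even though neither variant is bad on its own.

```python
import random

random.seed(1)

N = 200_000
# Hypothetical conversion rates for each (test 1, test 2) cell.
# Only the (A, B) combination interacts badly.
RATES = {("A", "A"): 0.10, ("A", "B"): 0.02,
         ("B", "A"): 0.10, ("B", "B"): 0.10}

# Tally (impressions, conversions) per cell.
cells = {}
for _ in range(N):
    key = (random.choice("AB"), random.choice("AB"))
    n, c = cells.get(key, (0, 0))
    cells[key] = (n + 1, c + (random.random() < RATES[key]))

def marginal(test_idx, variant):
    """Naive per-test view: marginal conversion rate of one variant."""
    n = sum(v[0] for k, v in cells.items() if k[test_idx] == variant)
    c = sum(v[1] for k, v in cells.items() if k[test_idx] == variant)
    return c / n

# Test 1's A looks worse than B, and test 2's B looks worse than A,
# purely because of the bad (A, B) cell.
print(marginal(0, "A"), marginal(0, "B"))
print(marginal(1, "A"), marginal(1, "B"))
```

Analyzing each test in isolation here would lead you to ship (B, A) when (A, A) and (B, B) are just as good; breaking the results out by cell is what reveals the interaction.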



Hmm, thank you - that's really very interesting indeed. I'd not thought of doing A/B tests like that. Thank you very much!



