I'm concerned that the assumptions required by those statistical tests aren't being met. As far as I know, significance tests are generally based on the assumption that all samples are independent, and MAB grossly violates that assumption, and I do mean grossly. MAB doesn't pass those tests because it is, in some sense, itself a statistical significance test, and it is "deliberately" not feeding the independence-based significance algorithms the data they expect. That's not a bug; it's a feature.
(Pardon the anthropomorphizing there. It's still sometimes the best way to say something quickly in English.)
In fact, you agree that MAB has a higher conversion rate, which in this context means nothing more and nothing less than that it works better. That there's some measure Q on which it does worse is probably better interpreted as evidence that Q is not a useful measurement, rather than that the thing that works better shouldn't be used for its lack of Q-ness.
Also, I don't see the problem of lack of independence. People are still randomly assigned to one of the conditions in the MAB scheme, people in one sample don't affect people in the other sample, and each person contributes one independent data point. The problem is just one of unequal sample sizes in your two conditions: when you make the comparison (in a one-factor ANOVA, say), you lose a lot of statistical power, because your effective sample size is essentially the harmonic mean of the two sample sizes (which is dominated by the smaller of the two).
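To make the power-loss point concrete, here's a small sketch. For a two-group mean comparison, the standard error of the difference scales like sqrt(1/n1 + 1/n2), which equals sqrt(2/h) where h is the harmonic mean of the two sample sizes, so h acts as an effective per-group n:

```python
# Effective per-group sample size for a two-group comparison:
# the harmonic mean of n1 and n2, which is pulled hard toward
# the smaller of the two.

def harmonic_mean(n1, n2):
    return 2 * n1 * n2 / (n1 + n2)

balanced = harmonic_mean(500, 500)  # 500.0 -- no power lost
skewed = harmonic_mean(950, 50)     # 95.0 -- most of the big arm is wasted
print(balanced, skewed)
```

So a 950/50 split (the kind of allocation a bandit converges toward) has roughly the power of a 95-per-group balanced test, even though 1000 people were observed.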
No, they're only partially randomly assigned to one of the conditions. MAB has a memory of the results of previous runs, which is just another way of saying there's a dependence.
By contrast, randomly allocating X% of your population to cohort C_i depends on nothing but the free parameter X.
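A minimal epsilon-greedy sketch makes the contrast visible (epsilon-greedy is just one common MAB policy; the thread doesn't say which algorithm is in use). The bandit's assignment function takes the running tallies as input, so each visitor's condition depends on earlier visitors' outcomes; the fixed-split function takes only X:

```python
import random

def epsilon_greedy_assign(successes, trials, epsilon=0.1):
    """Pick an arm given the running tallies -- i.e., given history."""
    if random.random() < epsilon:
        return random.randrange(len(trials))        # explore at random
    rates = [s / t if t else 0.0 for s, t in zip(successes, trials)]
    return rates.index(max(rates))                  # exploit the leader

def ab_assign(x=0.5):
    """Fixed split: depends only on the free parameter x, never on results."""
    return 0 if random.random() < x else 1
```

The signature alone is the argument: `epsilon_greedy_assign` cannot even be called without the history of prior results, while `ab_assign` could be run before the experiment starts.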
If the assignments are the source of the problem, you can't just take them as "given" and assume that things are OK.
A simpler example: if I assign people to cohorts based on their age, then the results of my experiment may be "conditionally independent" with respect to (say) eye color, but it probably won't be conditionally independent with respect to income or reading level or pant size. In other words, the assumption of independence is violated with respect to all variables correlated with age.
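The age-cohort example above can be simulated in a few lines. This is a hypothetical toy dataset (the income and age numbers are made up for illustration): cohorts split by age come out balanced on eye color but badly imbalanced on anything correlated with age, such as income:

```python
import random

random.seed(1)

# Toy population: income is correlated with age, eye color is not.
people = []
for _ in range(10_000):
    age = random.randint(18, 70)
    income = 20_000 + 1_000 * age + random.gauss(0, 10_000)
    eye = random.choice(["brown", "blue", "green"])
    people.append((age, income, eye))

# "Assignment by age": split into young and old cohorts.
young = [p for p in people if p[0] < 40]
old = [p for p in people if p[0] >= 40]

def avg(xs):
    return sum(xs) / len(xs)

# The cohorts differ sharply on income, so any income-linked outcome
# is confounded with the assignment variable.
print(avg([p[1] for p in young]), avg([p[1] for p in old]))
```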
With bandit optimization, our bucket-allocation scheme is correlated with time, which makes it nearly impossible to apply conventional statistical tests to any metric that may also be correlated with time. And in web testing, that's nearly everything.
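A hypothetical simulation of that time confound (the drift numbers and the crude allocation rule are invented for illustration, not taken from any real system): both arms have the identical true conversion rate, but that rate drifts upward over time, and an allocator that favors one arm late in the test hands that arm the benefit of the trend:

```python
import random

random.seed(0)
counts = [[0, 0], [0, 0]]  # per-arm [conversions, trials]

for t in range(10_000):
    # The SAME underlying rate for both arms, drifting from 5% to 10%.
    rate = 0.05 + 0.05 * (t / 10_000)
    # Crude stand-in for a bandit that has settled on arm 1 by the
    # halfway point and sends it 90% of the later traffic.
    if t > 5_000 and random.random() < 0.9:
        arm = 1
    else:
        arm = random.randrange(2)
    counts[arm][0] += random.random() < rate
    counts[arm][1] += 1

for arm, (conv, n) in enumerate(counts):
    print(f"arm {arm}: {conv / n:.3f} over {n} trials")
```

Because arm 1 receives disproportionately more of the late (higher-converting) traffic, its pooled conversion rate tends to come out higher even though the arms are identical, which is exactly the artifact a time-blind significance test would misread as a treatment effect.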