The Bayesian approach to A/B testing gives an interesting example of how frequentist and Bayesian approaches can differ.
A frequentist approach tries to limit the probability that a test setup will accept a 'false' result, one that could simply arise by chance.
A Bayesian approach actually calculates the probability that a test result occurred 'by chance'. You can then stop the test at any point and be sure you only accept <x% of results that could have occurred by chance; since that bound is an expectation, you never breach the x% limit no matter how often you 'stop' the test.
The interesting thing is that while these would seem to be very similar, there actually isn't anything stopping the Bayesian approach from eventually accepting any given test, giving it 0 statistical power in the frequentist sense. The only thing the Bayesian approach ensures is that for any 'false' test you accept after time T, there are many more that will keep running.
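To make that concrete, here is a minimal sketch of the kind of Bayesian A/B test with optional stopping being described. The Beta-Bernoulli model, the flat priors, the 5% threshold and all the names are my own assumptions for illustration, not anything specified in the thread:

    import random

    def bayesian_ab_stop(conv_a, conv_b, alpha=0.05, max_n=100_000,
                         check_every=100, draws=5_000):
        """Simulate a Bayesian A/B test that peeks and stops early.

        conv_a, conv_b : true conversion rates used to simulate traffic
        alpha          : accept B once P(B <= A | data) drops below alpha
        Returns (decision, visitors_per_arm).
        """
        succ_a = succ_b = n = 0
        while n < max_n:
            # one simulated visitor per arm
            succ_a += random.random() < conv_a
            succ_b += random.random() < conv_b
            n += 1
            if n % check_every:
                continue
            # Beta(1 + successes, 1 + failures) posteriors for each arm;
            # estimate P(B > A) by Monte Carlo from those posteriors
            wins = 0
            for _ in range(draws):
                pa = random.betavariate(1 + succ_a, 1 + n - succ_a)
                pb = random.betavariate(1 + succ_b, 1 + n - succ_b)
                wins += pb > pa
            p_chance = 1 - wins / draws   # posterior prob. the 'win' is noise
            if p_chance < alpha:
                return "accept B", n
        return "still running", n

Run it with conv_a == conv_b and most runs come back "still running", but some fraction will eventually "accept B" on pure noise, which is the frequentist failure mode being pointed out above.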
The Bayesian stance is that you should not care. The frequentist stance is that a test that has a p-value of 1 is the worst possible.
My stance is that you should know why to care about either. Oh, and the thing you're calculating an expected value of should somehow contribute linearly to your profits/costs; averages do strange things to nonlinear functions.
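A toy illustration of that last point, with a completely made-up quadratic profit curve (the curve and the numbers are assumptions for the sake of the example):

    import random

    # hypothetical: profit per user is quadratic in the measured lift x
    profit = lambda x: 100 * x ** 2

    random.seed(0)
    lifts = [random.gauss(0.10, 0.05) for _ in range(100_000)]

    avg_lift = sum(lifts) / len(lifts)
    avg_profit = sum(profit(x) for x in lifts) / len(lifts)

    print(profit(avg_lift))   # profit at the average lift  (~1.0)
    print(avg_profit)         # average profit              (~1.25)

The profit at the average lift and the average profit disagree precisely because the curve is nonlinear.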
Eh, even an expected value that's linear with respect to profits can end up with strange results, like the St. Petersburg paradox. In general, naively maximizing it breaks down at the point where you stop being indifferent to the possible risks.
If happiness is, e.g., log(money), then you can just adjust the game to have the payout be $2^2^n after the nth step. This cancels out the logarithm and recovers the paradox. The only way to get out of it with diminishing returns is to have happiness reach a finite asymptote.
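Spelling out the sums (the usual formulation: the game ends at round n with probability 1/2^n and pays 2^n, and the inflated variant pays 2^(2^n)), a quick sketch of the partial expected values:

    import math

    # St. Petersburg: expected payout gains 1 per round -> diverges.
    # With log utility, E[log payout] = sum (1/2**n) * n*log(2) -> converges.
    # Inflating the payout to 2**(2**n) makes each log-utility term
    # (1/2**n) * (2**n)*log(2) = log(2) -> diverges again.
    def partial_sums(rounds=40):
        ev_payout = ev_log_std = ev_log_inflated = 0.0
        for n in range(1, rounds + 1):
            p = 0.5 ** n
            ev_payout += p * 2 ** n                        # +1 per round
            ev_log_std += p * (n * math.log(2))            # converges (~1.39)
            ev_log_inflated += p * (2 ** n * math.log(2))  # +log(2) per round
        return ev_payout, ev_log_std, ev_log_inflated

    print(partial_sums(40))   # (40.0, ~1.386, ~27.7)

Log utility tames the original payout but not the inflated one, which keeps growing without bound just like the plain expectation.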
When there is a decent probability that you may crash the financial system, I'm not even sure a strictly monotonically increasing function is appropriate.