
This is not exactly correct. Both methods you mention here are frequentist. The first is called maximum likelihood and the second is hypothesis testing.
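To make the first of those concrete, here is a minimal sketch of the maximum-likelihood estimate for a coin's heads probability (the function name is mine, not from the parent comment): with k heads in n flips, the likelihood p^k (1-p)^(n-k) is maximized at k/n.

```python
# Frequentist MLE for a Bernoulli parameter:
# the likelihood p**k * (1-p)**(n-k) peaks at p = k/n.
def mle_heads_prob(heads: int, flips: int) -> float:
    return heads / flips

print(mle_heads_prob(6, 10))  # 0.6
```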

What a Bayesian does is not ill-defined. On the contrary, what a Bayesian does is two steps: (1) determine a prior, (2) follow the math mechanically. What frequentists do when they do inference is actually much more ill-defined, as it requires hidden assumptions intermixed with a mathematical derivation instead of cleanly separating assumptions from math. In principle the Bayesian way is: given assumptions, simply follow the axioms of probability to their logical conclusions. Of course that may not always be computationally feasible, so approximations sometimes have to be made. Frequentists require divine inspiration to develop a new method for each kind of problem.

The Bayesian way to solve the problem would be to take a prior probability distribution on the probability of heads, and then condition it on the data that the coin came up heads 6 times and tails 4 times. For example, with a uniform prior we get this posterior: http://www.wolframalpha.com/input/?i=1*p%5E6%281-p%29%5E4%2F...
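The same posterior can be computed numerically instead of via Wolfram Alpha. This is a sketch on a grid, assuming the uniform prior described above: the unnormalized posterior is p^6 (1-p)^4, which after normalization is a Beta(7, 5) distribution.

```python
# Posterior under a uniform prior: proportional to p**6 * (1-p)**4,
# i.e. Beta(7, 5) once normalized. Evaluated on a grid over [0, 1].
N = 10001
grid = [i / (N - 1) for i in range(N)]
unnorm = [p**6 * (1 - p)**4 for p in grid]
total = sum(unnorm) / (N - 1)          # simple Riemann-sum normalization
posterior = [u / total for u in unnorm]

# The posterior mode sits at p = 6/10.
mode = grid[max(range(N), key=lambda i: posterior[i])]
print(round(mode, 2))  # 0.6
```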

As you can see, the peak is indeed at p=0.6, but there is probability mass around it as well.

Replace the factors of 1 with your favorite prior to see the effect (I couldn't convince Wolfram Alpha to compute a Beta distribution in place of the 1, so you'll have to try that yourself).
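For the Beta-prior case you don't need Wolfram Alpha at all, since the Beta distribution is conjugate to the Bernoulli likelihood: a Beta(a, b) prior plus 6 heads and 4 tails gives a Beta(a+6, b+4) posterior. A sketch (helper names are mine):

```python
# Conjugate update: Beta(a, b) prior + (heads, tails) data
# -> Beta(a + heads, b + tails) posterior.
def beta_posterior(a: float, b: float, heads: int = 6, tails: int = 4):
    return a + heads, b + tails

def beta_mode(a: float, b: float) -> float:
    # Mode of Beta(a, b), valid for a, b > 1.
    return (a - 1) / (a + b - 2)

# Uniform prior is Beta(1, 1): posterior Beta(7, 5), mode at 0.6.
print(beta_mode(*beta_posterior(1, 1)))          # 0.6
# A prior favoring a fair coin, e.g. Beta(10, 10), pulls the mode toward 0.5.
print(round(beta_mode(*beta_posterior(10, 10)), 3))  # 0.536
```

This makes the effect of the prior explicit: the stronger the prior (larger a + b), the less the 10 observed flips move the posterior.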


