For me, I really appreciate the Bayesian approach because it makes it very explicit that you pick a prior.
Perhaps my experience is limited, but every (supposedly non-Bayesian) model I've used in practice has been possible to re-express using Bayesian terms, priors and beliefs and so on. Then I get to look at the initial assumptions (model/prior) and use suitable human hand-wavey judgement about whether they make sense.
Bayes is a good way to _update_ models, but if you lose sight of the fact that the bottom of your chain of deduction was a hand-wavey guess, you're in trouble.
But you don't, in general, pick a prior. You pick a procedure that has an expected loss under various conditions. It's one player game theory.
If you happen to have a prior, then you can use it to choose a unique procedure that has minimal expected risk for that prior given the loss function, but even so that may not be what you want. For example, you may want a minimax procedure, which may be quite different from the Bayes procedure.
Minimax still requires a probability distribution, which means you need a prior.
Edit: Based on the downvotes, I see my audience is not convinced. I'll repeat an explanation I posted a while ago. Probably should make this a blog post because I see this claim quite often. I'd love to know what book you read it in.
--
In minimax regret, you have a set of available decisions D, and a set of possible states of nature N, and a utility U(D,N). Each state of nature also has a probability P(N) (which can be influenced by the decision too in some problems).
States of nature include "interest rates rise 1%", "interest rates fall 1%", and "interest rates stay the same". Decisions include "invest in stocks" and "invest in bonds".
Minimax regret proposes to ignore the probabilities P(N), instead suggesting a way to make a decision purely based on the utilities of the outcomes. But that is actually an illusion.
Outside of math class word problems, we don't have N or U(D,N) handed to us on a silver platter. There is always an infinite range of possible states of nature, many of which have a probability approaching but never reaching zero, including states such as "win the lottery", "communist revolution", and "unexpected intergalactic nuclear war".
In commonsense decision-making we don't include those states of nature in our decision matrix, because our common sense rules them out as being implausible before we even think about our options. You wouldn't choose to invest in bonds just because stocks have the most regret in the event of a communist takeover.
So what actually happens is we intuitively apply some probability threshold that rules out states of nature falling below it from our consideration. Then we minimize max regret on the remaining "plausibly realistic" states of nature.
Humans are so good at doing probability mentally that this step happens before we even realize it. But if you are writing code that makes decisions, you'll need to do it, and so you'll need to have at least a rough stab at the probability distributions.
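To make this concrete, here is a minimal sketch in Python (decisions, utilities, probabilities and the threshold are all made up for illustration) of minimax regret with and without a plausibility cutoff on the states of nature:

    # Each state of nature gets a rough probability and a utility per decision.
    decisions = ["stocks", "bonds"]
    states = {
        # state: (P(state), {decision: utility})
        "rates rise":         (0.30, {"stocks":   -2, "bonds":   2}),
        "rates fall":         (0.30, {"stocks":    8, "bonds":   2}),
        "rates unchanged":    (0.39, {"stocks":    3, "bonds":   3}),
        "communist takeover": (0.01, {"stocks": -100, "bonds": -20}),
    }

    def minimax_regret(states, decisions, prob_threshold=0.0):
        # Crop away states judged too implausible to matter.
        kept = {s: u for s, (p, u) in states.items() if p >= prob_threshold}
        # Regret of a decision in a state = best utility in that state minus this decision's utility.
        regrets = {d: max(max(u.values()) - u[d] for u in kept.values())
                   for d in decisions}
        return min(regrets, key=regrets.get), regrets

    print(minimax_regret(states, decisions, prob_threshold=0.0))    # fringe state dominates -> bonds
    print(minimax_regret(states, decisions, prob_threshold=0.05))   # "plausibly realistic" states -> stocks

With the cutoff at zero, the max regret of "stocks" is driven entirely by the 1% fringe state; raise the cutoff and the decision flips.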
Conceptually you are right: all mathematical models have assumptions, including assumptions about their scope of applicability.
But you are redefining "prior" to refer to all the assumptions of the model, and not its usual meaning as the prior distribution used in Bayes calculations.
Prior distribution is P(N|I), where I is the background information you have such as "historical interest rates in the USA looked like this", and "communist revolutions occurred 6 times in the 20th century" (made-up number). I is not itself the prior.
For this investing example, it's also the only information we have, unless we're trying to update on something like a central bank announcement. So our probability distribution over N is just the prior distribution.
When you're actually trying to make a decision, and not just solving a problem handed to you in math class, you can't avoid using P(N). You can either say "The minimax procedure requires knowing P(N) as an input, so that it isn't dominated by extremely improbable N", or you can say equivalently that "Minimax doesn't require P(N), but as an assumption of my model I'm ignoring all states of nature N with P(N) < y, then applying minimax regret over the remaining N".
I think we must be coming from two different communities of practice where the words don't quite line up. All the operational things you are saying I agree with. I just put them under different verbal categories.
"There is always an infinite range of possible states of nature"
Well, I think that is definitely and unambiguously false. The universe is not infinite, nor infinitely divisible, as far as we know, and the number of future states of any particular person (or humanity) is even smaller than that of the universe. Limits in time mean limits in space, and limits in space mean limits in particles and possibilities.
I'm not sure I can make a case that it matters, but if it doesn't matter, why say infinite?
I guess this is a tangent. First of all my point really doesn't hinge on the infinity; it can be finite (but really big) but regardless, whenever you apply minmax you must first crop your decision space to a probability threshold, or else you'll make nonsensical decisions based on what gives the best outcome if the sun should happen to explode.
But secondly, I think (although I'd happily concede if convinced otherwise) that the space of possible scenarios really is infinite, even if the observable universe is not. The space I'm talking about is not the actual state space of the universe, which in some interpretations of physics might be finite or even unitary. It spans the space of hypothetical universes that are all consistent with your information with nonzero probability, which I think is probably infinite, but again, if it's not infinite that's a technicality. If you include the states that have zero probability (because why not? GGP was advocating that the probability is irrelevant to minmax decisions) then the space is definitely infinite, because even physically impossible states of nature will impact our decision making.
Another way to conceptualize the "cropping" is to get rid of all future states where planning would have been meaningless anyway.
We are momentary Boltzmann brains? We'll assume not, because if so, nothing really matters.
Trivial difference, but that avoids potentially difficult threshold problems and cousins of the St. Petersburg paradox or even Pascal's mugger, at the risk of being slightly more hand wavy.
Arguably an aesthetic distinction at this point, I generally think your description and approach are right.
I think it's sophistry to pretend we have no better idea of that now than we did pre-Democritus. Thousands of years of science have shown that infinities are always a problem in our heads, with our theories. Does that prove they don't exist? No more than it's proven that the sun will come up tomorrow, I guess.
Just to be clear why this whole conversation thread is a TypeError, let's say I assign a probability of 99.9% to the hypothesis that the state space of the universe is finite, and 0.1% to the state space of the universe being infinite...
... In that case, how big is my hypothesis space about possible states of the universe?
There are plenty of examples of infinities that are not problematic. Infinitely small wavelengths make our current understanding of physics break down, indeed. Or maybe infinitely divisible solids that lead to paradoxes like Banach-Tarski's. On the other hand, infinite-dimensional configuration spaces or continuous parameterization (e.g. coordinates, field strengths, phases) are trivial, inoffensive parts of classical and quantum mechanics.
> Do you agree with Aquinas’ corollary, that absent an actual infinity, there must be some First Cause, which we call God?
Note that even if you agree with Aristotle’s position, which is essentially an arbitrary assumption, and the corollary that there must then be a first cause, there's nothing except being steeped in a particular religious tradition to suggest that the first cause should have any of the other traits of any particular concept of God. It works just as well to take the earliest known thing in the sequence of causes and say “this cause is uncaused”.
Aquinas' argument is that there can't be an actual infinity, so even though it appears that everything has a prior cause, it must be that there is something which is self-causing. "The Big Bang" qua event clearly didn't cause itself (events only cause events that are later in time), so the typical way to cash this out is "the Big Bang" qua set-of-physical-laws is self-causing.
This leads to new problems (why this set of laws vs. some other), unless you posit that the laws are somehow perfect or necessary (which is essentially Deism), but the laws of our universe seem to be contingent (lots of unexplained physical constants).
You can make a metaphysically plausible case of a Big Bang-Big Crunch cycle that goes on forever, but then you're back to believing in an actual infinity.
> This leads to new problems (why this set of laws vs. some other), unless you posit that the laws are somehow perfect or necessary
Those problems are only problems with the aesthetic preference that the universe be perfect or necessary. Once you accept that the universe can be without adhering to any such aesthetic preference, they cease to be problems.
So then the universe existing is a brute fact with no cause (as opposed to being self-causing).
You can do that but once you say there are facts without causes it's hard to know what you're signing up for. Why is this the brute fact and not something else? By definition, there is no answer (no cause) for that question. Uh okay, but if brute facts are possible, how can we do science at all? For all we know, we're just surrounded by brute facts and attempts to systematize facts into theories are just a waste of time because a new brute fact can just come along and bite you in the ass tomorrow. But I thought we only posited brute facts because science was pushing us in that direction by showing us that there was a Big Bang, but now suddenly we're told "science is only contingently possible and sometimes just fails entirely due to the existence of brute facts".
It's not a satisfying intellectual stance, and if you really poke at it, it just feels like motivated reasoning in which the conclusion (there is no God) is leading the premises (some facts have no causes), not the other way around.
Heh, this conversation sounds like one I once had in university.
Here's the secret: Causality itself, that is, the notion that things have causes and A-->B (A causes B), is a concept that only makes sense within a system that has causal laws of physics and, in particular, a notion of ordering such as time. It applies to "event"-type objects such as A and B.
Since we humans live within a universe governed by causal physics, with a sense of order given by time and entropy, we observe events always having causes. But this is a property of events within our universe.
To ask whether the universe itself has a cause is a reasonable question, but to assert that it must have one, due to causality, is another TypeError. Universes, as a class of objects, are not governed by the same laws of physics as things within a universe. Time itself is in a sense a member variable of our particular universe, remember. So universes are not subject to causality any more than they're subject to gravity. Causality and gravity both apply to things within universes.
It might be that there are other laws that govern the formation and structure of universes. But we won't be able to infer very much about them by performing experiments within our universe.
To help you visualize this concept, think about a cellular automaton like Conway's game of life. That game has particular laws of physics, and can run on a PC. The evolution of the game state, though, is not closely coupled with the PC's environment. The PC can pause the game, or run it at 100x speed, or run it backward (if it has reversible laws; Conway's doesn't), but from the in-game perspective, it wouldn't be noticeable. Within the game, it would perhaps be possible to perform experiments to discover the governing rules of the cellular automaton, but there aren't really many experiments one could do within the game to learn about how the PC works.
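For concreteness, the "laws of physics" of that toy universe fit in a few lines of Python, and nothing inside the grid can tell whether the machine running them pauses, saves a state, or replays it:

    from collections import Counter

    def life_step(live_cells):
        """One generation of Conway's Game of Life; live_cells is a set of (x, y)."""
        neighbour_counts = Counter(
            (x + dx, y + dy)
            for (x, y) in live_cells
            for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)
        )
        # A cell is alive next step if it has 3 live neighbours, or 2 and is already alive.
        return {cell for cell, n in neighbour_counts.items()
                if n == 3 or (n == 2 and cell in live_cells)}

    blinker = {(0, 0), (1, 0), (2, 0)}
    print(life_step(life_step(blinker)) == blinker)  # True: a period-2 oscillator, whatever the host machine does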
You can even build a Turing machine within the game, and have it run another kind of program. There wouldn't be a way for an AI within that program to tell that the Turing machine running it is implemented as a cellular automaton, as opposed to any other Turing machine, let alone to discover the PC at the upper level.
There's no reason to think that the same laws of physics apply at higher abstraction layers, and so it's entirely possible that our universe has no cause, because causality itself is an in-universe concept.
In my analogy, God (if used as a meaningful term with all the cultural baggage that it carries) maps much more closely to John Horton Conway than to either the universe or the PC. I'm not asserting that Conway does or doesn't exist, merely that a Game of Life does not necessarily imply a Conway.
> You can do that but once you say there are facts without causes it's hard to know what you're signing up for.
Once you start inventing unjustified entities to satisfy an aesthetic preference for things (except the invented entities!) to have causes, you know what you are signing up for—a perception of reality driven by your desires rather than justified belief.
That the universe exists is a fact. The question of whether the existence of the universe has a cause may not be answerable, and there's a pretty good argument that asking what the cause of the universe's existence is, is itself as incoherent as asking what the color of 1+1 is. To assert anything as a prior cause of the universe is to assert an entity outside of the universe, which is just equivocation because the “universe” in the question is the sum total of all existence.
> Uh okay, but if brute facts are possible, how can we do science at all?
Quite easily.
> For all we know, we're just surrounded by brute facts and attempts to systemize facts into theories is just a waste of time because a new brute fact can just come along and bite you in the ass tomorrow.
Of course, that's the fundamental nature of science. It's always contingent, but we build on what has observed predictive utility, because if there are any systematic rules, that's the only even loosely objective way to discern them. Accepting that scientific knowledge is inherently contingent doesn't prevent doing science.
> It's not a satisfying intellectual stance
Satisfaction is subjective; clearly, it doesn't appeal to your aesthetic preferences.
> and if you really poke at it, it just feels like motivated reasoning in which the conclusion (there is no God)
That's not the conclusion. Rejecting a particular argument for the necessity of a First Cause (which, while it gets abused as one, wouldn't be an argument for the necessity of anything much like the image of God it is used to justify even if it was valid on its own terms) isn't the same as denying the existence of God (I'm, as it turns out, a Catholic who quite firmly believes in God, so the distinction is not merely theoretical.)
> is leading the premises (some facts have no causes),
That's not the premise, either. Rejecting as unwarranted the assertion that all facts must have causes isn't asserting the existence of uncaused facts. (Though the argument from First Cause is asserting the existence of uncaused facts, so it's kind of odd for someone defending that particular God-as-brute-fact argument to mock the—imagined, but not actually real—premise of others that brute facts exist.)
Of course, St. Thomas Aquinas’ argument (and the similar though different previous effort at proving the logical necessity of God by St. Anselm) are the actual motivated reasoning in the debate, not the rejection of those arguments.
We already believe from quantum mechanics that there are random events that have no prior cause going on all around us. Even creation ex nihilo seems to be going on constantly at the smallest scale, with virtual particles popping in and out of existence governed only by laws which ensure the conservation of energy.
So, whether correct or not, our scientific understanding already posits effects without causes. Ascribing no cause to the big bang itself is then not an extra assumption, so no contorted logic is necessary to get rid of the First Causer.
And related to your description about brute facts - while perhaps unsatisfying, I believe that it is how many scientists do perceive the world. We have a set of observations and we try to come up with the simplest set of laws that describe these observations, and test their predictive power on new facts as they come along. Sometimes, we discover that our set of laws had some hidden assumption that we were not aware of, such as the surprising fact that the speed of light is constant when measured from moving vehicles, leading to the realization that Newton's laws of motion only hold for small enough speeds, and the need for special relativity to accurately describe what happens at higher speeds.
Even today, we have clear, well known gaps in our scientific understanding: the standard model only applies for matter at certain energy levels; quantum mechanics only applies at certain scales and can't take into account gravitational effects; we have mathematical singularities that come up when trying to describe black holes, which are unlikely to be physically correct; and we don't know what most of the matter in the universe is made of; and I could go on.
Scientific understanding is known to be contingent and any day a new unexpected fact could be observed, toppling our understanding of the laws of physics. That is a well known and inescapable fact.
All of this is not to mention that it still seems disingenuous to call the posited First Cause 'God', bringing to mind YHWH. I would be more inclined to accept the idea that there is some kind of transcendental First Cause (though, again, I don't think that is a particularly necessary concept), but you would need many more arguments to go from that to any particular conception of god.
Also, all of this discussion relies on certain assumptions about the real world and our ability to perceive it. For example, physics is also entirely compatible with the Hindu notion of Maya, that the world is an elaborate illusion, with everything we perceive actually being like drops of water in the ocean that is God, Brahman. Not to mention that even our understanding of logic is contingent. Perhaps there is some limitation of our biological brains that prevents us from seeing some fundamental flaws in our arguments.
All in all, my point is that it is not possible to obtain true certainty beyond any possible doubt on any topic. The best we can do is choose some base assumptions we believe in, and try to see what we can understand of the world starting from those. And if you choose God as one of those base assumptions, that is perfectly legitimate, and you can get a coherent model of the world that includes it. But you can't convince someone who has chosen pure empiricism as their base assumption of the necessity of this base assumption.
It is confusing to me that there is talk of Bayesian statistics vs. frequentist statistics when both are often used in conjunction. The classic example of a medical test with false positives and false negatives and the prior being incidence in the general population comes to mind. To me that is not just an example of Bayes, but a combination of frequentist statistics with Bayes's theorem.
I also seem to recall that Bayes's theorem appears in a standard first year probability and statistics course.
Bayesian statistics: the parameters you want to infer are modeled as random variables with a non-empirical prior, and Bayes' rule is used to draw inferences.
Frequentist statistics: you construct estimators for the parameters you care about, subject to appropriate loss/risk criteria, but without any explicit "prior knowledge".
Frequentist statistics with Bayes' theorem: you use available empirical data, plus some exponential-family distribution, to construct an informed prior, then use Bayes' rule to update the prior on evidence. You use this Bayesian approach only for unobservable hypotheses, rather than for parameters which can be estimated.
Machine learning: you stack lots and lots of polynomial regressors onto each other and train them with a loss function until they predict well on the validation set.
A more charitable take on machine learning: you decide that your criterion is predictive accuracy, and you evaluate it on a holdout set (or you cross-validate).
The idea of evaluation on a holdout set is actually frequentist: it's equivalent to "I really want my model to work well on the true distribution, but that's unknown, so I shall approximate it by the empirical distribution of the data." The empirical distribution is the maximum likelihood fit to the data, if you allow yourself the entire space of distributions.
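As a minimal sketch of that holdout idea (the dataset and the ridge model are arbitrary placeholders here; any sklearn estimator would do):

    from sklearn.datasets import load_diabetes
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import train_test_split

    X, y = load_diabetes(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

    model = Ridge(alpha=1.0).fit(X_train, y_train)
    # Score on data the model never saw: the empirical stand-in for the "true distribution".
    print(model.score(X_test, y_test))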
Compare to how Bayesians do model selection... I've seen several versions:
-- "I have a prior on the set of models, and I compute the model evidence using Bayesian principles, and thereby update my beliefs about the set of models." (This is a clean principled approach. Shame no one does it!)
-- "I compute model evidence using Bayesian principles. The model with the largest evidence is my favoured model." (This is nonsense.)
-- "I compute model evidence. I then use gradient descent to find the hyperparameter values that maximize evidence." This is what is done by all sorts of "Bayesian" frameworks, such as the Gaussian Process models in sklearn. (This is classic frequentism, but for some strange reason Bayesians claim it as their own.)
I certainly wouldn't argue that "predictive accuracy" is the be-all and end-all of modelling -- but it is a nice clean principled approach to model selection. I have honestly never seen a Bayesian who takes a principled approach to model selection.
> A more charitable take on machine learning: you decide that your criterion is predictive accuracy, and you evaluate it on a holdout set (or you cross-validate).
I'm doing a PhD in machine learning, so I'm quite aware. But it's Bayesian machine learning!
Bayesian statistics is sometimes called subjectivist statistics. Probability in Bayesian statistics reflects your degree of belief in some potential outcome.
If you conduct an experiment, you use Bayes’ theorem to update your degree of belief, which is now conditional on the outcome of your experiment.
By quantifying your degree of belief in a prior, you give yourself some starting point (rather than just assuming 0 probability), even if that prior is only an educated guess and not some well researched position. This can be good because you might not have done the research yet.
Yeah, no thanks though. I don't want every rando adding "priors" that "feel" right to their analysis. Frequentism is straightforward. Both can be (and are) abused to support a bias.
The difference between a frequentist and a Bayesian is that the latter admits that he picks a prior. A frequentist smushes together (1) the statistical assumptions (2) the approximations that make the problem computationally tractable and (3) the mathematical derivations, into one big mess. Just because you're not stating your assumptions doesn't mean there are none. Consider maximum likelihood estimation. It is not invariant under coordinate transformations. So which coordinates you pick is an assumption. In fact, with Bayesian estimation you can do the same thing: picking a prior is equivalent to picking the uniform prior in a different coordinate system. So frequentist estimation does involve picking a prior by picking a coordinate system, even if the frequentist does not admit this.
Frequentist methods are conceptually anything but straightforward. The advantage of frequentist methods is that they are computationally tractable. Usually they are best understood as approximations to Bayesian methods. For instance, MLE can be viewed as the variational approximation to Bayes where the family of probability distributions is the family of point masses, and the prior is uniform.
Indeed, it is the argmax of the likelihood, but the likelihood is not invariant under coordinate transformations. The quantity p(x)dx is invariant, not p(x). By picking a suitable coordinate transformation you can put the MLE on any value where the likelihood is not zero.
MLE is not invariant under parameter transformations because it's just the argmax of the likelihood!
Take for example x~normal and exp(x)~lognormal. The maximum of the distribution is at mu for the former and at exp(mu-sigma^2) for the latter, instead of exp(mu).
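Spelling that step out: if X ~ N(mu, sigma^2) and Y = exp(X), the transformed density and its mode are

    f_Y(y) = \frac{1}{y\,\sigma\sqrt{2\pi}} \exp\!\left(-\frac{(\ln y - \mu)^2}{2\sigma^2}\right),
    \qquad
    \frac{d}{dy}\ln f_Y(y) = -\frac{1}{y} - \frac{\ln y - \mu}{\sigma^2 y} = 0
    \;\Rightarrow\;
    y^\ast = e^{\mu - \sigma^2},

so the argmax of the transformed density sits a factor of e^{-sigma^2} below the naive exp(mu).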
Adding to the other comments, you still have prior-dependence on a more subtle level, because it depends on what hypotheses are allowed.
Here's an extreme example. Consider flipping an apparently fair coin and getting "THHT". The hypothesis that the coin is fair gives this result with likelihood 1/16. The hypothesis that a worldwide government conspiracy has been formed with the sole purpose of ensuring this result... has a likelihood of 1.
But nobody would ever declare this the MLE, because "government conspiracy" isn't one of the allowed options. But it isn't precisely because it's unlikely, i.e. because of your prior. Of course this is an extreme example, but there are more innocuous prior-based assumptions baked in too.
Wait, in frequentist statistics getting, say, a p-value of 1 is not a bad thing--unless you erroneously assume that value is evidence for your null hypothesis.
Consider that if your data generating process really is a fair coin, then the conspiracy outcome you mention only occurs 1 out of 16 times, so 15 out of 16 times you observe a likelihood of 0. 15 out of 16 times you reject the conspiracy case.
There is also a tricky component here, because the notion of sample size is not clearly defined (can we generate multiple 4-tuples of flips, and consider each one a sample? Is your example really just a funky way of discussing type II power?)
> Wait, in frequentist statistics getting, say, a p-value of 1 is not a bad thing--unless you erroneously assume that value is evidence for your null hypothesis.
That's exactly what I'm saying. Suppose you get HHTHT. Then you run the following statistical test:
Hypothesis: a government conspiracy has been hatched to make you get HHTHT.
Null hypothesis: this is not the case.
The p-value is 1/32, so the null hypothesis is rejected.
This is bad reasoning for two reasons: first the alternative hypothesis is incredibly unlikely, and second the choice of alternative hypothesis has been rigged after seeing the data. These are exactly the two reasons so many social science studies running on frequentist stats have done terribly, and why we would benefit from Bayesian stats which force you to make these issues explicit.
> The p-value is 1/32, so the null hypothesis is rejected.
No, the p-value is defined as the likelihood of a result at least as extreme as the one we obtained, under the null hypothesis. It's not simply the likelihood of the particular result you obtained, as that would always be zero for continuous quantities! (Remember that the p-value's distribution is uniform over the 0-1 interval under the null, so any criticism that says the p-value is almost always small just by chance must be wrong somewhere).
So first you need to establish a way to say what result is how extreme. This is very often trivial and quite objective (the more people cured/made sick, the more extreme the effect of the drug). For the coin flip case, one way would be to call results with a more imbalanced ratio more extreme. Then in your 3 heads out of 5 case, the (one sided) p-value would be the likelihood of getting 3, 4 or 5 heads out of 5. You can also come up with a different way to define what "more extreme" means (and put it forward in a convincing way); otherwise you just can't talk about p-values. You can keep talking about likelihoods, but not p-values.
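In code, that 3-out-of-5 calculation is just the binomial tail (scipy used purely for illustration):

    from scipy.stats import binom

    n, k, p_fair = 5, 3, 0.5
    # One-sided p-value: probability of k or more heads under the fair-coin null.
    p_value = binom.sf(k - 1, n, p_fair)   # P(X >= 3) = (10 + 5 + 1) / 32
    print(p_value)                         # 0.5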
> No, the p-value is defined as the likelihood of a result at least as extreme as the one we obtained, under the null hypothesis.
Define for me in an objective way what "at least as extreme" is. Let's say I think the string "HHTHT" is extremely indicative of conspiracy. Then the p-value is 1/32 on the measure of "strings of coin flips at least this extremely indicative of conspiracy".
See, this sounds completely ridiculous, but it's not in principle any different from what is done in thousands of social science papers a year. All these supposedly objective procedures have tons of ambiguity. For example:
> For the coin flip case, one way would be to call results with more imbalanced ratio more extreme.
Why an imbalanced total ratio? Why not the average length of runs of heads? Average number of occurrences of "HT"? Frequency of alternations between H and T? Average fraction of times H appears counting only even tosses? Given the combinatorial explosion of possible criteria, I guarantee you I can find a simple-sounding criterion on which any desired string of fair tosses gets a low p-value.
> Why an imbalanced total ratio? Why not the average length of runs of heads? Average number of occurrences of "HT"? Frequency of alternations between H and T? Average fraction of times H appears counting only even tosses? Given the combinatorial explosion of possible criteria, I guarantee you I can find a simple-sounding criterion on which any desired string of fair tosses gets a low p-value.
Sure you can p-hack and people definitely do it. Still, good papers argue for any unconventional choice of what they mean by extreme.
> Let's say I think the string "HHTHT" is extremely indicative of conspiracy.
Then I as your peer-reviewer will say I require more justification for your premise. Usually what counts as more extreme is not up to each paper to define, but depends on the conventions of a field that were agreed upon by domain-level reasoning, so you don't always have so many degrees of freedom left (but still have some, that's why p-hacking is a hot topic.)
Again, you're arguing against p-hacking: coming up with your criterion for what counts as extreme after looking at your observation.
Indeed if we assume no p-hacking, things look much nicer. If for some reason you've for years argued on YouTube that there's a conspiracy to make the 5 coin tosses that person X will perform on live TV on such and such a date come up HHTHT, and then it actually does come up HHTHT on live TV, then I think it's fair to say we can reject the null hypothesis at the level of p=1/32. It doesn't mean we absolutely for eternity have rejected it, but I guess it's worth accepting a paper about your analysis and discussion (taking the analogy back to science). We're already accepting a 5% false positive ratio anyway.
>Define for me in an objective way what "at least as extreme" is.
Come up with some one-dimensional test statistic T whose distribution D you know under your null hypothesis. Define a one-sided p value for observed data x as P(T <= T(x)) under D.
It sounds like your statistic is 0 if the sequence is always "HHTHT" and 1 otherwise? In this case your p value is 1 unless every attempt is "HHTHT", in which case it's (1/32)^k, because under the null the test statistic is 0 with probability (1/32)^k for k attempts. The more attempts you do, the smaller p gets if the null is false. It's working as intended. For this test, a threshold of p=0.05 would be dumb, but it's always dumb.
It's not an awful test assuming you came up with your test statistic and "HHTHT" before collecting your data. It meshes with the intuition of betting your friend "Hey I bet if you flip this coin you'll get HHTHT." If they proceed to flip it and see HHTHT, they are reasonable to think maybe you know something they don't.
If you come up with your test statistic after the fact, there's theory around p hacking to formalize the intuition of why it's not convincing to watch your friend flip some sequence of coins and then tell them "dude, I totally knew it was going to be that" after the fact.
A more general method is to use the likelihood ratio, i.e. the ratio of the likelihood of an outcome under the alternative hypothesis to its likelihood under the null hypothesis. And then pick the outcomes for which this ratio is highest as the ones which will cause you to reject the null hypothesis. Equivalently, the p-value is the probability under the null hypothesis that the likelihood ratio would be at least this large.
This works in the discrete case too, and gives p=1/32 in the original coin flip case.
Is the likelihood ratio test more general? I thought that one of the benefits of the usual NHST framework was that you only need the distribution of your stat under the null. With LRT don't you need the distribution under both the null and the alternative? How do you frame a null of mu = 0 against an alternative of mu != 0 with x ~ D_mu in this way?
You don't necessarily need the distribution under the alternative to determine the values for which the likelihood ratio will be highest. In your example, the tails will be the regions where the likelihood ratio is highest for any (symmetric) alternative.
It's a strawman to always posit frequentists as unthinking blobs of meat who don't consider the credibility of the alternate hypothesis. In fact, many experimental scientists, physicists, biologists etc. made discoveries using frequentist techniques that didn't rely on bogeyman notions of "want to bet the sun just burned out because you're in a closet" nonsense.
I'm a physicist that uses frequentist statistics, and it works fine. However, it can't be denied that some fields misuse it, through precisely the failure modes I pointed out.
What? Can you put in probabilistic terms what "this is not the case" is?
There are an infinite number of models where p(HHTHT | model) != 1, or where p(HHTHT | model) = 0. We need to know which one you're referring to, in order to calculate a p-value.
I think you have made a serious error by believing you can simply "reverse" the model p(HHTHT | conspiracy model) = 1, p(everything else | conspiracy model) = 0.
If the null hypothesis is a fair flip, then the alternative can't be a conspiracy, because the null and alternative need to be complementary statements. So if the null is fair flip, then the alternative is "not fair flip".
The exact point I am making is that all of this is totally up to the researcher. This is the standard methodology in social science: yes, in theory a low p-value does nothing but support the complement of a fairly bland null hypothesis. But in reality that's not what people do. Instead any low p-value is taken as proof of an extremely specific alternative hypothesis.
>The p-value is 1/32, so the null hypothesis is rejected.
This is incomplete. You need to define a test statistic and know its distribution under your null hypothesis before you can come up with a p value. What's your test statistic here and how is it distributed?
If you define your test after seeing the data, of course you can come up with an arbitrary p value. Choosing a distribution for your null to make it fit an agenda is just like choosing a distribution for your prior after seeing your data to make it fit an agenda.
You could say your prior is a delta function around HHTHT after observing it and get arbitrary evidence, but anyone reading your paper will find it unconvincing, just like anyone reading about a test statistic like this will find it unconvincing.
Your mistake here is in saying that because the p-value is 1/32 you reject the null hypothesis. You just decided to do that with utterly no justification. There is a problem with people unthinkingly deciding that a p-value of .05 is reasonable in most situations, but that is not actually an issue with frequentist statistics any more than people starting out with bizarre priors would be a problem with Bayesian statistics.
Not sure I follow? The hypothesis that the result you see is the result of a worldwide government conspiracy is 100% supported by every result that you see. Because it is 100% consistent with the data, a statistical analysis will tell you exactly that--that it is 100% consistent with the data.
Again: priors can be and are used to mislead. Both methods can be and are used to mislead. Just moving to Bayes doesn't mean the finding is suddenly free of bias.
It doesn't. But the workflow of Bayes forces you to be explicit. If you try to cook the books, it will be there for the world to see. Can you provide a paper that quoted a p value for a regression and also validated that the asymptotic conditions are close to being true, so that the p value is even somewhat reliable?
If anything, Bayes increases complexity because of the infinite variety of priors that can be chosen. Frequentism is more straightforward because of the removal of this bias. A constant prior along with _actually reading the study/paper_ generally is sufficient. It doesn't preclude future testing. If anything, big discoveries in science require big scrutiny. Bayes does not add anything but complexity and another lever to tune in this regard.
Frequentist has a prior also though. The uniform distribution. In a sense this might be more biased as it doesn't always accurately describe the situation.
Either way I believe the effect of a prior diminishes pretty quickly as you acquire more data. It's only a factor if you have an extremely small set of data.
The uniform distribution ("flat") prior lets you interpret a maximum likelihood result as a maximum-a-posteriori (MAP) Bayesian point-estimate (implying a 0-or-1 loss function). One could argue that if you refrain from doing this and just stick to a literal application of the likelihood principle, you're not really depending on a flat prior.
For that matter, what is a "flat" prior over the parameters also depends on what parameterization you're using. Results that are 'intuitive' under one parameterization may not be under a different one.
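A small numeric illustration of that point, with made-up counts (8 successes in 18 trials): a "flat" prior on p and a "flat" prior on logit(p) give different MAP estimates for the same data.

    import numpy as np

    k, n = 8, 18
    p = np.linspace(1e-6, 1 - 1e-6, 1_000_000)
    log_lik = k * np.log(p) + (n - k) * np.log(1 - p)

    # Flat prior on p: the posterior mode is the MLE, k/n.
    print(p[np.argmax(log_lik)])                      # ~0.4444

    # Flat prior on theta = logit(p): the Jacobian 1/(p(1-p)) reappears as a prior
    # density on p, and the mode shifts to (k-1)/(n-2).
    log_post = log_lik - np.log(p) - np.log(1 - p)
    print(p[np.argmax(log_post)])                     # ~0.4375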
> Frequentist has a prior also though. The uniform distribution.
No. Experimental design affects frequentist conclusions in a way that is inconsistent with _ANY_ prior.
Here is a real life example. My aunt and uncle had 7 children. 6 boys and one girl. Were they biased towards having one gender over another? If the null hypothesis is that they aren't, the p-value that you get is easily calculated as 16/2^7 = 1/8 = 0.125. (There is 1 arrangement of 7 girls, 7 of 6 girls and a boy, 7 of 6 boys and a girl, and 1 of 7 boys for 16 equally likely arrangements.)
If I add the fact that they planned to have children until they had a boy and a girl, then that changes the p-value. In fact there are only 4 ways that their first 7 children can come out to give evidence this strong against the null. So the p-value is now 4/2^7 = 1/32 = 0.03125.
However a Bayesian looks at this and says that no matter what prior you pick, the knowledge that they planned to have children until they had both a boy and a girl does not affect your posterior conclusion. It literally has nowhere to go in the formula and can't make a difference.
Therefore the frequentist's differing conclusions are not consistent with ANY prior, implicit or not.
> a Bayesian looks at this and says that no matter what prior you pick, the knowledge that they planned to have children until they had both a boy and a girl does not affect your posterior conclusion
A Bayesian would say no such thing. A Bayesian would agree that the knowledge that they planned to have children until they had both a boy and a girl doesn't affect your prior: you still are picking how much probability mass you allocate to all of the possible odds of having a boy vs. a girl, and the couple's plans don't affect that.
However, a Bayesian would also say that the knowledge that they planned to have children until they had both a boy and a girl significantly changes the likelihood ratio (or p-value, if you prefer to use that) associated with the observed data. And one of the advantages of Bayesianism is that it forces you to make that explicit as well.
Notice, for example, that when you calculated the first p-value of 1/8, you implicitly assumed that the couple's plan was "have 7 children, no matter what gender each of them is". The sample space is therefore all possible arrangements of 7 children by gender, and the p-value is 1/8, as you say.
But when you calculated the second p-value of 1/32, while you did change the count of arrangements, you failed to recognize that the sample space changed! Now the possibilities are not just all possible arrangements of 7 children (which is what you used), but all possible arrangements of up to 7 children (because the "stop condition" now is not when there are 7 children total, but when there is at least one child of each gender, and that could have happened at a number of children less than 7). So the correct p-value is not 4/2^7, but 4/(2^7 + 2^6 + 2^5 + 2^4 + 2^3 + 2^2) = 4/(2^8 - 4) = 1/63. A Bayesian, who has to calculate the p-value starting from the hypothesis, not the data, would not make that mistake.
And Bayesianism does something else too: it forces you to recognize that the p-value is not actually the answer to the question you were asking! By the p-value criterion, at least with the typical threshold of 0.05, the null hypothesis (that your aunt and uncle are not biased towards having one gender) is rejected. But a Bayesian recognizes that the prior probability of the gender ratio, based on abundant previous evidence, is strongly peaked around 50-50, much more strongly peaked than data with a bias equivalent to a p-value of 1/63 can overcome. So the Bayesian is quite ready to accept that your aunt and uncle had no actual bias towards having boys, they just happened to be one of the statistical outliers that are to be expected given the huge number of humans who have children.
> the sample space changed! Now the possibilities are not just all possible arrangements of 7 children, but all possible arrangements of up to 7 children [...]
> the "stop condition" now is not when there are 7 children total
Your answer makes no sense to me. If you consider the space of possible combinations that can lead to having a boy and a girl, why do you stop at seven children? Why consider five boys and one girl but reject seven boys and one girl? Both of them are end cases that could be reached.
Yes, I was posting in a rush and was being sloppy. Here's a more detailed calculation.
The process involved is that the couple continues to have children until they have at least one of each gender. If we assume that at each birth there is a probability p of having a boy (as I noted in my response to btilly elsewhere, the Bayesian prior would actually be a distribution for p, not a point value, but I'll ignore that here for simplicity), then the process can be modeled as a branching tree something like this:
Child #1:
boy -> p;
girl -> 1 - p
Child #2:
boy - boy -> p^2;
boy - girl -> p(1 - p) : STOP;
girl - boy -> (1 - p)p : STOP;
girl - girl -> (1 - p)^2
So we have a probability of 2p(1 - p) of stopping at child 2.
Child #3:
boy - boy - boy -> p^3;
boy - boy - girl -> p^2(1 - p) : STOP;
girl - girl - boy -> p(1 - p)^2 : STOP;
girl - girl - girl -> (1 - p)^3
So we have a probability of p^2(1 - p) + p(1 - p)^2 of stopping at child 3 (the sequences above already account for not stopping at child 2, since their first two children have the same gender).
By a similar process we can carry out the tree for as many children as we want. For the case p = 1/2, which was the case I was considering, all of these expressions for the probability of stopping at child #N (for N > 1) simplify to 1 / 2^(N - 1). So the probability of stopping at or before child #N is the sum of those probabilities from 2 to N; for N = 7 that is 1/2 + 1/4 + 1/8 + 1/16 + 1/32 + 1/64 = 63/64. That is close enough to 1 that I ignored cases with more than 7 children; but for a more exact calculation you could add an extra 1/64 to the denominator used to calculate the likelihood (or p-value) of the specific case that was actually observed, to allow for the cases with more than 7 children.
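If it helps, a quick simulation of that stopping rule (p = 1/2; the trial count is picked arbitrarily) reproduces both the stop-at-child-N probabilities and the probability of the specific sequence that was observed:

    import random

    def family(p=0.5):
        children = ""
        while not ("B" in children and "G" in children):
            children += "B" if random.random() < p else "G"
        return children

    trials = 200_000
    families = [family() for _ in range(trials)]

    for n in range(2, 8):
        print(n, sum(len(f) == n for f in families) / trials)  # ~ 1/2^(n-1)

    print(sum(f == "BBBBBBG" for f in families) / trials)       # ~ 1/2^7 = 0.0078, same as under "have exactly 7 children"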
In Bayes' formula, the absolute probability of the observed outcome does not matter. What matters is the ratio of the observed outcome for a given p to the probability under your prior.
The structure of what might have happened does not affect those ratios. Only what was observed does.
> The structure of what might have happened does not affect those ratios. Only what was observed does.
This is true in the sense that you only compute conditional probabilities for the data that was actually observed, not data that could have been observed but wasn't.
However, there's more to it than that; I'll post upthread in a moment with more detail.
> a Bayesian would also say that the knowledge that they planned to have children until they had both a boy and a girl significantly changes the likelihood ratio (or p-value, if you prefer to use that) associated with the observed data.
Actually, on going back and reviewing Jaynes' Probability Theory (section 6.9.1 in particular), I was wrong here, because in this particular case, the parents' choice of process does not affect the likelihood ratio for the data. So btilly was correct that Bayesian reasoning gives the same posterior distribution for p (the probability of a given child being a boy) for the two data sets. However, in fact, this is not a problem with Bayesian reasoning: it's a problem with frequentist reasoning! In other words, the frequentist argument that the change in the parents' choice of process does affect the inferences we can validly draw from the data, because the p-value changed, is wrong. The Bayesian viewpoint is the one that gives the correct answer.
Here is an argument for why. The underlying assumption in all of our discussion is that, whatever the value of p is, it is the same for all births: in other words, any given birth is independent of all the others in terms of the chance of the child being a boy. And that assumption, all by itself, is enough to show that the parent's choice of process does not matter as far as inferences from identical outcome data is concerned: it can't matter, because the parents' choice of process does not affect p, i.e., it does not affect the underlying fact that each birth is independent of all the others. And as long as each birth is independent of all the others, then the only relevant properties of the data are the total number of children and the number of boys. Nothing else matters. In particular, the p-value, which requires you to look, not just at the relative proportion of boys and girls in the data, but at how "extreme" that proportion is in the overall sample space (since the p-value is the probability that a result "at least that extreme" could be obtained by chance), does not matter.
Here is another way of looking at it. We are analyzing the same data in two different ways based on two different processes for the parents to decide when they will stop having children. This is equivalent to analyzing two different couples, each of whom uses one of the two processes, and whose data is the same (they both have, in order, six boys and one girl). The claim that the different p-values are relevant is then equivalent to the claim that the data from the two couples is being drawn from different underlying distributions. However, these "distributions" are only meaningful if they correspond to something that is actually relevant to the hypothesis being tested. In this case, that would mean that the couple's intentions regarding how they will decide when to stop having children would have to somehow affect p, since the hypothesis we are testing is a hypothesis about p. But they don't. So the two couples are not part of different distributions in any sense that actually matters for this problem, and hence the different p-values we calculate on the basis of those different distributions should not affect how we weigh the data.
In fact, we can even turn this around. Suppose we decide to test the hypothesis that the parents' choice of process does affect p. How would we do that? Well, we would look at couples who were using different processes, and compare the data they produce, expecting to find variation in the data that correlates to the variation in the process. But in this case, the data is the same for two different choices of process--which means that the data is actually evidence against the hypothesis that the choice of process affects p!
Note that this is not a general claim that other information never matters. It is only a specific claim that, in this particular case, other information doesn't matter. It doesn't matter in this case because of the independence property I described above--the fact that every birth is an independent event with the same value of p, unaffected by the variable that differs between the couples (the choice of process). In hypothetical scenarios where the births were not independent, then other information would be relevant; for example, we might want to consider a hypothesis that the age of the parents affected p. A Bayesian would model this by not treating p as a single variable with some assumed prior distribution, but as a function of other variables, which would need to be present in the data (for example, we would have to record the ages of the parents).
How does all this square with the fact that the total sample space certainly does change if the parents' choice of process changes? In the simple case where the process is "have 7 children", every possible outcome is equally likely, so the probability of any single outcome is just 1 / the total number of outcomes. In the case where the process is "have children until there is at least one of each gender", then the outcomes are not all equally likely; the particular outcome that was observed has the same probability as it would under the first process (so btilly is correct about that), but other outcomes have different probabilities. However, as long as each birth is independent, none of those other probabilities affect the inferences we are justified in drawing from the data; only the probability of the actually observed outcome does. (Strictly speaking, as btilly pointed out downthread, it is not the absolute probability that matters but the likelihood ratio; but the likelihood ratio in this case is just the ratio of P(data|p, prior) to P(data|prior), and P(data|prior) is also the same for both data sets since we are assuming the prior for p is independent of the process used to generate the data sets.)
> a Bayesian looks at this and says that no matter what prior you pick, the knowledge that they planned to have children until they had both a boy and a girl does not affect your posterior conclusion
> A Bayesian would say no such thing...
Actually they would if they understood the formula. Bayes' formula has no place to put for things that could have been observed had things turned out differently, but which didn't actually happen. Therefore mighta, woulda, coulda but didn't cannot affect your conclusions. Ever.
> However, a Bayesian would also say that the knowledge that they planned to have children until they had both a boy and a girl significantly changes the likelihood ratio (or p-value, if you prefer to use that) associated with the observed data. And one of the advantages of Bayesianism is that it forces you to make that explicit as well.
I am not sure how you think that the calculation should be carried out. But it certainly shouldn't be done the way that you describe.
If your prior was that a fraction p of the children would be boys, the odds of the observed outcome would be p^6 * (1-p). It is that regardless of which version of the experiment you run. The conditional probability of the fraction being around p, given the data, is your prior odds of the fraction being around p, times the odds of the observed outcome given p, divided by the a priori odds of the observed outcome, 6 boys and then a girl. The calculation is the same in both versions of the experiment and therefore the conclusion is as well.
> And Bayesianism does something else too: it forces you to recognize that the p-value is not actually the answer to the question you were asking! By the p-value criterion, at least with the typical threshold of 0.05, the null hypothesis (that your aunt and uncle are not biased towards having one gender) is rejected. But a Bayesian recognizes that the prior probability of the gender ratio, based on abundant previous evidence, is strongly peaked around 50-50, much more strongly peaked than data with a bias equivalent to a p-value of 1/63 can overcome. So the Bayesian is quite ready to accept that your aunt and uncle had no actual bias towards having boys, they just happened to be one of the statistical outliers that are to be expected given the huge number of humans who have children.
Actually a Bayesian with access to actual population data would be aware, as you aren't, that globally we average 1.07 boys to each girl at birth. Therefore most couples, likely including my aunt and uncle, were probably biased towards having boys.
There is a good deal of coincidence involved in my actually having the setup for a classic criticism of frequentism in a close relative. But if it happened, the odds were in favor of it involving 6 boys and a girl rather than the other way around.
> Bayes' formula has no place to put for things that could have been observed had things turned out differently, but which didn't actually happen.
Sure it does: you have to calculate the probability of your data given the hypothesis. Doing that requires considering all possible outcomes of the hypothesis and their relative likelihood, not just the one you actually observed.
> If your prior was that a fraction p of the children would be boys, the odds of the observed outcome would be p^6 * (1-p).
The prior would not actually be a single value for p; it would be a distribution for p over the range (0, 1). The distribution I described was a narrowly peaked Gaussian around p = 0.5, though, as you point out, that might not be the correct value for the peak (see below). However, for illustration purposes, it is much easier to talk about the (idealized, unrealistic) case where your prior is in fact a single point value for p.
However, in order to calculate the odds of the observed outcome, as I said above, you don't just need to know the prior for p. You need to know the process by which the outcomes are generated, according to the hypothesis. The odds you give assume that that process is "bear seven children, regardless of their gender". But that is not the correct process for the actual decision procedure you describe your aunt and uncle as using. That process won't necessarily result in seven children, and the odds of the actually observed outcome will change accordingly.
> a Bayesian with access to actual population data would be aware, as you aren't, that globally we average 1.07 boys to each girl at birth
Depends on whose data you look at and over what time period. But I agree that the best prior to use in a given case would be whatever distribution you get from the data you already have, and yes, that might not be peaked exactly at 50-50.
"globally we average 1.07 boys to each girl at birth. Therefore most couples, likely including my aunt and uncle, were probably biased towards having boys."
> Most couples are biased towards boys over girls.
Yes, but the Bayesian argument shows that you can't infer that from your one sample. You only know that there is a bias towards boys because you have the global data that allows you to adjust the Bayesian prior to be peaked around the actual observed ratio instead of around 0.5. The Bayesian prior is still a much better prediction for any other case not yet observed than any value different from the prior that you might calculate from the data from just your aunt and uncle.
Let's say you perform a maximum-likelihood estimate: you still have an assumption baked in, namely that maximizing the likelihood given the data is the right way to make your estimate.
In fact, it's very interesting to reconstruct a Bayesian prior for a maximum likelihood estimate. For example, when you estimate the probability of a binary event from 10 head flips and 8 tail flips, the ML estimate of the tail probability is 8/18 = 4/9. A Beta-Binomial Bayesian model with prior Beta(a0, b0) leads to a posterior distribution of Beta(a0 + 8, b0 + 10) with a mean of (8 + a0)/(10 + 8 + a0 + b0). Now you can see that the maximum likelihood estimate is identical in this case to assuming a Bayesian prior of Beta(0, 0) (an improper prior).
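Numerically (same counts; the priors below are chosen just for illustration):

    k_tails, k_heads = 8, 10

    def posterior_mean(a0, b0):
        # Beta(a0, b0) prior + binomial data -> Beta(a0 + k_tails, b0 + k_heads) posterior.
        return (a0 + k_tails) / (a0 + b0 + k_tails + k_heads)

    print(k_tails / (k_tails + k_heads))   # MLE: 8/18 = 0.444...
    print(posterior_mean(0, 0))            # same value: the "prior" hidden inside the MLE
    print(posterior_mean(1, 1))            # uniform prior: 9/20 = 0.45
    print(posterior_mean(50, 50))          # strong 50-50 prior: 58/118 = 0.491...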
I am not saying by this that frequentism is Bayesian inference in disguise; rather, that you cannot escape the assumptions.
Also, frequentism is not that straightforward: there are many kinds of frequentist estimators, and it can be complicated to choose among them.
The ML estimate is a posterior mode, assuming a flat prior. It's not immediately clear that it will always be possible to find a corresponding posterior mean. (From a Bayesian point of view, this is a difference in loss functions as opposed to priors over the parameters. With a posterior mean, you're making the optimal inference assuming a quadratic loss; a posterior mode is appropriate for a 0-or-1 loss.)
That's not the point. The point is there is no choice between having priors and not. There is only the choice between acknowledging priors versus doublethink, confusion, and deception.
With how fashionable it is to talk about implicit bias, I wonder how those concerns intersect with the people attacking Bayesian approaches here.
If the goal is to avoid bankruptcy, then the probability needs to be interpreted differently. If you bet the house every time, you're guaranteed to go bankrupt eventually. Suppose instead you bet half your money on an event of 50% probability. If you take 1:1 odds on this, then when you lose, your money is divided by 2, but when you win it is only multiplied by 1.5, so your money will tend to decrease over time. To break even in the long run you need to pick odds 1:a such that the per-round growth factors balance out: (1/2)(1 + a/2) = 1, i.e. 1 + a/2 = 2 => a = 2.
We recover our regular betting odds by betting a smaller portion of our money. If we bet a portion 1/d of our money on an event of probability 1/p, we need odds 1:a such that 1 + a/d = (d/(d-1))^(p-1). For large enough d we get a = p-1, as we would expect.
Assume again you're betting half your money each round, but take a probability of winning of 84%, as in the article. With d = 2 and 1/p = 0.84 the formula gives 1 + a/2 = 2^0.19 ≈ 1.14, i.e. a ≈ 0.28: you should only take that bet at odds of about 1:0.28, noticeably better than the roughly 1:0.19 "fair" odds implied by an 84% probability.
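A quick numeric check of that break-even condition (Python; the 84% probability and the bet fractions come from the discussion above, the rest is just the zero-log-growth criterion):

    # Break-even payout a (net win per unit staked) when repeatedly betting a
    # fraction f of your bankroll on an event of probability q, chosen so that
    # the expected log-growth of the bankroll is zero:
    #   q*log(1 + a*f) + (1 - q)*log(1 - f) = 0
    from math import exp, log

    def breakeven_payout(q, f):
        return (exp(-(1 - q) * log(1 - f) / q) - 1) / f

    print(breakeven_payout(0.50, 0.5))  # 2.0   -> the 1:2 odds above
    print(breakeven_payout(0.84, 0.5))  # ~0.28 -> the required odds when betting half
    print(0.16 / 0.84)                  # ~0.19 -> the "fair" odds for an 84% event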
This has nothing to do with interpreting probability, but with a utility function that's not linear in terms of wealth. With decreasing marginal returns to wealth, the same bet becomes less attractive at lower wealth levels.
In robotics, particularly in Bayesian filtering (KFs and so on), I find the idea of a "prior" solid and I don't see any frequentist alternatives. Your prior is easy to understand: whatever your posterior for your state was at the previous timestep, updated using the actions you wanted your robot to accomplish. Inference is then refining this prior using the observation that the robot makes.
There's nothing hand-wavy about that; if you do Bayesian statistics with bad priors, of course you're going to get bad inference. I guess the author is just warning us to be careful about our assumptions, which is always good.
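For anyone who hasn't seen it, this is roughly what that predict/update cycle looks like for a one-dimensional Gaussian state (a minimal sketch; the motion and measurement noise values are made up):

    # Minimal 1-D Bayesian filter step with Gaussian beliefs (a toy Kalman filter).
    # The previous posterior, pushed through the commanded motion, becomes the
    # prior that the next observation refines.

    def predict(mean, var, motion, motion_var):
        # Prior for this timestep: previous posterior shifted by the commanded
        # motion, with added motion uncertainty.
        return mean + motion, var + motion_var

    def update(mean, var, measurement, meas_var):
        # Refine the prior with the observation (precision-weighted average).
        k = var / (var + meas_var)  # Kalman gain
        return mean + k * (measurement - mean), (1 - k) * var

    belief = (0.0, 1.0)                                       # posterior from last timestep
    belief = predict(*belief, motion=1.0, motion_var=0.5)     # prior after moving
    belief = update(*belief, measurement=1.2, meas_var=0.3)   # posterior after sensing
    print(belief)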
What do you do when you wake up? Assume you're in the same place as when you went to sleep. You won't be surprised to find yourself on the other side of the bed - slightly more surprised to find yourself on the floor, and very surprised to find yourself in another country. A large belief update is always a bit of a shock.
You initialize it to something. Like if you have a robot for running around in a specific building or warehouse, you can initialize the location distribution to be uniform within the warehouse.
After a few high-quality observations, the effect of the exact choice of initialization should be greatly reduced anyway.
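A sketch of that (Python/numpy; the warehouse size, sensor noise, and true position are all made up): initialize a particle cloud uniformly over the floor, then let a few decent position fixes reweight it.

    # Uniform initialization over a 50m x 20m warehouse, then reweighting by a
    # few noisy position fixes. After a handful of good observations the
    # uniform start barely matters.
    import numpy as np

    rng = np.random.default_rng(0)
    particles = rng.uniform(low=[0, 0], high=[50, 20], size=(10_000, 2))
    weights = np.ones(len(particles)) / len(particles)

    true_pos = np.array([12.0, 7.0])
    for _ in range(3):
        z = true_pos + rng.normal(0, 0.3, size=2)          # one noisy position fix
        sq_err = np.sum((particles - z) ** 2, axis=1)
        weights *= np.exp(-0.5 * sq_err / 0.3**2)          # Gaussian measurement model
        weights /= weights.sum()

    print(np.average(particles, weights=weights, axis=0))  # close to (12, 7)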
> Example abridged: a draw from N(phi,1) for unknown phi is 1. Bayesian reasoning with a uniform prior gives an 84% posterior probability that phi > 0
I'm not sure I see the problem here? If it's counterintuitive, it's only because we treat N(0,1) as the default normal distribution, so our true prior is that if we pick a distribution out of a hat we're more likely to get N(0,1) than anything else.
Suppose I truly know nothing but what is given in the quote. On the basis of symmetry, I'd have to conclude that P(phi<0) is the same as P(phi>2). If the blogger had phrased this as "84% posterior probability that phi < 2", I don't think it would be so surprising.
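The numbers are easy to check (Python with scipy; under a flat prior the posterior here is just N(1, 1)):

    # With a flat prior and a single observation y = 1 from N(phi, 1), the
    # posterior for phi is N(1, 1). Both tail statements are the same ~84%.
    from scipy.stats import norm

    posterior = norm(loc=1, scale=1)
    print(1 - posterior.cdf(0))                    # P(phi > 0) ~= 0.841
    print(posterior.cdf(2))                        # P(phi < 2) ~= 0.841
    print(posterior.cdf(0), 1 - posterior.cdf(2))  # the symmetric tails, ~0.159 each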
In fact, the blogger describes this draw as:
> after seeing an observation that is statistically indistinguishable from noise.
which to me presupposes a great deal of information about what 'noise' is supposed to look like.
I don't know, this seems to be a really low-effort blog post. The given example is obviously contrived from the unreasonable improper (-\infty,\infty) prior and the low \sigma^2=1 likelihood. If it were really "pure noise" then you'd have \sigma^2=\infty, which rightly gives you a flat posterior.
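Concretely, with a flat prior and a single observation y = 1 from N(phi, sigma^2), the posterior probability that phi > 0 is Phi(1/sigma), so the headline number hinges entirely on the assumed sigma (a small sketch in Python; the sigma values are arbitrary):

    # P(phi > 0 | y = 1) under a flat prior, as a function of the assumed
    # observation noise sigma. sigma = 1 gives the contested ~84%; as sigma
    # grows towards "pure noise", the posterior probability falls back to 50%.
    from scipy.stats import norm

    for sigma in [0.5, 1, 2, 10, 100]:
        print(sigma, 1 - norm.cdf(0, loc=1, scale=sigma))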
For sure, the Bayesian approach gives you more flexibility with your assumptions, so it's easier to shoot yourself in the foot. But when used correctly it can be more powerful, and often easier to interpret.
Ironically the article that the example is from offers quite a nice rebuttal:
> None of these examples are meant to shoot down Bayes. Indeed, if posterior inferences don’t make sense, that’s another way of saying that we have external (prior) information that was not included in the model. (“Doesn’t make sense” implies some source of knowledge about which claims make sense and which don’t.) When things don’t make sense, it’s time to improve the model. Bayes is cool with that.
There is a certain intellectual laziness in this perspective, as might be expected from a short blog post: obviously Bayes' formula is theoretically sound, because it is trivial to deduce and prove.
So we know that if the conclusion is not acceptable then either the method, the prior or the evidence is not acceptable. Evidence and method can be ruled out; so the prior was not reasonable.
Basically, he's saying that he doesn't believe the prior is flat. A reasonable thing to say, too - as he says, practically speaking, if we suspect the data are probably just random noise, then the prior should say that we are probably looking at noise. So in practice the prior is heavily weighted towards 0. It isn't intellectually honest to use an uninformative prior unless you think the probability of the process being pure statistical noise is almost 0.
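One way to encode "this is probably just noise" is a mixture (spike-and-slab-style) prior. A sketch, where the 95% noise weight and the slab width of 3 are purely made-up choices:

    # Prior: with probability 0.95 the effect is exactly 0 ("just noise"),
    # otherwise phi ~ N(0, 3^2). Observation: y = 1 from N(phi, 1).
    # Unlike the flat-prior answer, the posterior probability of a real
    # effect stays small.
    from math import sqrt
    from scipy.stats import norm

    y = 1.0
    p_noise = 0.95
    m_noise = norm.pdf(y, loc=0, scale=1)                # marginal likelihood if phi = 0
    m_effect = norm.pdf(y, loc=0, scale=sqrt(1 + 3**2))  # marginal likelihood if phi ~ N(0, 9)

    post_effect = (1 - p_noise) * m_effect / ((1 - p_noise) * m_effect + p_noise * m_noise)
    print(post_effect)  # ~0.025: the data barely move us off "noise"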
>obviously Bayes' formula is theoretically sound because it is trivial to deduce and prove.
Quantum mechanics doesn't follow the usual probability rules, so you can't really say "obviously Bayes' formula is theoretically sound." It certainly seems like Bayes theorem should apply universally but apparently it doesn't. Or at least, the jury's still out.
But this isn't actually a criticism of Bayes at all. Yes, the result depends on your prior. But the result always depends on your preconceptions -- even in frequentist statistics, where it determines which statistical tests you use and which hypotheses you test and what p-value cutoff is reasonable. It's better to have this up front.
Or, you can publish Bayesian update factors, which are prior-independent.
The example should of course ring alarm bells, but at least with Bayes you can figure out why your inference is doing unreasonable things by examining each of your assumptions. In this case it’s the prior that needs fixing.
Are there alternative methods that are better than the Bayes method for this toy example?
Just for context, Andrew Gelman is one of the creators of Stan[1], one of the most popular probabilistic programming platforms for Bayesian inference. He has written a popular textbook on Bayesian methods, Bayesian Data Analysis[2].
Everyone hates picking priors in Bayesian analysis. If you pick an informative prior, you can always be criticized for it (in peer review, for a business decision, etc.) The usual dodge is to use a non-informative prior (like the Jeffreys prior[3].) I interpret Gelman's point as saying this can also lead to bad decisions. Thus, Bayesian analysts must thread the needle between Scylla and Charybdis when picking priors. That's certainly a real pain point when using Bayesian methods.
However, it's pretty much the same pain point as choosing regularization parameters (or choosing not to use regularization) when doing frequentist statistics. For example, sklearn was recently criticized for turning on L2 regularization by default, which could be viewed as a violation of the principle of least surprise, as well as causing practical problems when inputs are not standardized. But leaving regularization turned off is equivalent to choosing a non-informative or even improper prior (informally in many cases, and formally identical for linear regression with normally distributed errors[4]). So Scylla and Charybdis still loom on either side.
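For the "formally identical" case mentioned above, a quick sketch (Python; the toy data and the alpha value are arbitrary): ridge coefficients coincide with the MAP estimate of a linear model with a zero-mean Gaussian prior on the weights.

    # Ridge regression == MAP estimate of Bayesian linear regression with a
    # Gaussian prior on the weights (no intercept, Gaussian noise).
    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))
    y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.3, size=100)
    alpha = 2.0

    w_ridge = Ridge(alpha=alpha, fit_intercept=False).fit(X, y).coef_
    w_map = np.linalg.solve(X.T @ X + alpha * np.eye(3), X.T @ y)  # closed-form MAP
    print(np.allclose(w_ridge, w_map))  # True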
My problem with Bayesian models, completely unrelated to Gelman's criticism, is that the partition function is usually intractable and really only amenable to sampling methods (MCMC with NUTS[5], for example). This makes them computationally expensive to fit, which in turn limits them to (relatively) small data sets. But using a lot more data is the single best way to let a model get more accurate while avoiding over-fitting! That is why I live with the following contradiction: 1) I believe Bayesian models have better theoretical foundations, and 2) I almost always use non-Bayesian methods for practical problems.
Bayesian modeling can be very powerful when it works but it can also be catastrophic when it fails. It helps to think about this in an adversarial decision theoretic context where you play a prediction game against an opponent (usually called Nature).
We can think of the game as discovering the best model to explain a set of observations. The Bayesian believes that Nature picks the true model that generated the observations by sampling the prior. This is actually a huge assumption to make, which is why Bayesian methods work so well when the assumption is close to the truth.
Frequentists make the assumption that Nature chooses the underlying true model from a set of possible models. Beyond restricting the set of models Nature can choose from, frequentists make no further assumptions about the selection process. This is a strictly weaker assumption than the Bayesian makes, which means frequentist methods will do better when the specified prior grossly misrepresents Nature's decision making process.
There are even weaker assumptions that can be made about how Nature chooses the data. Regret-based model inference allows for a more adversarial game with Nature where the data may not come from the class of models considered at all. If Nature truly behaves this way, then Bayesian decision making can catastrophically fail.
This ignores the main strength of a Bayesian workflow: you can straightforwardly quantify the effect of your prior choice on your inference - pick a different prior, see how much the inference changes, and so on. A good Bayesian workflow does not assume a prior to be true; the prior should be based on available evidence and then stressed. To be a bit more concrete, let's say we wish to model the height of kangaroos. We come up with a model form, say regression, and a bunch of potential features. If we are Bayesian we might say, "I think nature prefers simple, stable solutions, so I'll put a N(0, d) prior on my weights." We then compute a posterior and get a range of credible values. We can then ask, "what if I'm wrong and actually it's a Student-t prior, or a flat prior, or X or Y or Z?", and use principled tools like the marginal likelihood to say which family of models works best, do prior-posterior comparisons to see how the observations changed our prior, and so on.
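A small sketch of that kind of prior stress test, using a conjugate normal regression so the posterior is available in closed form (Python/numpy; the "kangaroo" data and the prior scales d are made up):

    # Conjugate Bayesian linear regression: prior w ~ N(0, d^2 I), known noise.
    # Refit under several prior scales d and compare posterior means - the core
    # of a prior-sensitivity check.
    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.normal(size=(50, 2))
    y = X @ np.array([0.8, -0.3]) + rng.normal(scale=0.5, size=50)
    sigma2 = 0.5**2  # assumed known observation noise

    for d in [0.1, 1.0, 10.0]:
        S = np.linalg.inv(X.T @ X / sigma2 + np.eye(2) / d**2)  # posterior covariance
        m = S @ X.T @ y / sigma2                                 # posterior mean
        print(d, np.round(m, 3))

If the credible values barely move across reasonable choices of d, the inference is robust to that prior; if they swing wildly, the prior is doing the work and you should say so.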
If we do this under a frequentist framework, we compute the regression coefficients and can get some confidence bounds with an appeal to asymptotics (and nobody I've ever seen actually makes any attempt to validate those assumptions). And even when we are done, we get a confidence interval with such a truly unintuitive definition that almost everyone who is not a stats PhD fundamentally misinterprets it.
To say frequentists make fewer assumptions is not true - they are just less explicit about them, and I consider it a strength, not a weakness, to highlight the choices made by the statistician.
As a thought experiment, could you choose priors by setting the derivative of the solution with respect to the priors equal to zero? This would be the case of minimal sensitivity.
My problem with the 'Bayes=rationality' type of argument is that it ignores context and isn't really a case for reporting Bayesian vs frequentist estimates. If I am a researcher publishing results then I have an audience who interpret my results. If my audience is Bayesian and accept my model then all I need to do is report sufficient statistics and they can make their own Bayesian inferences given their priors, or better yet, I can just post my whole dataset. The very reason we need to report things like credible sets or confidence intervals rather than just sufficient statistics is because audiences in the real world want summary stats that they can easily interpret and are transparent. The best approach to inference is one that is the most useful to audiences, and that depends on context and practicalities rather than on some underlying philosophy of subjective vs objective probabilities.
Many analyses of the world aren't Bayesian /or/ frequentist; they use much simpler pattern-matching, with feedback loops that update the approach used as well as the conclusion. Problems start with assuming you have to choose one of those approaches to estimate the future...
>Put a flat prior on theta and you end up with an 84% posterior probability that theta is greater than 0. Step back a bit, and it’s saying that you’ll offer 5-to-1 odds that theta>0 after seeing an observation that is statistically indistinguishable from noise. That can’t make sense. Go around offering 5:1 bets based on pure noise and you’ll go bankrupt real fast.
If you think it's likely to be pure noise, why the hell would you put a flat prior on it?
Note also that nonflat priors are implicit in significance testing - e.g. p95 significance is similar to putting a 95% prior on the null hypothesis, and p99 significance is similar to putting a 99% prior on the null hypothesis.
The author has a major fundamental misconception as to how probability works. If I say "the probability that proposition X is true is 0.5", that means that based on the information available to me right now it's equally likely to be true as false. That's not even remotely similar to saying I would offer an even money bet.
Ignoring the fact that "the author" is one of the most respected statisticians in the world today... there is no debate on how to translate probabilities into odds:
odds(x) = p(x)/(1-p(x))
That's the definition of "odds". So in this case it is quite clear that the odds for X are 1 (i.e., 1:1), implying an even-money bet.
The human brain is the best Bayesian model builder that evolution has yet devised. A good place to start assessing its weaknesses is to observe your own brain messing up. This shouldn't be hard to do.
I bring out a coin; I tell you nothing, and ask you to guess what the probability of heads is. What do you guess?
Unless you have reason to believe I am trying to deceive you, it will probably be about 50%, because you have a lot of knowledge from other contexts that tells you this is true.
The arrow probably points the other way round from what you state - the brain probably isn't Bayesian; rather, being Bayesian is modelled on how humans process and contextualise decisions.
I'm not even sure how a frequentist would construct a model to estimate an outcome with no observations.
> I'm not even sure how a frequentist would construct a model to estimate an outcome with no observations.
The same way a Bayesian would, since it's a question about probabilities of hypothetical experiments, not about statistics. Or you go through decision theory instead of mucking about with half-baked ideologies.
> being Bayesian is modelled on how humans process and contextualise decisions.
"Bayesian" is an overloaded term. There's Bayes' theorem/rule, which basically everyone agrees with, since it's a theorem that's very simple to prove with a few high school math operations.
Then there is the philosophical Bayesian interpretation of probability, that claims that probabilities are fundamentally about our own mental state of belief, as opposed to frequencies at the limit of infinite repetition of some experiment.
Then there are the Bayesian methods of statistics / machine learning etc., which are about handling parameters as random variables and the observed data as fixed, as opposed to assuming that there's one fixed parameter (without a distribution to talk about) while the data should be modeled as random (from an oversimplified bird's-eye view). And they were also oversold as a miracle cure for all our problems: for some time, before the deep learning era, you just had to have "Bayesian" in your ML paper title to make it sexy and interesting.
Then there is the online Bayesian rationalist community, where Bayes is used to explain the meaning of life, the universe, and everything: it's the grand explanation of everything, a self-help tool, the key to seeing the light, a semi-religious experience, the way to enlightenment (they even call it the Way, capitalized - I guess a Buddhist reference?). As if being Bayesian were a secret club that sets you apart from average people, a symbol of belonging to the in-group, etc. [1]
It's important to keep these apart.
[1] For example: https://youtu.be/NEqHML98RgU?t=73 (it's explicitly not about the math but about self-help and intuition to benefit our lives etc...)