
It's not even necessary for the topics to be politically controversial (by which I mean Democrats and Republicans have opinions on it) for this effect to occur. You can get the same filtering effect purely via internal pressures.

As examples, consider the consensus around the Copenhagen interpretation in foundations of quantum mechanics, or the (currently being overturned, thanks to machine learning) Frequentist consensus in statistics.

About the statistics thing: I thought that Frequentism and Bayesianism were two separate frameworks to use in two separate contexts? Basically, I thought that you used Frequentism when you could do a definite experiment that yields an exhaustive probability distribution (like rolling a die to see how often it yields each number), while you use Bayesianism to evaluate accumulations of evidence about a distribution you can't experiment on directly. Is that wrong?

That's not correct. Frequentists view the world as an experiment measuring a set of unknown parameters describing a probability distribution. These parameters are fixed, and cannot be described probabilistically.

In contrast, Bayesians use probability to represent uncertainty.

I.e., to a Frequentist, it doesn't even make sense to ask the question "what is the probability that the coin is rigged", whereas the Bayesian would come up with a probability distribution for the probability of the coin coming up heads.
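To make that concrete, here's a minimal sketch of the Bayesian side. Assuming a uniform Beta(1, 1) prior over the coin's heads-probability (my choice of prior, not something anyone in the thread specified), conjugacy gives a Beta(1 + h, 1 + t) posterior after observing h heads and t tails:

```python
# Minimal sketch: a Bayesian's distribution over a coin's heads-probability.
# Under a Beta(a, b) prior, observing h heads and t tails yields a
# Beta(a + h, b + t) posterior by conjugacy.
def posterior_mean(h, t, a=1, b=1):
    """Posterior mean of P(heads) under a Beta(a, b) prior."""
    return (a + h) / (a + h + b + t)

print(posterior_mean(8, 2))  # 0.75 -- the whole posterior is Beta(9, 3)
```

The point is that the Bayesian ends up with an entire distribution over the coin's bias; to a strict frequentist, the bias is a fixed unknown number and no such distribution exists.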

Largely, yes, though it's a matter of terminology[0]. I doubt that any but the most dogmatic frequentist would deny there's something to the notion behind the question you're asking, "what do you think is the chance the coin is rigged?" Rather, they'd say it shouldn't be conflated (by using the term "probability") with a situation where something can be repeated and measured, or at least one that sufficiently approximates that situation.

[0] I find that battles over terminology can be the harshest and most recriminating minefields among the intelligent.

Ok, so frequentists believe P(x) for some event x is the limit of the fraction of trials in which x happens, as we increase the number of trials to infinity.

Whereas Bayesians believe P(x) is odds at which they would gamble, so to speak, that a particular proposition x is true.

Is that it?
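The frequentist half of that is easy to illustrate: simulate a fair coin and watch the relative frequency of heads settle toward 0.5 as the number of trials grows (a toy simulation, not anything formal):

```python
import random

random.seed(0)

# Frequentist picture: P(heads) is the limit of the relative frequency
# of heads as the number of trials goes to infinity.
for n in [100, 10_000, 1_000_000]:
    heads = sum(random.random() < 0.5 for _ in range(n))
    print(n, heads / n)
```

The estimates wobble at small n and converge toward 0.5, which is exactly the limiting-frequency reading of P(x).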

It's deeper than that. The difference in views influences everything you compute. For example, Frequentists compute p-values, i.e. P(observing data at least as extreme as what was actually observed | null hypothesis is true) [1].

In contrast, Bayesians compute P(null hypothesis is false | observed data, prior knowledge).

Similarly, Frequentists compute confidence intervals (say at 95%), which represent the set of null hypotheses you can't reject with a 5% p-value cutoff. In contrast, Bayesians compute credible intervals, which represent a region having a 95% probability of containing the true value, given the data and the prior.
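Here's a small side-by-side sketch of the two computations for the same data (60 heads in 100 flips, a number I picked for illustration). It uses the identity relating the Beta CDF with integer parameters to a binomial tail, so it needs nothing beyond the standard library; the uniform prior is again my assumption:

```python
from math import comb

def binom_tail(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

def beta_cdf(x, a, b):
    """CDF of Beta(a, b) at x, for integer a, b, via the binomial identity:
    P(Beta(a, b) <= x) = P(Binomial(a + b - 1, x) >= a)."""
    return binom_tail(a + b - 1, a, x)

def beta_quantile(q, a, b):
    """Invert the Beta CDF by bisection."""
    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if beta_cdf(mid, a, b) < q:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Data (hypothetical): 60 heads in 100 flips.
n, k = 100, 60

# Frequentist: one-sided p-value for H0: the coin is fair (p = 0.5).
p_value = binom_tail(n, k, 0.5)

# Bayesian: 95% credible interval from the Beta(1 + k, 1 + n - k)
# posterior under a uniform prior.
lo = beta_quantile(0.025, 1 + k, 1 + n - k)
hi = beta_quantile(0.975, 1 + k, 1 + n - k)

print(f"p-value: {p_value:.4f}")
print(f"95% credible interval: ({lo:.3f}, {hi:.3f})")
```

Note how different the two outputs are in kind: the p-value is a statement about the data under a fixed hypothesis, while the credible interval is a direct probability statement about the parameter.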

Personally, I'm solidly in the Bayesian camp simply because I can actually understand it. To take an example, consider Bem's "Feeling the Future" paper [2] which suggests that psychic powers exist. From a Bayesian perspective, I understand exactly how to interpret this - my prior suggests psychic powers are unlikely, and my posterior after reading Bem's paper is only a little different from my prior. I don't know how to interpret his paper from a Frequentist perspective.
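That "my posterior is only a little different from my prior" reasoning is just Bayes' rule in odds form. With entirely made-up illustrative numbers (a very low prior and a Bayes factor of the rough size a single significant study might supply), the update looks like this:

```python
# Toy Bayesian update for "psychic powers exist" after one study.
# Both numbers below are hypothetical, chosen only to illustrate the shape
# of the calculation.
prior = 1e-8          # prior probability of the hypothesis
bayes_factor = 20.0   # evidential strength of the study

prior_odds = prior / (1 - prior)
posterior_odds = prior_odds * bayes_factor
posterior = posterior_odds / (1 + posterior_odds)

print(posterior)  # still tiny: on the order of 2e-7
```

A single paper, however clean its statistics, multiplies the odds by a modest factor; against a sufficiently small prior, the posterior barely moves.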

[1] http://www.bayesianwitch.com/blog/2013/godexplainspvalues.ht...

[2] http://www.dbem.ws/FeelingFuture.pdf For background, his statistical methods were fairly good, and more or less the standard of psychology research. If you reject his paper on methodological grounds, you need to reject almost everything.

That's why I said "cancer research" at the end. But the additional problem is that if a topic is politically controversial, most research on it is funded at the pleasure of politicians, so you wind up with another layer of circular pressure.
