
> I did fake Bayesian math with some plausible numbers

This is actually a great example of the deepest and most intractable issue with the Bayesian viewpoint: different observers with different priors can come to radically different conclusions, even if both did the math correctly. One can easily construct priors under which the jump in P(Lab-Leak) is much greater than the 20% to 27.5% jump Scott mentioned. And there's no principled way to argue that one prior is better than another, especially in cases like this, where there's no good reference class.
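
To make that concrete, here's a minimal sketch in Python. The only figure taken from the essay is the 20% to 27.5% move; the likelihood ratio is backed out of it, and the other priors are invented purely to show how much the starting point matters.

    def update(prior, likelihood_ratio):
        # Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio.
        prior_odds = prior / (1 - prior)
        posterior_odds = prior_odds * likelihood_ratio
        return posterior_odds / (1 + posterior_odds)

    # Back out the likelihood ratio implied by a 20% -> 27.5% update.
    lr = (0.275 / 0.725) / (0.20 / 0.80)  # ~1.52

    for prior in (0.05, 0.20, 0.50, 0.80):
        print(f"prior {prior:.0%} -> posterior {update(prior, lr):.1%}")
    # prior 5% -> posterior 7.4%
    # prior 20% -> posterior 27.5%
    # prior 50% -> posterior 60.3%
    # prior 80% -> posterior 85.9%

Same evidence, same arithmetic, and the size of the jump is set almost entirely by the prior you walked in with.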

(update: I think this is a great essay and I strongly agree with the overall point)




> different observers with different priors can come to radically different conclusions

One may view this as a feature, not a bug! :-) To quote Jaynes, "there is no unique notion of ignorance". And to further quote Carl Sagan, "extraordinary claims require extraordinary evidence".

Combining these two statements: if Alice expects an experiment to succeed and Bob expects it to fail, each may very well come to a different conclusion after observing the same result, depending on the strength of their prior beliefs.
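
As a hedged illustration of the "strength of prior belief" part (my numbers, not the parent's): model Alice's and Bob's beliefs about the experiment's per-trial success probability as Beta priors, then update both on the same single observed failure.

    def beta_mean(a, b):
        return a / (a + b)

    alice_prior = (9, 1)  # strongly expects the experiment to succeed
    bob_prior = (1, 9)    # strongly expects it to fail

    # Conjugate Beta-Bernoulli update: one observed failure adds 1 to the second parameter.
    alice_post = (alice_prior[0], alice_prior[1] + 1)
    bob_post = (bob_prior[0], bob_prior[1] + 1)

    print(f"Alice: {beta_mean(*alice_prior):.2f} -> {beta_mean(*alice_post):.2f}")  # 0.90 -> 0.82
    print(f"Bob:   {beta_mean(*bob_prior):.2f} -> {beta_mean(*bob_post):.2f}")      # 0.10 -> 0.09

Neither made an arithmetic mistake; one data point is just too weak to overcome either prior.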

This is completely normal in how everyday science is carried out. Even informally... how many researchers have had grants denied because the funding agency held a prior belief that the question being proposed was a foregone conclusion? The opinions (the priors) of the grant panel can differ markedly from those of the applicant, and each would require correspondingly different levels of evidence from any subsequent experiment!


So what makes 'Bayesian math' meaningful to anyone who is not Alice or Bob?

It just seems like random speculation to any outside observer. Whether it is claimed to be produced by a coincidental pattern of neurons firing, by 'math', etc., seems irrelevant.


A lot of the time you could say that it's structured handwaving, but the structure gives you a number of benefits over regular handwaving ;)

Pretty quickly we can tell whether we disagree about the whole framing of something (the overall model) or whether our priors are just very different.

The explicitness of priors is a big feature: when they're wildly different, we can discuss them and present evidence for them.

We can also see how much our priors matter in terms of outcome, i.e., which "load-bearing beliefs" lead us to make a certain prediction.
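
One hand-wavy sketch of what hunting for load-bearing beliefs could look like (a toy two-evidence model, every number invented): perturb each input and see which one actually swings the conclusion.

    def posterior(prior, lr_a, lr_b):
        # Combine a prior with two pieces of evidence expressed as likelihood ratios.
        odds = prior / (1 - prior) * lr_a * lr_b
        return odds / (1 + odds)

    base = dict(prior=0.2, lr_a=3.0, lr_b=0.5)
    print(f"baseline: {posterior(**base):.2f}")  # 0.27

    # Halve and double each input in turn; whichever swings the output most
    # is the load-bearing belief worth arguing about.
    for name in base:
        low = posterior(**{**base, name: base[name] * 0.5})
        high = posterior(**{**base, name: base[name] * 2.0})
        print(f"{name}: {low:.2f} .. {high:.2f}")
    # prior: 0.14 .. 0.50
    # lr_a:  0.16 .. 0.43
    # lr_b:  0.16 .. 0.43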


How does this lead to something that will convince a random third party, outside of Alice and Bob, in a way that can be uniquely identified as 'Bayesian math'?


How random is the third party? What epistemic frameworks do they find acceptable? (sorry if I am not getting what you are asking).

A theological argument (no matter how good) is unlikely to convince an atheist of anything.

In a perfect world, if we agree that Bayesian epistemology makes sense and I accept your priors and your model, then I should be willing to accept your conclusions. But because of bounded rationality and other factors like https://slatestarcodex.com/2019/06/03/repost-epistemic-learn... I might still refuse to do so if your conclusions are ludicrous or repugnant to me :)


> How random is the third party? What epistemic frameworks do they find acceptable?

Any of the 8 billion people passing by who are not Alice or Bob... most of whom don't even care about the concept of 'epistemic frameworks' in the first place.

Writing an explanation that is only convincing if the reader already believes in 'Bayesian math' in the first place seems redundant, hence my pointing it out.


Right, but I don't feel this is a problem with Bayesian reasoning.

If you want to _convince_ people, you are in the domain of psychology, focus groups, marketing, social engineering, etc.

An argument being convincing is not the same as an argument being valid.


Can you write down this logically valid argument, including its axioms?


Is this a good faith question? Or is it intended as a "gotcha"? I'm fine to spend a few minutes if you find this sort of thing fun.

I think I could, given the universe of people P, a set of belief frameworks E, and a mapping B (for belief) from P -> set(E). I would then try to say that my prior about B is that there is no framework in E shared by all of P.
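
Typed out as a toy Python sketch, just to pin down the shape of the claim (every person and framework here is hypothetical):

    people = ["alice", "bob", "carol"]

    # B maps each person to the set of belief frameworks they accept.
    B = {
        "alice": {"bayesian", "frequentist"},
        "bob": {"theological"},
        "carol": {"bayesian", "vibes"},
    }

    shared = set.intersection(*(B[p] for p in people))
    print(shared)  # set() -- no framework accepted by everyone, so under this
                   # toy model no single argument is valid for all of them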

Put another way, I think it's very likely there's nothing you can say that will convince everyone.

This would be an overly simplified model of reality, because merely stating a valid argument in a framework E held by some P is not sufficient to "convince" them. Also, "epistemological frameworks" don't really exist; they're just a shorthand we can use to describe clusters of beliefs that often go together. Lots of other things about the argument matter, like how complex it is (complexity reduces trust) or who is presenting it. Also, a given argument might be "valid" in multiple belief systems at once. Maybe we could add in the idea that a particular statement of an argument (call it A) which means different things in E1 and E2 is really two different arguments, A1 and A2. How far you want to go with modelling is a matter of taste.

Do we disagree? Where? Maybe we can find a mutually acceptable model if you are interested.

If your overall point is that often people write complete garbage in the shape of a formal argument and then act like they've "proven" something - I 100% agree with you. We can't "mathematically prove" non-trivial things about reality but we can construct models and use tools to reason about them. I do believe that the activity of stating your model and then using tools can be helpful.


It is a good faith question if the argument actually exists, which you seemed confident in.

I'm not saying you have to write it down, just that the previous comments won't be persuasive if it doesn't exist.


It's only a problem if you want to use Bayes to take some credences plus evidence and prove them irrational (independently of the prior).

Bayes is great for verifying self-consistency: given some priors and some evidence, it produces exactly one set of credences. If you've somehow got a different set, you've gone wrong somewhere (and can be Dutch-booked).
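
As a minimal sketch of that "exactly one set of credences" point (numbers made up): given a prior and the likelihoods, Bayes' rule pins the posterior down, and holding any other value is the kind of incoherence a Dutch book exploits.

    p_h = 0.3              # prior credence in hypothesis H
    p_e_given_h = 0.8      # P(evidence | H)
    p_e_given_not_h = 0.2  # P(evidence | not H)

    # Law of total probability, then Bayes' rule.
    p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
    p_h_given_e = p_e_given_h * p_h / p_e
    print(f"{p_h_given_e:.3f}")  # 0.632 -- the only posterior consistent with these inputs

    # Holding, say, 0.5 instead while keeping the inputs above lets a bookie combine
    # conditional and unconditional bets into a guaranteed loss for you (a Dutch book).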

What it won't give you is a full theory of rationality--but IMO this is not a problem with Bayes in particular. No theory will. There /must/ always be some free variable that prevents landing at exactly one set of credences. All theories that disagree come with very strange (and not very believable) implications.


Eh, I wouldn't call it an issue with Bayesian math itself. Just... it's an issue with reality.

At least with Bayesian math you can make your priors explicit, so we readers at least have the possibility of disagreeing with them.

If you really had no idea, you could say it's a 50/50 chance and plug in the numbers.


How does the frequentist approach deal with the same problem (a possible lab leak of a virus)?



