Against learning from dramatic events (astralcodexten.com)
53 points by feross 8 months ago | 32 comments



> I did fake Bayesian math with some plausible numbers

This is actually a great example of the deepest and most intractable issue with the Bayesian viewpoint: different observers with different priors can come to radically different conclusions, even if both did the math correctly. One can easily construct priors under which the jump in P(Lab-Leak) is much greater than the jump from 20% to 27.5% that Scott mentioned. And there's no principled way to argue that one prior is better than another, especially in cases like this, where there's no good reference class.
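To make the mechanics concrete, here's a minimal sketch (all numbers other than Scott's 20% and 27.5% are made up) of how the same likelihood ratio moves observers with different priors to different posteriors:

    # Bayesian update on the odds scale: posterior odds = prior odds * likelihood ratio.
    def update(prior, likelihood_ratio):
        prior_odds = prior / (1 - prior)
        post_odds = prior_odds * likelihood_ratio
        return post_odds / (1 + post_odds)

    # Likelihood ratio implied by Scott's move from 20% to 27.5%
    lr = (0.275 / 0.725) / (0.20 / 0.80)   # ~1.52

    print(update(0.20, lr))   # ~0.275, Scott's numbers
    print(update(0.60, lr))   # ~0.69, a hypothetical observer with a 60% prior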

(update: I think this is a great essay and I strongly agree with the overall point)


> different observers with different priors can come to radically different conclusions

One may view this as a feature, not a bug! :-) To quote Jaynes, "there is no unique notion of ignorance". And to further quote Carl Sagan, "extraordinary claims require extraordinary evidence".

Combining these two statements, if Alice expects an experiment to succeed and Bob expects it to fail, each very well may come to a different conclusion after observing the result, depending on the strength of their prior beliefs.

This is completely normal in how everyday science is carried out. Even informally... how many researchers have had grants denied because the funding agency held a prior belief that the outcome of the proposed research was a foregone conclusion? The opinions (the prior) of the grant panel differ markedly from those of the applicant, and each would require correspondingly different levels of evidence from any subsequent experiment!


So what makes 'Bayesian math' meaningful to anyone who is not Alice or Bob?

It just seems like random speculation to any outside observer. Whether it's claimed to come from a coincidental pattern of neurons firing, from 'math', etc., seems irrelevant.


A lot of the time you could say that it's structured handwaving, but the structure gives you a number of benefits over regular handwaving ;)

Pretty quickly we can tell whether we disagree about the whole framing of something (the overall model) or whether our priors are just very different.

The explicitness of priors is a big feature and when they're wildly different we can discuss / present evidence for those.

We can also see how much our priors matter in terms of outcome, i.e., what are the "load-bearing beliefs" that lead us to make a certain prediction.


How does this lead to something that will convince a random third party, outside of Alice and Bob, in a way that can be uniquely identified as 'Bayesian math'?


How random is the third party? What epistemic frameworks do they find acceptable? (sorry if I am not getting what you are asking).

A theological argument (no matter how good) is unlikely to convince an atheist of anything.

In a perfect world, if we agree that Bayesian epistemology makes sense and I accept your priors and your model, then I should be willing to accept your conclusions. But because of bounded rationality and other factors like https://slatestarcodex.com/2019/06/03/repost-epistemic-learn... I might still refuse to do so if your conclusions are ludicrous or repugnant to me :)


> How random is the third party? What epistemic frameworks do they find acceptable?

Any of the 8 billion people passing by who are not Alice or Bob... who, largely, don't even care about the concept of 'epistemic frameworks' in the first place.

Writing an explanation that is only convincing if the reader already believes in 'Bayesian math' in the first place seems redundant, hence my pointing it out.


Right, but I don't feel this is a problem with bayesian reasoning.

If you want to _convince_ people you are in the domain of psychology, focus groups, marketing, social engineering etc.

An argument being convincing is not the same as an argument being valid.


Can you write down this logically valid argument, including its axioms?


Is this a good faith question? Or is it intended as a "gotcha"? I'm fine to spend a few minutes if you find this sort of thing fun.

I think I could, given the universe of people P, a set of belief frameworks E and a mapping B(belief) from P -> set(E). I would then try to say that my prior about B is that there's no E shared by all P.

Put another way, I think it's very likely there's nothing you can say that will convince everyone.

This would be an overly simplified model of reality because merely stating a valid argument in framework E held by some P is not sufficient to "convince" them. Also "epistemological frameworks" don't really exist, they're just a shorthand we can use to describe clusters of beliefs that often go together. Lots of other things about the argument matter, like how complex it is (reduces trust) or who is presenting it. Also, a given argument might be "valid" in multiple belief systems at once. Maybe we could add in the idea that a particular statement of an argument (call it A) which means different things in E1 and E2 is really two different arguments A1 and A2. How far you want to go with modelling is a matter of taste.
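As a toy illustration of that model (all of the people and frameworks below are hypothetical), the "no E shared by all P" prior is just an empty intersection:

    # Toy version of the model above: people P, belief frameworks E, and a
    # mapping B from each person to the set of frameworks they accept.
    P = {"alice", "bob", "carol"}
    E = {"bayesian", "frequentist", "theological"}
    B = {
        "alice": {"bayesian", "frequentist"},
        "bob":   {"frequentist"},
        "carol": {"theological"},
    }

    # Is any single framework shared by every person?
    shared = set.intersection(*(B[p] for p in P))
    print(shared)   # set() -- on this toy data, no one argument convinces everyone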

Do we disagree? Where? Maybe we can find a mutually acceptable model if you are interested.

If your overall point is that often people write complete garbage in the shape of a formal argument and then act like they've "proven" something - I 100% agree with you. We can't "mathematically prove" non-trivial things about reality but we can construct models and use tools to reason about them. I do believe that the activity of stating your model and then using tools can be helpful.


It is a good faith question if the argument actually exists, which you seemed confident in.

I'm not saying you have to write it down, just that the previous comments won't be persuasive if it doesn't exist.


It's only a problem if you want to use Bayes to take some credences + evidence and prove them irrational (indep. of prior).

Bayes is great for verifying self-consistency: given some priors and some evidence, it produces exactly one set of credences. If you've somehow got a different set, you've gone wrong somewhere (and can be Dutch-booked).
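A made-up illustration of the Dutch-book point: an agent whose credences for "A" and "not A" sum to more than 1 will happily buy a pair of bets that loses money no matter what happens.

    credence_A     = 0.60   # agent's price for a $1 ticket that pays if A
    credence_not_A = 0.50   # agent's price for a $1 ticket that pays if not-A

    cost   = credence_A + credence_not_A   # agent pays 1.10 for both tickets
    payout = 1.00                          # exactly one ticket pays, whatever happens

    print(f"guaranteed loss: {cost - payout:.2f}")   # 0.10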

What it won't give you is a full theory of rationality--but IMO this is not a problem with Bayes in particular. No theory will. There /must/ always be some free variable that prevents landing at exactly one set of credences. All theories that disagree come with very strange (and not very believable) implications.


Eh, I wouldn't call it an issue with Bayesian math itself. Just... it's an issue with reality.

At least with Bayesian math you can make your priors explicit, so we readers at least have the possibility of disagreeing with them.

If you really have no idea, you can say it's a 50/50 chance and plug in the numbers.


How does the frequentist approach deal with the same problem (a possible lab leak of a virus)?


I don't think the first part of this article is quite right.

1. I doubt many people would think of 9/11 as a "1 in 50 years" event if it hadn't actually happened. If you had a year-by-country dataset covering every developed country post-WW2, you'd have thousands of observations, but none would have as many fatalities from terrorism as the US in 2001.

2. If a genuinely super rare event occurs (like one in thousands), it's often more reasonable to think that there's been some fundamental shift in the world that you failed to recognize, rather than that you just got super lucky or unlucky to have lived through it.


I think your second point helps make sense of my problem with the "not 1, but 2" threshold the author articulates.

> if it happens twice in a row, yeah, that’s weird, I would update some stuff

Why? Why is one event not an "update", but 2 is? Shouldn't each data point change your assumption in proportion to the assumed chance and the period of observation?
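One way to make that concrete (all numbers here are made up): a conjugate Gamma-Poisson update of an event rate, where the first observed event already moves the estimate and the second moves it further; there's no special threshold at two.

    # Prior: roughly "one event per 50 years", then observe 0, 1, or 2 events
    # over a 20-year window and see how the estimated rate shifts.
    prior_shape, prior_years = 1.0, 50.0
    observed_years = 20.0

    for events in (0, 1, 2):
        post_mean = (prior_shape + events) / (prior_years + observed_years)
        print(f"{events} events in {observed_years:.0f} yrs -> "
              f"estimated rate {post_mean:.3f} per year")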

That seems more true to the author's belief framework, but it wouldn't make for as spicy a title.


I think "in a row" is the important part of that sentence.

A single crash of a 737 Max 8 is not an outlier, but two crashes in a row of the same model of an airplane definitely is.


737 Max 9s are currently grounded due to a single incident; should the FAA have waited for a second door to blow off?

Reading that myself, it sounds like a gotcha question, but under this entirely arbitrary 2-but-not-1 threshold, the answer seems like it should obviously be yes.

The rational framework the author is advocating for is all about probabilities and percentages, so it seems like a weird exception to carve out that there’s some hard line between 1 and 2 event occurrences. I doubt he would hold fast to it if pressed, which is fine.


> Reading that myself, it sounds like a gotcha question, but under this entirely arbitrary 2-but-not-1 threshold, the answer seems like it should obviously be yes.

I think that's because the question implicitly assumes that the threshold applies for all things and all purposes. It doesn't. First the threshold is about adjusting your baselines, and second even if the threshold were for when to pull the "stop everything" cord, it all depends on your specific goals. The FAA might have completely different goals and targets than someone else in some other industry. Or to put it another way, the FAA has grounded all 737 Max 9's after a single incident with no fatalities. As of Jan 16, 11 people have been killed in homicides in Chicago. By the same "one threshold for everything", the entire city should be on complete lockdown until such time as it can be made safe by the proper authorities.

On the other hand, if you assume that one could have a very low "stop the world" response threshold for "sudden mechanical failures leading to explosive decompression" of planes, and simultaneously have a higher "stop the world" threshold for "people dying in Chicago", then it seems completely reasonable that one could have a third different threshold for the number of mass shooters that come out of any given arbitrary social clustering that trigger the "I should re-evaluate whether these people are entirely sane" routines in your brain.


Thank you for phrasing it like that. I think maybe my hang-up is that the author doesn't allow that different people and groups can have different baselines and update responses. Implicitly it seems that if everyone is perfectly rational, all responses to an event would be the same, but that's not really true due to our subjective human experience. In the shooting example the quote came from, the Left and Right he caricatures have different priors, knowledge, experiences, and motivations than the author, someone who likes to think of themselves as rational and aloof from politics. Of course they're going to have a different response, but that doesn't necessarily mean they're acting irrationally.


It’s quite different. The incident with the door heavily implies a problem of airplane design. It makes sense to ground it.

What we knew at the time of the first crash of the Max 8 seemed to imply pilot error. It wasn't statistically significant. Only when another Max 8 crashed soon after (I think soon enough to say "in a row") was the Max 8 grounded. If the second crash had occurred years later, it wouldn't have been significant.


I enjoyed reading this and think it makes some important points. There's an important way in which I mostly agree with it.

However, in some important ways it's missing the point, and illustrates the importance of utility and not just probability.

Maybe discovering the truth about whether COVID was or was not a lab leak shouldn't shift your posterior probability of it occurring again very much. But given the costs of the outcome, the per-unit-probability weight of that shift is arguably pretty big. How much you should learn from something probably depends on the benefits and costs of everything, and that's a very non-probabilistic judgment.
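As a back-of-the-envelope illustration (both numbers below are made up), even a modest probability shift times a very large cost is a large expected cost:

    delta_p = 0.075          # hypothetical shift in P(another lab-leak pandemic)
    pandemic_cost = 10e12    # hypothetical cost of another pandemic, in dollars

    print(f"shift in expected cost: ${delta_p * pandemic_cost:,.0f}")   # $750,000,000,000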

I generally agree with the idea that there's a level at which whether or not it was a lab leak probably doesn't matter at all: it already tragically happened, it should maybe be a wake-up call to the plausibility of it even if it wasn't what happened this time, there are other problems with the lab leak discussion, and so forth. But at another level it very much does matter: there's arguably a moral responsibility attached to it, and a slight change in probabilities might have enormous utility costs if the outcomes are bad enough (like basically threatening the world order).


[flagged]


After they published that piece (which was misleading and deeply unethical, not just 'critical', by the way), he wrote this:

> I have no particular call for action. Please don’t cause any trouble for the journalist involved, both because that would be wrong, and because I suspect he did not personally want to write this and was pressured into it as part of the Times’ retaliatory measures against me.

(https://www.astralcodexten.com/p/statement-on-new-york-times...)

Before that, he did ask for public commentary on their plan to reveal his real name; maybe that's what you were thinking of.

(Before your post was edited, it was originally a comment that misrepresented the history of the NY Times piece by claiming he 'sent a flood of internet crazies' at them after they wrote it.)


His name was already public. I'm not interested in hashing all this out again though. That's why I edited my comment. I would have deleted it but it was too late. I implore people to go read the original NY times article. It's a good critique of the SV hivemind and even discusses Sam Altman.

> I suspect he did not personally want to write this and was pressured into it as part of the Times’ retaliatory measures against me.

This line sets off so many of my BS detectors. The journalist wasn't pressured; there's no campaign against him there. His ego is getting in the way of his supposed rationalist thinking.


I don't think that probability theory provides a useful framework for decision making.

Let's say I, a dictator, interrogate the casino manager on the suspicion that the dice are rigged. I roll 100 times and get all 6's. Do I kill the guy or not? Probability cannot help. I need to measure the dice weights myself.

As an insurer, should I insure a house for $1000? Again, probability cannot help when an earthquake demolishes the entire state and I have to write a check for every damn house I insure. I actually need to keep in the bank an amount equal to the price of each house.

I am now inclined to believe that probability is a nerdy way of lying to our educated selves.


> Again, probability cannot help when an earthquake demolishes the entire state and I have to write a check for every damn house I insure. I actually need to keep in the bank an amount equal to the price of each house.

I’m not sure what you mean. Insurance companies don’t hold liquid capital sufficient to cover 100% of their active policies paying out simultaneously.


See also: Reinsurance

> Reinsurance is insurance that an insurance company purchases from another insurance company to insulate itself (at least in part) from the risk of a major claims event.

https://en.wikipedia.org/wiki/Reinsurance


> As an insurer, should I insure a house for $1000? Again, probability cannot help when an earthquake demolishes the entire state and I have to write a check for every damn house I insure. I actually need to keep in the bank an amount equal to the price of each house.

Not really, no. Actuarial tables can be quite amazing when they're accurately applied. All you need to do is structure the policy so that it is unlikely to pay out in such a situation, or has a limited payout. That's easy to do: you just make a list of exclusions such as "Acts of God", war, terrorism, etc. So basically, any instance of mass destruction falls outside the coverage and you pay out $0.


Oh no, under my rule insurance companies don't get to cherry pick what they will cover. We have banned belief in acts of God.


Chances are there might not be many viable insurance companies in that scenario.


The probability of you getting 100 6's in a row on a fair die is dozens of orders of magnitude lower than the probability of your guy testing the die either getting it wrong or lying to you for any number of reasons.
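Rough numbers for that claim (the 1% figure for the tester being wrong is made up and deliberately conservative):

    from math import log10

    p_hundred_sixes = (1 / 6) ** 100   # fair die, 100 sixes in a row
    p_tester_wrong  = 0.01             # hypothetical chance the tester erred or lied

    print(f"P(100 sixes, fair die) ~ 1e{log10(p_hundred_sixes):.0f}")                    # ~1e-78
    print(f"orders of magnitude apart: {log10(p_tester_wrong / p_hundred_sixes):.0f}")   # ~76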

As for your second example, the problem is your own naive assumption that all the events are independent. It is an extremely popular naive assumption, and one I have ranted more than once about the school system accidentally teaching, but it is nevertheless not a problem with the concept of probability itself.
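A quick sketch of why the independence assumption matters (all numbers are made up): the same per-house claim probability, delivered independently versus as one region-wide earthquake, gives wildly different worst years even though the expected loss is identical.

    import numpy as np

    rng = np.random.default_rng(0)
    n_houses, n_years = 10_000, 100_000
    p_claim = 0.001   # 0.1% chance a given house has a claim in a given year

    # Independent claims: total payouts cluster tightly around the mean.
    independent = rng.binomial(n_houses, p_claim, size=n_years)

    # Correlated claims: same per-house probability, but as rare region-wide
    # events that hit every insured house at once.
    correlated = np.where(rng.random(n_years) < p_claim, n_houses, 0)

    for name, losses in (("independent", independent), ("correlated", correlated)):
        print(f"{name}: mean {losses.mean():.1f} houses/yr, worst year {losses.max()} houses")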

I think you have correctly identified that your understanding of probability is wafer-thin. That's actually progress, a good thing, and not an insult. But you've misattributed the problems to probability rather than your understanding.


Find me one person who you think has a good understanding of statistics and I will find you 10 who think he is a complete idiot.

Our community is great at insulting each other :)

Btw, the first example is a deterministic one. You can count molecule by molecule and save the casino owner.



