
There is some really interesting interplay here between forecasting and decision-making. (Taleb would have a lot to say here, along the lines of "forecasters are poor.") Maybe it makes sense that forecasts should be measured, but decisions should be, well, decisive.

A good Bayesian should be able to make confident decisions based on information available at the moment, while acknowledging that lack of information is leading to suboptimal decisions.

For example, a leader can be absolutely confident that shelter-in-place is the best decision based on the available information, while acknowledging that there is missing information that would drastically change this assessment.



> A good Bayesian should be able to make confident decisions based on information available at the moment...

No.

A good Bayesian should be able to come to decisions like, "I am 70% confident that Osama bin Laden is in that compound." Meanwhile the Bayesian next to them says, "I am only 50% confident that Osama bin Laden is in that compound." Both know that there is a difference of opinion, but there is no disagreement on basic facts or reasoning method.

It is very rare for a good Bayesian to be absolutely confident of any prediction. And if you are often so confident, you're probably not thinking very well. I mean that quite literally - the process of analyzing probabilities well requires being able to make the case both for what you think will happen and for what you think won't. Because only then can you start putting probabilities on the key assumptions.

> For example, a leader can be absolutely confident that shelter-in-place is the best decision based on the available information, while acknowledging that there is missing information that would drastically change this assessment.

Really?

https://news.ycombinator.com/item?id=22750790 is a discussion that I was in recently about whether, on a cost-benefit analysis, it is better to crash the economy by shutting things down or to keep things open and let lots of people die.

The decision wasn't nearly as clear in the end as I would have expected it to be. (That all options are horrible was clear. But we knew that.)


> No. A good Bayesian should be able to come to decisions like, "I am 70% confident that Osama bin Laden is in that compound."

That's not a decision though. That's an assessment, which I would put in the same category as predictions. A decision would be whether or not to bomb the compound.

Your post seems to miss my point, that predictions and decisions are very different and one can be uncertain about predictions while being certain about decisions. For example, I completely agree with this:

> It is very rare for a good Bayesian to be absolutely confident of any prediction.

Here's a simple example to think through the difference. You have a sophisticated weather model that predicts 40% chance of rain today. You hate getting wet, so you take your umbrella. In fact, you would take your umbrella even if the chance were only 10%.

So you are really uncertain about whether it's going to rain (your forecast), but absolutely certain that taking your umbrella is the optimal decision given the information at hand.
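
To make that concrete, here's a minimal sketch of the threshold at work. The utility numbers are made up purely for illustration; the point is only that a very uncertain forecast can still pin down the decision.

    # Minimal sketch of the umbrella decision (payoffs are hypothetical).
    # The forecast is uncertain, but the decision need not be.

    def expected_utility(p_rain, take_umbrella):
        if take_umbrella:
            return -1.0            # small fixed nuisance of carrying the umbrella
        return p_rain * -20.0      # large cost of getting soaked, zero cost if it stays dry

    for p in (0.10, 0.40, 0.90):
        take_it = expected_utility(p, True) > expected_utility(p, False)
        print(f"P(rain) = {p:.0%}: take umbrella? {take_it}")

    # With these payoffs the umbrella wins for any P(rain) above 5%,
    # so 10% vs 40% changes the forecast but not the choice.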


You are right that I had not paid close attention to decisions.

I see no particular reason why Bayesians should be better at being decisive. They should make better judgments given the available information. But they are not necessarily any better at making decisions and moving on.


Expanding on this, the skill of figuring out the odds of bin Laden being in the compound is unrelated to the skill of figuring out how to handle both outcomes, and whether that is a worthwhile risk to take.

So a good Bayesian can inform a good decision maker, but the Bayesian is not necessarily a good decision maker.

Similarly, in the book one of the superforecasters made the point that listening to well-informed experts who might be bad at making decisions was very useful. Because the expert really did have a good grasp of the current situation and could explain it clearly, which was a great starting place for the Bayesian who lacked background. Preparing background and making predictions are both required, but the combination of skills need not reside in the same brain.


Fundamentally, decision-making is what predictions are for. We mainly care about information to guide our actions. There are some interesting implications of this for how we should do research.

https://www.gwern.net/Research-criticism#beliefs-are-for-act...

A recent example: people have been talking about clinical trials for coronavirus vaccine candidates. In those you want to minimize the bad things that happen to the people in the study, and also get a working vaccine rolled out to the world as quickly as possible. Therefore you might want to accept unusually low levels of certainty that the vaccine is safe, or ramp up trial size faster than usual, because the world is on fire and every day of delay is terrible. For other vaccines with smaller expected benefits, slow-and-cautious might be the way to go. In both cases it's a matter of balancing expected risks with rewards as your probability estimates change over time.
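
A toy way to see that balance (every number below is hypothetical; it only shows the shape of the tradeoff, not any real trial design):

    # Toy expected-cost comparison of trial speed (all numbers hypothetical).
    def expected_cost(days_of_delay, p_unsafe, harm_if_unsafe, cost_per_day_of_delay):
        return days_of_delay * cost_per_day_of_delay + p_unsafe * harm_if_unsafe

    # "Fast" trial: less delay, but more residual safety uncertainty.
    fast = expected_cost(days_of_delay=60,  p_unsafe=0.05, harm_if_unsafe=1e5, cost_per_day_of_delay=100)
    slow = expected_cost(days_of_delay=300, p_unsafe=0.01, harm_if_unsafe=1e5, cost_per_day_of_delay=100)

    print(f"fast trial expected cost: {fast:,.0f}")   # 11,000
    print(f"slow trial expected cost: {slow:,.0f}")   # 31,000

    # When each day of delay is very costly (pandemic), the fast trial wins.
    # Cut cost_per_day_of_delay to 10 (an ordinary vaccine) and the cautious
    # trial wins instead: 5,600 vs 4,000.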


Yes, exactly. For example, I would think there would be immense value right now in sampling totally random subsets of the population and testing whether they've had COVID-19 in the past. This is the kind of information that could radically change decisions about when to release stay-at-home orders.


I disagree on that.

The purposes of sampling are to find out how deadly the disease is, and to find out if herd immunity exists. But we have good evidence that it is likely to be deadly enough to justify stay-at-home orders while community spread exists. And we have strong circumstantial evidence that only a small fraction of the population has had it.

Therefore breaking quarantine for a random subset of the population is unlikely to change actionable decisions. But it is likely to spread the disease. I would love to know the answer to the question raised. But it isn't worth human lives to answer it sooner than it will otherwise be answered.
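
In decision-theory terms: if every answer the sample could plausibly return leads to the same action, the information has no value for this particular decision, whatever its scientific value. A sketch, using a hypothetical reopen threshold and the published 0.4%-1.4% infection fatality estimates discussed further down the thread:

    # Toy value-of-information check (the 0.1% reopen threshold is hypothetical).
    def best_action(infection_fatality_rate):
        # Keep stay-at-home orders unless the disease turns out to be
        # roughly as mild as seasonal flu.
        return "stay_home" if infection_fatality_rate > 0.001 else "reopen"

    plausible_ifr = [0.004, 0.0066, 0.014]    # published estimates: low / central / high
    actions = {best_action(ifr) for ifr in plausible_ifr}

    print(actions)                # {'stay_home'}
    print(len(actions) == 1)      # True: no survey result in this range changes the action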


> But we have good evidence that it is likely to be deadly enough to justify stay-at-home orders while community spread exists.

I don't think we agree on that, and it's relative. Not all people agree on stay-at-home/confinement orders. I personally see them as an authoritarian measure and I think everyone should judge for himself whether he wants to expose himself to COVID-19 risks or not.

As to the point of determining if COVID-19 is deadly enough or not, I don't see how we can do that without sampling the population. It's not clear, right now, whether the only cases are the ones that have been diagnosed or whether 50% of society has had it.


> I don't think we agree on that, and it's relative. Not all people agree on stay-at-home/confinement orders. I personally see them as an authoritarian measure and I think everyone should judge for himself whether he wants to expose himself to COVID-19 risks or not.

It is in the nature of public health that "everyone should judge for himself" guarantees epidemics. Because like it or not, the choices that you make for yourself affect me. You may decide that you'll survive so you don't alter your behavior. But that spreads the disease and makes it more likely that my immunocompromised sister dies.

The result is that public health provides the most clear-cut cases where we have to choose between individual rights and the public good. But we are loath to make that choice. Therefore it presents us with a series of easily debated moral quandaries.

See https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2267241/ for some of the relevant history.

> As to the point of determining if COVID-19 is deadly enough or not, I don't see how we can do that without sampling the population. It's not clear, right now, whether the only cases are the ones that have been diagnosed or whether 50% of society has had it.

Both extreme statements are exceedingly unlikely.

I had based my comment on published articles estimating an infection fatality rate of 0.4%-1.4%, with a best estimate around 0.66%. But the full story is complicated. Work your way through https://www.cebm.net/covid-19/global-covid-19-case-fatality-... to understand the current data, estimates, limitations of various research and so on. It is... messy.



