
FYI I edited my post to mention the issue about the coins (your first point) shortly after submitting it. I'm guessing you read the non-edited version.

> Proper Bayesian result reporting doesn't say "We believe that the coin is biased". We would rather say "The probability that this coin is biased is 60%, subject to our assumptions and model".

I'm not really sure what you're getting at here. None of the coins are biased, by premise, so they shouldn't be concluding either thing.

If you throw in "if our model and assumptions are right" then you can shift the blame (if they assumed their stopping rule was OK, or came up with a model that says it's OK). But I'm not sure how that substantively helps.

Will check back tomorrow for further comments from you.




xenophanes,

I've been thinking about the problem a lot today. I'm pretty sure my point is basically right (if the model is correct), but my ideas aren't clear enough yet to explain it properly. Model correctness in Bayesian statistics is a complicated problem, and as far as I can tell, it's not a completely solved one. Bayesians usually agree about their calculations, but there's heavy debate about the "philosophy".

In any case, maybe you'll find Eliezer's other post insightful:

http://lesswrong.com/lw/1gc/frequentist_statistics_are_frequ...

I really hope to figure out model correctness, and this optional stopping problem looks like a good vector of attack.

Thank you for the discussion, and sorry for leaving you hanging!

Cedric

(if there's any Bayesian out there willing to continue the discussion, my email is in my profile)


I was thinking it through more, and I think the stopping procedures that might not halt are the problem. You can have a data-dependent stopping procedure as long as it's guaranteed to halt, which ensures all the data does get counted.

For example, here's one that might seem bad, but does halt and turns out to be OK:

Flip a coin until you have more heads than tails OR reach 500 flips.

This procedure will produce a majority of trials with more heads than tails, but I think the average over many trials will still be 50/50. The conceptual reason is that the flips you skip by stopping early would, on average, have contained just as many heads as tails. I haven't formally proved this, but I did run a simulation of a million trials with that stopping procedure and got a ratio of 1.0004 heads per tails, which seems fine (and after some reruns I saw a result under 1, so that is possible). Code here:

http://pastebin.com/H42qHYbA
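Roughly, the simulation does something like this (a sketch, not necessarily the exact pastebin code; the fair coin, the 500-flip cap, and the million trials come from the description above):

    # Sketch of the simulation described above (not necessarily the pastebin code).
    # Stopping rule: flip a fair coin until heads > tails, or until 500 flips.
    import random

    def run_trial(max_flips=500):
        heads = tails = 0
        while heads <= tails and heads + tails < max_flips:
            if random.random() < 0.5:
                heads += 1
            else:
                tails += 1
        return heads, tails

    total_heads = total_tails = 0
    for _ in range(1000000):
        h, t = run_trial()
        total_heads += h
        total_tails += t

    # The ratio should hover around 1.0 despite the data-dependent stopping rule.
    print("heads/tails ratio:", total_heads / total_tails)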

With a guaranteed halt, a sequence of 500 tails and 0 heads can be counted. With no guaranteed halt, a tails-heavy sequence can never be counted, which is not OK because it amounts to ignoring data people don't like.

Does that make sense? I think it may satisfy the stuff you/Bayesians/Eliezer are concerned with. It means it's OK to stop collecting data early if you want, but you do need some rules to make sure all your results are reported without selectivity.

There's also a further issue: these kinds of stopping procedures are not a very good idea in practice. While they are OK with unlimited data, they can be misleading with small data sets. It's like the guy who bets a dollar, and if he loses he bets two dollars, and if he loses again he bets four dollars (repeated up to a maximum bet of 1024 dollars). His long-run expectation value is not changed by his behavior, but he does affect his short-term odds: he's creating an above-50% chance of a small win and an under-50% chance of a larger loss. If you only do 10 trials of this betting system, they might all come out wins, and you've raised the odds of getting that result despite leaving the long-term expectation value alone. Doing essentially the same thing with scientific data is unwise.
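To make that concrete, here's a small sketch of that doubling betting system (assuming fair even-money bets; the exact payoff details are just illustrative):

    # Sketch of the doubling betting system described above.
    # Assumptions: fair even-money bets, double after each loss, max single bet of $1024.
    import random

    def betting_session(max_bet=1024):
        # One session: keep doubling after losses until a win or the cap is hit.
        bet = 1
        net = 0
        while True:
            if random.random() < 0.5:   # win: recover all losses plus $1
                return net + bet
            net -= bet                   # loss
            if bet >= max_bet:           # cap reached: walk away with the loss
                return net
            bet *= 2

    wins = 0
    total = 0
    sessions = 100000
    for _ in range(sessions):
        outcome = betting_session()
        total += outcome
        if outcome > 0:
            wins += 1

    print("fraction of winning sessions:", wins / sessions)  # well above 50%
    print("average outcome per session:", total / sessions)  # close to 0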

BTW/FYI, I believe I have no objections to the Bayesian approach to probability, but I do think the attempt to make it into an epistemology is mistaken (e.g. because it cannot address purely philosophical issues where there's no data to use, so it fails to solve the general problem in epistemology of how knowledge of all types is created).



