Hacker News

I don't think this article provides strong evidence that they are well calibrated on the presidential election specifically (sample size N=3), or that they are correctly accounting for rare black swan events, but it does seem to imply that the criticisms about "538 claims victory no matter what because they always have non-zero probabilities" are oversimplified.



2016 wasn't a black swan event. It was a polling error, and polling errors do happen, if rarely. It was not unforeseeable, and 538 included a probability of it happening, which is why they gave Trump higher chances than most others did.

And 538 does backtest its model on elections going back to 1972. That's not particularly trustworthy, since it invites overfitting, but internally they do have a bit more than N=3 to work from.


(I have fairly minor quibbles with some of Nate's modeling ideas, but I broadly mean to be defending him).

I don't mean to imply 2016 was a black swan event. I agree that ~30% was probably as accurate a take as could be achieved (most evidence that seemed reasonable to use indicated a lead for Clinton, but it wouldn't have been that surprising for that lead to be overcome). I just mean that the model assumes a fairly normal election environment, without, say, a huge attack or something similar on Election Day.

The N=3 comment was meant specifically for evaluating their calibration, not the data they use for their model.
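To make the N=3 point concrete, here's a minimal sketch (all numbers hypothetical, not 538's actual forecasts) of why three presidential outcomes carry almost no power to distinguish a calibrated forecaster from a miscalibrated one:

```python
# Hypothetical illustration: suppose a model gave its favorite ~70%
# in each of 3 presidential races, and the favorite won all 3.
# Probability of that record if the 70% forecasts were honest:
p_calibrated = 0.7 ** 3      # = 0.343

# Probability of the same record if the true chance was really 90%
# (i.e., the model was badly underconfident):
p_miscalibrated = 0.9 ** 3   # = 0.729

# The likelihood ratio is barely above 2:1 -- far too weak to
# declare either model right or wrong from outcomes alone.
ratio = p_miscalibrated / p_calibrated
print(round(ratio, 2))
```

With hundreds of forecasts (as in 538's down-ballot calibration plots), the same comparison becomes sharp; with N=3 it tells you essentially nothing.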



