Huffington Post's criticism of 538's election forecast (huffingtonpost.com)
9 points by jmount on Nov 7, 2016 | 8 comments



""" As a financial analyst at an investment bank, or a research analyst at an economic consulting firm, your job would be in serious jeopardy if you produced 538’s model output without a clear explanation of how those fat tails that represent an inordinate number of close to impossible scenarios could actually occur. A model like that just isn’t client-ready. Time to re-think those assumptions! """

This line of reasoning irks me -- if the outputs don't agree with my preconceived conclusions, the model must be wrong. (Rather than: maybe my preconceived conclusions are wrong.)

I don't have an opinion on the 538 model, not having analyzed the internals. But I don't like the idea of criticizing a model because you don't agree with its results.

On the bright side, the election will shortly be over and we'll have at least some measure of how accurate each model (538, HuffPo, etc.) actually was.


There are cynical observations to be made about kurtosis and the professional incentive structures for analysts and economists employed by investment firms.

However, if his model performs with the precision it showed last time, then the confidence intervals might be criticized as too wide.


Why use a Student's t-distribution? Why not ask Nate Silver:

"This mostly makes a difference for very low-probability events. For example, for an event that a normal distribution regarded as a 1-in-1,000 chance, our t-distribution would assign odds of 1-in-180 instead, making it about six times as likely. A t-distribution is appropriate in cases like presidential elections where you have small sample sizes."

http://fivethirtyeight.com/features/election-update-why-our-...
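
As a rough sanity check on the tail comparison Silver describes, here is a minimal sketch using scipy (the degrees of freedom below are a hypothetical choice for illustration; the excerpt doesn't say what 538 actually uses):

    # Compare one-tailed probabilities under a normal vs. a Student's t-distribution.
    # NOTE: df is a hypothetical value; 538's actual parameterization isn't stated above.
    from scipy.stats import norm, t

    z = norm.ppf(1 - 1 / 1000)   # threshold the normal calls a 1-in-1,000 event (~3.09 sigma)
    df = 10                      # hypothetical degrees of freedom

    p_normal = norm.sf(z)        # 0.001 by construction
    p_t = t.sf(z, df)            # fatter tails -> larger tail probability

    print(f"normal: 1-in-{1 / p_normal:.0f}")
    print(f"t (df={df}): 1-in-{1 / p_t:.0f}")
    print(f"ratio: {p_t / p_normal:.1f}x more likely under the t-distribution")

With df around 10 the ratio lands in roughly the same ballpark as the quoted 1-in-180 vs. 1-in-1,000 figure, though the exact number depends on the df you pick.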

Anecdotally, the write-ups at fivethirtyeight about the uncertainty in the data this year, and its effect on the forecasts, have been great reading. A few minor polling errors in a few key states could have big implications for the result. The fivethirtyeight model shows this; many others do not.


Yeah, not buying it. Nate Silver has been defending his updates far more objectively than HuffPo ever has.


I think Huffington Post is both invested in the outcome (wants one candidate to win) and feels they have to explain why their forecast differs from one as popular as 538's. All the same I found the article interesting.


They loved it until Hillary started losing.


Probably. They're right that 538 has been all over the map this election cycle, though. 538 never thought Trump would win the nomination, or that Sanders would surpass 15%, for instance.

I think part of the problem is that polls are inherently flawed. They can't really account for things like activism or excitement. For example, a poll may ask me "who will you vote for tomorrow? Trump or Clinton?" and I'll say, "Ugh, Clinton, I guess." And then I won't show up to vote because I don't like Clinton and my favorite TV show just appeared on Netflix.

The other part is that 538 may try to account for all of these extra factors in its model, but still do it poorly and fail big time.

This sort of thing gets worse the more "unpredictable" and crazy the race is, which is why 538 failed to predict Trump's and Sanders' rise. When the race is more predictable, everything is accounted for, and everyone is where they should be, that's when 538 seems like Nostradamus and gets 99% of it right.


He got famous for predicting tomorrow would be like today.

If you had said "no change" when he got 50/50, you would have gotten 48/50.



