
Big data and big polling (Nate Silver) took a major credibility hit in this election. People will be studying this for years.

Of course, it's not that big data itself was to blame, just the interpretation of it. Nate, not sure what to tell you, but that was a significant miss in this election, going all the way back to the primaries.




FiveThirtyEight said the odds for Clinton were about 70-30 and trending narrower. Given how close the final result was, I wouldn't really call that a miss. The miss was the models that were calling it 90+% for Clinton.


And Silver had repeatedly stated that the reason the odds were higher for Trump in his models than in others was the much larger number of undecided voters than in past elections. It's somewhat apparent that a large chunk of "undecided" voters were actually Trump supporters, perhaps afraid of being labeled as "deplorable".

Maybe maligning (and I'm using the term loosely here) the opposition's voters results in statistically significant polling errors (echoes of Brexit). Maybe word clouds could be used to suggest a lean toward one side of the error? An embarrassment index of sorts? I dunno.

Obviously not really a win for Silver, but I'll give him credit for being more conservative in his estimates than other models. You work with the data you have.


Well, it simply comes down to the fact that they aren't much better than other groups, and it's most likely because they stopped being really objective. For those who have followed that site, it's not been fun to watch.


Not true. FiveThirtyEight said repeatedly that Clinton's lead was within a standard polling error, and hence that a Clinton defeat was about as likely as a Clinton landslide. They also pointed out, over and over again, that the uncertainty in this election was higher than, say, four years ago (Romney was given a 9% chance).

"To be honest, I’m kind of confused as to why people think it’s heretical for our model to give Trump a 1-in-3 chance — which does make him a fairly significant underdog, after all. There are a lot of ways to build models, and there are lots of factors that a model based on public polling, like ours, doesn’t consider.3 But the public polls — specifically including the highest-quality public polls — show a tight race in which turnout and late-deciding voters will determine the difference between a clear Clinton win, a narrow Clinton win and Trump finding his way to 270 electoral votes."

-- Nate Silver on Nov 6

http://fivethirtyeight.com/features/election-update-dont-ign...
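
A rough back-of-the-envelope version of that "within a standard polling error" point, using purely illustrative numbers (a ~3-point lead and a ~3-point effective polling error, not the model's actual inputs), might look like:

    import math

    def normal_cdf(x, mu=0.0, sigma=1.0):
        # Normal CDF via the error function (standard library only).
        return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

    # Illustrative assumptions, not FiveThirtyEight's actual figures:
    lead = 3.0           # leader's margin in points
    polling_error = 3.0  # effective polling error in points

    # Chance the true margin is <= 0, i.e. the "leader" actually loses.
    p_upset = normal_cdf(0.0, mu=lead, sigma=polling_error)

    # By symmetry, a landslide (margin of at least twice the lead) is
    # exactly as likely as an upset under this simple model.
    p_landslide = 1.0 - normal_cdf(2.0 * lead, mu=lead, sigma=polling_error)

    print(round(p_upset, 2), round(p_landslide, 2))  # both ~0.16

With a lead of roughly one polling error, an upset is no more remarkable than a landslide, which is the parent's point.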


Which is still inaccurate. Trump didn't narrowly squeeze by to 270; he's on track to clear 300.


The number of electoral college votes is not a good measure of how close an election is. For example, getting 50.5% of the vote in Florida rather than 49.5% swings the electoral college margin by 58 votes (its 29 electors go to you instead of your opponent).

A better measure of closeness might be how many votes would have to change to change the winner. I think by that measure this election is incredibly tight.
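
To make that measure concrete, here's a minimal sketch with hypothetical state margins and electoral-vote counts. Greedily flipping the winner's narrowest states only approximates the true minimum, but it shows the idea:

    # (state, winner's vote margin, electoral votes) -- made-up figures,
    # not real 2016 results.
    winner_states = [
        ("State A", 11000, 10),
        ("State B", 23000, 16),
        ("State C", 45000, 20),
        ("State D", 120000, 29),
    ]
    winner_ev = 306                # hypothetical winner's electoral-vote total
    ev_to_flip = winner_ev - 268   # enough to put the other side at 270

    flipped_ev = 0
    votes_to_change = 0
    for state, margin, ev in sorted(winner_states, key=lambda s: s[1]):
        if flipped_ev >= ev_to_flip:
            break
        flipped_ev += ev
        votes_to_change += margin // 2 + 1  # just over half the margin

    print(votes_to_change)  # minimum-ish voters who would need to switch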


Right, but the prediction isn't about the "closeness" of the election in some ideal sense, it's about the distribution of electoral votes. That's why the site is called fivethirtyeight.


Yeah. That number is what, like 200,000 at most?


The electoral college system means most elections are pretty close if you consider how many votes could theoretically flip the result. About 70,000 votes would flip Florida this time, for example, and about 35,000 votes would flip Pennsylvania.

Of course, nothing is likely to come close to 2000, where changing a mere 269 votes would have changed the outcome.


In what way do you feel their so-called lack of objectivity has leaked into their model?

Even ignoring the model, they basically called out this exact possibility at least twice in very recent articles: one saying that people were underestimating the likelihood of Trump winning the electoral college despite losing the popular vote, and another pointing out that Trump was within one standard polling error of winning the presidency and that people shouldn't ignore the error margins on the published polls.


On the contrary, FiveThirtyEight came off looking pretty good this year. Where other poll aggregators were giving Clinton very high chances, their models continued to maintain that Trump had something like a 1/3 chance. They also correctly identified the level of polling uncertainty in the Rust Belt, and how tight the race was in Florida (a state which continues to be the most important swing state). There is this ongoing narrative that the polling was wrong, when actually the results fit neatly into the level of uncertainty that the polls were suggesting. The same narrative had taken hold about Brexit polling, which is also wrong - the polls in the UK correctly indicated a close race.


Sure, the facts bear that out, but why let that complicate a good story? ;-)

I mean the public perception of data, analytics, pollsters, etc. is now being discussed in the press as something that blatantly misses the "human" element.


But the chances for a Trump win were still above 20%, if I remember correctly. So it's not really a big surprise that Trump won. Those are about the odds in Russian roulette: nobody would be surprised if someone got killed playing it, even though the most likely outcome is to survive.


Well, Nate did predict back in May that the Cubs would win the World Series and Trump would win the election.

https://twitter.com/NateSilver538/status/730251094614528000



