Presidential Plinko (presidential-plinko.com)
92 points by UncleOxidant on Oct 6, 2020 | 71 comments



This is a pretty good idea! The 538 people have mentioned before that they've had trouble with people misunderstanding probability (many people, apparently, believe that things which have a 20% chance of happening will never happen and those with an 80% chance will always happen). Their current solution to this problem appears to be some sort of cartoon fox, but I think this illustration works better.


> many people, apparently, believe that things which have a 20% chance of happening will never happen and those with an 80% chance will always happen

It could be the counter-intuitive nature of probability, but I wonder if it might sometimes just be that, psychologically, people have trouble accepting the idea that nobody really knows.

You look at a site like FiveThirtyEight, you know the site is run by an expert, and you know that multiple sources of data have been fed into number-crunching computers. You've seen other situations where this type of process nails it. You have a strong desire to know the answer, and that biases you toward believing that you do. The idea that something so important is unknown is unpleasant, so you reject it.

About the difficulty of grasping the basic concept: if you've ever seen the TV show Card Sharks (https://en.wikipedia.org/wiki/Card_Sharks), a key part of the game is that contestants see a card which is face up, and they have to make bets on whether the next card drawn is higher or lower. In that context, they have no trouble understanding that if the face-up card is a 5, the next card is probably higher (6, 7, 8, 9, 10, J, Q, K, A) but it could also be lower (2, 3, 4).
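
For the curious, the arithmetic checks out. A quick sketch (counting ranks only; this simplification ignores suits and the possibility of a tie):

    from fractions import Fraction

    # Ranks 2..14, with J=11, Q=12, K=13, A=14; the face-up card is a 5.
    ranks = range(2, 15)
    face_up = 5

    higher = sum(1 for r in ranks if r > face_up)  # 9 ranks: 6 through A
    lower = sum(1 for r in ranks if r < face_up)   # 3 ranks: 2, 3, 4

    print(Fraction(higher, higher + lower))  # 3/4: "higher" is the smart bet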

Essentially, I think people can understand that unlikely things happen, but where they have difficulty is seeing/admitting that this is one of those situations.


> if you've ever seen the TV show Card Sharks...

I developed and taught an entire undergraduate math class on the mathematics of game shows.

https://people.math.sc.edu/thornef/schc212/gameshows-428.pdf

For much of the class, I worked out probabilities like these and compared what contestants "should" do to what they actually do.

Watching reruns of Card Sharks, The Price Is Right, and others, I've found that most people have a surprisingly good intuitive sense of probability. Not always, but more often than not.


Except for that damn Monty Hall problem!


I'm not sure it's that simple, because even _after the fact_ people were talking about how 538 "said Clinton would win". They're doing so in this comments section, in fact.

I also seem to remember seeing a study that people made better probability-based decisions if given probabilities phrased as fractions than percentages; I wonder if something similar is going on with the cards example.


I think what people sense -- and this is related to the "Bayesian vs. frequentist" debate -- is that an election is not really a random event. The source of uncertainty is different.

The Card Sharks example involves a genuinely random event. If you draw a random card, it will probably be higher than 5, but it could be lower.

Whereas an election is not truly random. By election day the outcome is essentially foreordained. The "randomness" that 538 is trying to model comes from things like uncertainty and unreliability in the polls. What is the relationship between what voters said they would do and what they actually will do? You can make inferences based on the past, but every election is different, and the mood of the electorate is definitely unique this year. Nate Silver knows from his experience to hedge his bets, as he did in 2016.

It's much more complicated and subtle than a genuinely random event. I find it understandable that people get confused.


Another area of complexity in presidential elections is the electoral college. People see the headline polling numbers and assess them as if the election were a straight popular vote with a simple outcome, when in reality the end result is highly dependent on where votes are cast.


The first non-news graphic on Five Thirty Eight is literally a bunch of balls, colored red and blue. Short of actually including this type of auto-playing animation (which is perhaps the solution, but has its own issues), I'm not sure what else they could do.

People are bad at dealing with uncertainty. And also don't read.


I feel like the plinko type animation might be better, as it makes it really, really clear that, yes, the improbable thing can happen.


> The first non-news graphic on Five Thirty Eight is literally a bunch of balls, colored red and blue.

No, it is literally a set of maps of different outcomes of simulations representing (in a reduced way, because there's only room for 22 maps, at least on desktop; I think they have fewer on mobile) the range and frequency of simulation results.

The balls come after that.


I think the cartoon fox was just because they needed a graphic. The real way they are combating people's inability to interpret percentages is by moving the percentages lower down the page and replacing the top-line prediction with the words "slightly favored", "favored", or "strongly favored".


Well, the cartoon fox also provides narration. But yeah, the rearrangement of the page and better infographics is probably the biggest change.

I'd be kind of curious to see if they have any evidence that it actually helps. It seems like it... might? But it's always hard to tell with these things.


Probably the best way to figure it out is to analyze reddit/hn for comments like "538 says Biden will win!" vs "538 says Biden might win, go vote".


There is, as the statisticians would say, a confounding variable: we all lived through the 2016 election.


The crazy thing, though, is that statistics also contributed to the loss. There was a campaign to discredit Clinton, so many Democrats weren't happy with her. Seeing statistics in her favor, many Democrats decided to either stay home or vote third party. This also gave extra opportunity to tilt the scales in 3 important states and win the entire election.

Moral of the story: one should never treat their candidate's lead in the polls as an excuse not to vote.


Apart from XCOM players, to whom 90% of something working means almost certain disaster.


Some games actually do use fake random number generators that behave more like people intuitively expect randomness to behave; if there's supposedly a 5% chance of something happening, it will NEVER happen twice in a row, and so on.
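
A minimal sketch of one such scheme, the "pseudo-random distribution" reportedly used in some games (the increment constant below is illustrative, not taken from any actual title): the per-attempt chance starts low, climbs after every miss, and resets on a hit, which clips streaks in both directions.

    import random

    def prd_trial(state, increment=0.05):
        # The effective chance grows by `increment` after each failure and
        # resets to zero on success, so both long droughts and back-to-back
        # hits of a nominally rare event become far less likely than with
        # a fair, memoryless roll.
        state["chance"] += increment
        if random.random() < state["chance"]:
            state["chance"] = 0.0
            return True
        return False

    state = {"chance": 0.0}
    hits = sum(prd_trial(state) for _ in range(100_000))
    print(hits / 100_000)  # the long-run hit rate implied by this increment

Real games tune the increment so the long-run rate matches the advertised probability; 0.05 here is just a placeholder.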


I'm not convinced people actually have trouble understanding probability. I think it's more likely that some bad-faith conservatives said "look at how stupid the liberal media is, they said Hillary couldn't lose" and people believed that without checking what 538 actually said.


The cartoon fox is really infantilizing. I know probability is tricky and sometimes counterintuitive, and I appreciate being taught things, but don't treat me like I'm in kindergarten.


> The cartoon fox is really infantilizing. I know probability is tricky and sometimes counterintuitive, and I appreciate being taught things, but don't treat me like I'm in kindergarten.

Have you considered that you are quite possibly both more educated and more intelligent than the people whose experience the cartoon fox is designed to improve? Sure, I don't need it, but I haven't even noticed it since the first time I went to the forecast page this cycle.


I think this says a lot more about you than it does about the cartoon fox!


I have to imagine it would piss Nate Silver off to experience 4 years of continual mockery for "getting the election completely wrong" when he in fact got it right.

Anyway, I agree the cartoon fox seemed very condescending, and I wonder whether there wasn't some real frustration with the public behind it.


This is fantastically awesome!

What people often miss is that a statement like "X wins with 75% probability" means that X's supporters have a very solid chance of losing. The odds (and risks) in the long run aren't the same when there is no long run, just one experiment.

My recommendation would be to change the script so that "dropping all balls" stops after one run, and the button "drop one ball" gets highlighted after that, encouraging the user to click it.

Because it is then that you get to feel how real the chances of a ball ending up on the right side are, even if most balls end up on the blue side.

We all only get one drop.

----------

Personal plug: my 3D-printed Galton/Plinko board[1].

If anyone's interested, I can share the STL file or OpenSCAD code.

[1] https://i.imgur.com/O4ZqOVR.jpeg


I think it would work better if it actually showed one ball dropping at a time, instead of just quickly generating the same bar/scatter graphs that the original sites show.

The single ball drop, with slower motion, gives you a chance to feel the uncertainty.


270 To Win has a simulation that fills in the electoral map one state at a time: https://www.270towin.com/2020-simulation/

They even have a mode that fills in the states in roughly the order of poll closing times, which helps to get your head around how election night is likely to go. (Except it'll probably be election week and poll closing times will matter less than how fast various states count mail-in votes.)


There's an option to do that a little lower down


This, but I wish it defaulted to that instead of starting with all the balls at once.


Indeed! The multiple balls may be thought of as ballots/points, leading to the wrong intuition that “more balls on one side, that side wins”. But that’s exactly the misinterpretation of probabilities as multiple trials that such demos are trying to fight.

Perhaps, whenever many balls are shown at once, all but one should be faint/outlined or even X’d out?


Thanks for making this. It’s silly and fun and a bit educational. Plinko was the best part of being sick as a kid.


This is awful because the balls are not being dropped in the center. Instead, the color of the end slots should change based on probabilities. The skewing of the entry point implies a bias in prior distribution, which is not the right message.


> the balls are not being dropped in the center

They are being dropped from the center of the probability distribution, not the center of the electoral vote threshold (which would be nonsense).

> skewing of the entry point

Skewing from what?

> bias in prior distribution

That's the point!

> color of the end slots should change based on probabilities

The height of the bar changes based on the probabilities.


Right. I think a more layperson-interpretable visualization would be to have what appears to be a "fair" (centered) sampling, and show the results based on the likelihood of landing in the Trump/Biden slots.

A biased sampling with uniform slots at the bottom shows the same thing, yes.


I believe the idea is:

1. We’re dropping the ball at the position the polls (and our models) indicate.

2. The balls may end up in a different place due to any number of unforeseen factors (the pegs).
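
That mental model is easy to simulate. A toy sketch (the 2-peg offset and 11 rows are invented numbers, not the site's actual calibration):

    import random

    def drop_ball(offset=2, rows=11):
        # Start `offset` pegs toward the favorite, then bounce left or
        # right with equal probability at each of `rows` peg rows.
        pos = offset
        for _ in range(rows):
            pos += random.choice((-1, 1))
        return pos

    drops = [drop_ball() for _ in range(10_000)]
    upsets = sum(1 for pos in drops if pos < 0)
    print(upsets / len(drops))  # ~0.27: the "unlikely" side still collects
                                # roughly a quarter of the balls

Shifting the drop point encodes the forecast mean; the pegs supply the symmetric noise.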


I wouldn't straight up call this 'awful,' but I do agree that probability should be expressed by the color of the end slots.


Why would this make any sense?

The probability is expressed by the number of times a ball ends up in each of the end slots. That's the whole point!


Looks like we've not learned anything from 2016.


That's what I came in to say. If it happens again, I think we can conclude that at this point the people voting and the people being polled (say, on the internet, via random calls, etc.) are not the same people.


On a national level, the polls really did, well, not amazingly, but fine last time. 538's final polling average was Clinton +3.8; the actual result was Clinton +2.1. That's not at all bad; you wouldn't call that an upset.

The real problem was a lack of high-quality state-level polls and, in one or two cases, major misses in what state polls there were; there just wasn't much visibility into many states.

And ultimately, the final result was so close that it was below the margin of error of any reasonable poll. Even with the highest-quality polls conducted in every state, every day, the best they'd have been able to say, in retrospect, would have been pretty much "It's 50/50".
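
For a sense of scale, a back-of-the-envelope sketch (n = 1,000 is a typical poll size, not a figure from the thread): the 95% margin of error on one candidate's share is about 1.96 * sqrt(p(1-p)/n), and the margin on a lead (the difference between two shares) is roughly twice that.

    import math

    def margin_of_error(p=0.5, n=1000, z=1.96):
        # 95% margin of error for a single candidate's polled share.
        return z * math.sqrt(p * (1 - p) / n)

    moe = margin_of_error()
    print(f"share: +/-{moe:.1%}, lead: roughly +/-{2 * moe:.1%}")
    # share: +/-3.1%, lead: roughly +/-6.2%. A state decided by well under
    # a point is far inside that band, hence "It's 50/50".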


And the state level polling errors were highly correlated, which 538 called out in their pre-election coverage, but people ignored.


> The real problem was a lack of high-quality state-level polls

That wasn't even really a problem; the error in state polling was pretty consistent with the error in national polling. The problem with predictors other than 538 is that they treated state variations from polling averages as independent; 538 correctly assessed them (based on past evidence) as highly correlated, which is why Trump had nearly a 1-in-3 chance in 538's forecast.

The problem is people taking (honestly or just for the purpose of after-the-fact criticism) "1 in 3" to mean "absolutely won't happen".
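
A toy illustration of why that correlation matters so much (three hypothetical swing states; every number here is invented for the sketch): with independent errors the underdog needs three separate upsets, but a shared national error moves all three states at once.

    import numpy as np

    rng = np.random.default_rng(0)
    n_sims = 100_000
    lead = 3.0   # favorite's polling lead, in points, in each state
    sigma = 4.0  # total polling error per state, in points

    # Independent model: each state's error is drawn separately.
    indep = lead + rng.normal(0, sigma, size=(n_sims, 3))

    # Correlated model: the same total variance, split into a shared
    # national component plus a smaller state-specific component.
    national = rng.normal(0, 3.0, size=(n_sims, 1))
    state = rng.normal(0, np.sqrt(sigma**2 - 3.0**2), size=(n_sims, 3))
    corr = lead + national + state

    # The underdog "wins" only by flipping all three states.
    print("independent:", (indep < 0).all(axis=1).mean())
    print("correlated: ", (corr < 0).all(axis=1).mean())

From the same state-by-state numbers, the correlated model gives the underdog several times the chance.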


Let's say the odds stay where they are, and Trump wins again.

That would be hitting on a 30% chance in 2016 and hitting on a 20% chance in 2020. That's a 6% chance overall.

6% likely outcomes happen all the time. It would be a textbook example of "resulting" to draw a conclusion based on this.


I am not saying that. I'm saying that the polling data is so flawed, because it isn't reaching a specific demographic, that it's off by a much wider margin. So much so that if Trump wins again, it's because he had a 40% chance of winning both 2016 and 2020; we just don't have the polling data to prove that number.

So while you say 6% "happens all the time": no, it actually happens 6% of the time. But 40% for both terms would indicate something more like a 70% chance last time and maybe another 60% this time. The pollsters can be correct for fucking California, but what good is that if they're super wrong because racist people in Alabama don't click, don't pick up the phone (or have no phone to begin with), or won't talk to pollsters?


You would need more evidence to determine that. 2016 + 2020 isn't enough -- you'd need to actually show that this is happening, rather than using 2 results to back into your belief.

To clarify, my intention was to state that "outcomes with 6% odds will occur very frequently" in absolute terms, not relative ones. A good hitter in baseball hits homers about 6% of the time. If they hit homers in 2 at-bats, we would not have enough information to say much about their true talent.


[deleted]


> If the polls didn't match the votes in 2016, they probably won't again in 2020, for at least some of the same underlying reasons.

That's true only if you assume perfectly spherical pollsters :) After a real or perceived polling miss (in reality, most polls weren't off by all that much in 2016), pollsters tend to change assumptions, sometimes overcompensating. Notably, after a moderate polling miss in the 2015 elections, which predicted a hung parliament where in fact the Tories managed a small majority, UK pollsters went on to overcompensate in the 2017 election, showing a Tory blow-out win whereas in fact they ended up with a hung parliament.

(That one, incidentally, shows one danger of polls! The first poll allowed Cameron to safely promise a vote on the EU, because he assumed they'd be either out of government or in coalition with the libdems, who wouldn't allow it. In fact, they won and were forced to go through with it. The second one allowed Theresa May to call a snap election to gain seats so as to have enough spare MPs to be able to ignore the ERG and negotiate a semi-sensible Brexit. Instead, they lost seats, had to deal with the ERG and Unionists, and ultimately ended up with the current catastrophe. The current Brexit mess is at least in part due to polling misses.)

In the US, pollsters have started to pay a lot more attention to education, which was a major predictor in the 2016 election.


You're being downvoted but you're right.

I'm starting to believe that sites like 538 feed off of the general public not understanding that a 20% chance to win is 1 in 5. Endless ink has been spilled about how the public does not understand statistics but very little effort has been made to communicate effectively.

The whole thing seems like a navel-gazing sideshow where the pollsters want to have it both ways. They want to claim their predictions are infallible, but then, when the public says they failed, they want to backpedal with holier-than-thou "well, actually..." excuses.


> Endless ink has been spilled about how the public does not understand statistics but very little effort has been made to communicate effectively.

How, then, should statistics be appropriately communicated?

> They want to claim their predictions are infallible but then when the public says they failed they want to backpedal with holier than thou "well, actually..." excuses.

Do pollsters generally try to claim their numbers are infallible? If anything, the fact that margins of error are included in the results would seem to imply the opposite.


I don't buy that they're trying to communicate in good faith. I remember very clearly sources like this* in 2016. There was so much hubris and so much minimizing of Trump that his having a 30% chance to win read like an irrelevant footnote to Hillary's coming coronation.

*https://pbs.twimg.com/media/Cwc-j6YXUAEIMaM.jpg

The pollsters take too much credit when they're right and deflect too much blame when they're wrong. They need to do a better job of being humble and stop trying to pass themselves off as apolitical number crunchers just giving us the facts.


> The pollsters take too much credit when they're right and deflect too much blame when they're wrong.

You are confusing pollsters with "forecasters using data from pollsters". These are not the same people.

Many of the forecasters in 2016 were very bad because of naive assumptions about how polls related to election results, particularly the assumption that polling errors were independent between states (and lots did a really poor job of poll aggregation even before applying those naive models). The reason is that Nate Silver and 538 had made a name for themselves in the preceding couple of cycles, and lots of people who didn't understand the process but saw the outcome decided they could do the same thing, because, hey, how hard could it be to compile a bunch of polls and model outcomes based on them?


Read the 538 forecast on 2016 election night: https://fivethirtyeight.com/features/final-election-update-t...

In particular look at this (which is almost exactly what happened):

But if there’s a 3-point error against Clinton? That would still leave her with a narrow lead over Trump in the popular vote — by about the margin by which Gore beat Bush in 2000. But New Hampshire, which is currently the tipping-point state, would be exactly tied. Meanwhile, Clinton’s projected margin in Michigan, Pennsylvania and Colorado would shrink to about 1 percentage point, while Trump would be about 2 points ahead in Florida and North Carolina. It’s certainly not impossible that Clinton could win under those circumstances — her turnout operation might come in really handy — but she doesn’t have the Electoral College advantage that Obama did in 2012, when he led in states such as Ohio and Iowa and had larger leads than Clinton does in Michigan and Pennsylvania. In particular, Clinton could be vulnerable to a slump in African-American turnout.


> I don't buy that they're trying to communicate in good faith.

What would "good faith" communication look like, then?

> I remember very clearly sources like this* in 2016.

Can you elaborate on what exactly is wrong with that source?

(That isn't a pollster, either; it's a (meta-)analysis/aggregation. But that's a relatively minor nitpick.)

> They need to do a better job of being humble and stop trying to pass themselves off as apolitical number crunchers just giving us the facts.

Are they themselves claiming that they are "giving us the facts", or is that how they are being represented by others?

And are you talking about actual polls, or about predictions based on said polls?


I disagree; I think that 538's presentation this time around is specifically geared toward avoiding these sorts of misconceptions. And I think that's really hard to do, so I imagine there will still be quite a few people who don't get it and think that a 20% chance means something that will never happen.


538 isn't a pollster. It tries to aggregate polls to make overall predictions.

It does a pretty good job: it gave Trump nearly a 30% chance of winning in 2016, and given his small margin, that seems reasonable.

Sites like Huffington Post which gave Clinton 99%+ chance are the ones which should be criticized.


Running javascript provided by a site that's not from an authenticated source is... a pretty significant risk to certain people.

To the author: please turn on TLS for your site. It's free.


What's the risk? And what certain people? And how does knowing that said JavaScript is legitimately from "presidential-plinko.com" reduce that risk?


To know that your ISP / government / local MITM on your wifi didn't inject malicious JS? This has been a thing approximately forever https://arstechnica.com/tech-policy/2014/09/why-comcasts-jav...


And, to the question of "What certain people?", the answer, in the general case, includes Chinese dissidents:

https://www.zdnet.com/article/china-resurrects-great-cannon-...


Nice visuals, but these polls and simulations were wrong in 2016, and they are likely wrong now.


What does wrong even mean in these contexts? You can't reliably evaluate a probability model with n = 1 test cases.


A poll is a fact. It's not wrong.

A model that gave Clinton a 70% chance of winning is not proven wrong if she loses.

We can't know if the model is right or wrong unless we run the election many times.
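
Across many events, though, calibration is checkable: of all the things a forecaster calls "70%", roughly 70% should happen. A minimal sketch with invented data:

    from collections import defaultdict

    # Invented forecasts and outcomes (1 = the forecast event happened).
    forecasts = [0.7, 0.7, 0.3, 0.9, 0.7, 0.3, 0.9, 0.7, 0.9, 0.7]
    outcomes  = [1,   1,   0,   1,   0,   1,   1,   1,   1,   0]

    buckets = defaultdict(list)
    for p, happened in zip(forecasts, outcomes):
        buckets[p].append(happened)

    for p in sorted(buckets):
        hits = buckets[p]
        print(f"called {p:.0%}: happened {sum(hits)}/{len(hits)} times")

With only a handful of events per bucket the tallies are noisy; over many forecasts, a well-calibrated forecaster's 70% calls should land near 7 in 10. A single election is a single tally, which is why it can't confirm or refute the model on its own.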


OK, here is a better word: "garbage", as in Garbage In, Garbage Out.


How is it possible to miss the point this hard? The website was specifically designed to teach people not to make the mistake in understanding that you are making.


Are you aware of the adjustments that were made by pollsters and meta-pollsters like 538 since 2018? Also, it's not like they were wrong in 2016: 538 gave Trump a 30% chance of victory.

I expected better statistical understanding from this audience, but maybe that only underscores how difficult it is to get people thinking correctly about probabilities.


The polls were right in 2016 in that Clinton did win ~3 million more votes than Trump. The problem was with polls in some states that ended up being crucial for an electoral college win. Hopefully 538 (and other similar organizations) have learned something from that and have made some adjustments.


That seems outrageously obtuse. The electoral college is fundamental to predicting the US presidential election. The idea that 538 got it right but just didn't factor in the electoral college is a real stretch. If they screwed up polls in states that were crucial for an electoral college win then they screwed up full stop.


538 et al. don't, as a general rule, run their own polls. In 2016, some swing states were poorly polled, so they were flying kind of blind on those.

And, of course, with a few tens of thousands of votes different on the day, below the margin of error of any poll, no one would be bothered about them. In a way, 538 can't win here; they said "there's a 30% chance of this thing happening", it happened based on differences below the resolution they could even see, and everyone's annoyed with them.


If differences below the resolution that you can see can cause the opposite outcome to happen, then perhaps an estimate of "30% chance of this thing happening" should be paired with a "1% chance of that estimate being correct".


Especially given the fact that "538" (electoral votes) is right there in their name.


How can you possibly say they got it wrong? What evidence do you have that Trump didn't have a 30% chance of winning? Your sample size of 1?


> Presidential Plinko takes that uncertainty and translates it into Plinko boards with similar uncertainty, so that you can experience just how uncertain predictors’ forecasts are.

Nope. To fully demonstrate how uncertain predictors’ forecasts are, you’d need to animate a few meteorites crashing into the Plinko board, one of said meteorites with Hillary's emails on it. And then go read The Black Swan, especially the chapter where one of the reference figures is a Plinko board.


Huh, I forgot about the Comey e-mail investigation October surprise (thanks a lot, Comey, you....). I wonder if Trump or Putin has any Biden surprises for this month...



