
Nate Silver correctly called 50 out of 50 states - btilly
https://plus.google.com/114613808538621741268/posts/LfoYJdWSK71
======
crntaylor
It's clearly great work by Nate and the rest of the 538 team. However, as has
been pointed out, it's not that hard to build a model that gets close to
predicting everything correctly.

I built a model in a couple of hours on Sunday afternoon, which simply takes
all the most recent polling data, takes an average, does a quick fudge to
adjust for the number of polls, and then runs 10,000 simulations to get a
probability for each state. The source is on Github:

    https://github.com/chris-taylor/USElection

and the predictions are in this gist:

    https://gist.github.com/4012793

The result? My model gets 50/51 correct if Florida eventually goes DEM (which
looks likely) or 51/51 correct if Florida goes REP.
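For the curious, here is a minimal Python sketch of the kind of poll-averaging Monte Carlo model described above. The poll margins and error constants are hypothetical illustrations, not taken from the actual USElection code linked above:

```python
import random

def simulate_state(poll_margins, n_sims=10_000, poll_sd=4.0):
    """Estimate P(Dem win) in one state from recent poll margins
    (Dem minus Rep, in percentage points): average the polls, shrink
    the error by the number of polls, then simulate election days."""
    mean = sum(poll_margins) / len(poll_margins)
    # Crude fudge: averaging n polls cuts the error roughly by sqrt(n).
    sd = poll_sd / len(poll_margins) ** 0.5
    wins = sum(random.gauss(mean, sd) > 0 for _ in range(n_sims))
    return wins / n_sims

random.seed(0)
# Hypothetical state: three polls showing Dem +1, Dem +2, Rep +1.
p_dem = simulate_state([1.0, 2.0, -1.0])
```

Repeating this per state and tallying how often each side clears 270 electoral votes gives the "couple of hours" model the comment describes.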

--

Edit: full disclosure - with all data up to 6 Nov 2012 it predicts Colorado
to be a toss-up, and I manually broke the tie in favour of Democrats, based on
earlier models favouring them in that state.

~~~
chaz
Nice work.

The success, though, isn't his specific model. It's that Silver was able to
market the idea that using statistical models is better than a table full of
talking heads at predicting an outcome. And that a "dead heat" doesn't mean
it's a coin flip. In hindsight, it seems almost ludicrous that it hadn't
gained greater traction before.

He deserves a ton of credit for bringing analytics into pop/political
culture, but I'm sure we'll see many more models and a lot more statisticians
during the next election, some of which will be better than Silver's models
(if he's still doing them). I'm all for people asking to see the data instead
of the media sound bites.

To keep this HN relevant: it was the marketing that drove his startup success.
The product (the model) isn't quite perfect and an excellent substitute
product was created by someone else in very short order (you). But he put
together a great blog, grew out his brand, and eventually saw hockey stick
growth.

~~~
crntaylor
Yep, I agree with this. I really built the model for fun, just to see how
close it was possible to get with a few hours work.

A lot of forecasting problems have a structure like this one did:

- There is a lot of publicly available data, and simple statistical models
based on that data give pretty good predictions with very little effort.

- If you're an expert with a lot of time on your hands, you can put a lot of
effort into improving the forecasts by 1-2%.

- In the end, it comes down to the effective business use of your model. Nate
Silver had an excellent brand around his model.

In fact, I'd wager that he knows his model is more complex than it needs to
be, but part of his brand is "rogue genius with a complex model that's far too
difficult for mere mortals to understand".

~~~
_delirium
A slightly less cynical take on the last point is that it also lets him rebut
many objections with, "nope, the model already takes that into account". By
having a piece of the model devoted to most possible talking points:
convention bounces, whether undecideds break for/against the incumbent, effect
of the economy, etc., he can claim he's addressed those critics' points, even
if the net effect of addressing them is close to nil.

~~~
SkyMarshal
And an addendum, that 1%-2% accuracy is crucial in close elections like this
one, so it's more than worth putting the extra time and effort into getting.

------
ericdykstra
Nate Silver's calculations are certainly more sophisticated than a simple
time-weighted average of legitimate polling numbers, but are they more
accurate?

His baseball projection system, PECOTA, was _extremely_ complex, but barely,
_barely_ outperformed a simple 3-year weighted average with an age component
(Marcel), and some years was worse. Other projection systems that didn't take
player comps into account were better overall than PECOTA (CHONE being the
best, before the creator was hired away by a team).
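For reference, the "Marcel" baseline mentioned here is roughly a weighted three-year average of a player's rate stat, regressed toward the league mean, with an age adjustment. A loose sketch follows; the weights and constants below are illustrative stand-ins, not Tango's exact published values:

```python
def marcel_projection(rates, league_rate, age,
                      weights=(5, 4, 3), regression=0.2, age_coeff=0.003):
    """Rough Marcel-style forecast. `rates` holds the player's rate
    stat for (last season, 2 ago, 3 ago); the weighted average is
    shrunk toward the league mean and nudged by age (peak ~29)."""
    weighted = sum(w * r for w, r in zip(weights, rates)) / sum(weights)
    shrunk = (1 - regression) * weighted + regression * league_rate
    # Younger players are projected to improve, older ones to decline.
    return shrunk * (1 + age_coeff * (29 - age))

# Hypothetical hitter: .280/.270/.260 over three years, league .260, age 31.
proj = marcel_projection((0.280, 0.270, 0.260), 0.260, 31)
```

The point of the comment stands: this fits in a dozen lines, yet PECOTA only barely beat it.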

~~~
Osmium
I think I speak for everyone when I say thank god politics is not like
baseball :)

But seriously, congratulations to Nate Silver. I think this vindicates his
method, at least for America. He was badly wrong with the last British
election, whether because of bad polling data or an insufficient understanding
of the system, so I was apprehensive going into this one. But it seems like,
for America at least, he's cracked it. Given the absurd quantity of polling
data available, it was probably inevitable that someone eventually would.

~~~
bitcartel
2010 British Election (Number of seats)

Silver predicted: Conservative 312, Labour 204, Libdem 103
Actual result:    Conservative 307, Labour 258, Libdem 57

~~~
Graham24
103 for the Libdems! That's one of the funniest things I've seen in ages.

How did he get it so wrong?

~~~
shardling
In reading his analysis of the election here, he repeatedly mentions that
having only two candidates makes things much more predictable.

In any case, we don't know if he got it "wrong" unless we know what % chance
he gave this outcome. (He tries to account for the likelihood that polls are
misleadingly biased, and that would correlate across multiple seats.)

------
pseut
So, Nate Silver seems to deserve the even higher profile he'll have after this
election, but his model is pretty explicitly _not_ predicting the outcomes of
any individual states. It's giving estimated probabilities of the various
outcomes, and the (presumptive) outcome that happened was given a probability
of ~20%, which was more likely than any other particular outcome, but a far
cry from "calling the states."

The big difference is that if this were to happen over and over again ("this"
meaning that the most likely predicted outcome actually materializes), that
would be (weak) evidence _against_ his model -- if the model's right, he
should "correctly call" the states only about 20% of the time. No more and no
less.

But good for him, because he's had a lot of unjustified criticism for the last
month or so, so he's due some (slightly) unjustified praise.

~~~
trhtrsh
Huh?

<http://fivethirtyeight.blogs.nytimes.com/>

"State-by-State Probabilities"

~~~
furyofantares
Take Florida, for example. He predicted a 50.3% chance of an Obama win. If
Romney actually wins it, he's only a little bit wrong - counting him as 0/1 on
Florida in this case wouldn't be accurate. If Obama wins it, he's only a
little bit right - counting him as 1/1 on Florida here wouldn't be accurate.

It's hard to quantify just how close he was. I give him props because his
prediction was that Florida was very close and that turned out to be true no
matter which way the state actually ends up going in the final result.

Now take Virginia, which was one of the later states to be called. He had
Obama at 79.4% chance to win. But it turned out to be reasonably close --
which makes me wonder if we really had enough data to justify such a large %.
It's a highly polled state, so maybe we did and we just came somewhat close to
hitting that 20% of the time where all the polling was inaccurate. Or maybe he
had Obama's chances too high.

Another way to judge a prediction model like this is to credit him with .794/1
for Virginia, rather than 1/1. For Florida he gets .503 if Obama wins, and
.497 if Romney wins. That doesn't capture it perfectly, either, but at least
it doesn't give him 0/1 in Florida if Romney wins even though he successfully
predicted that the state was super close.
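The partial-credit idea sketched here is close to a standard proper scoring rule, the Brier score, which squares the gap between the forecast probability and the 0/1 outcome. Using the two probabilities quoted in this thread:

```python
def brier_score(forecasts):
    """Mean squared difference between forecast probability and
    outcome (1 = event happened, 0 = it didn't). Lower is better;
    always saying 50% scores 0.25."""
    return sum((p - o) ** 2 for p, o in forecasts) / len(forecasts)

# Virginia at 79.4% and Florida at 50.3%, both going to Obama (outcome 1).
score = brier_score([(0.794, 1), (0.503, 1)])
```

Unlike raw hit/miss counting, this rewards a confident correct call (Virginia) more than a coin-flip one (Florida), and would penalize a confident miss heavily.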

~~~
DougBTX
> It's hard to quantify just how close he was.

It is hard to do with the final, "chance of winning" numbers, but it is much
easier with his share of the vote predictions, and you can check those
predictions for all states that way, even the ones where the winner was
certain.

Say Nate predicts that candidate A will get 80% of the votes in a particular
state, with a margin of error of 2%. Then, after the results are out, if that
candidate got 81% of the votes, then it was a good prediction, if they got 90%
of the votes, it was a bad prediction.

To quantify that, you want to compare the actual error with the predicted
error. There are better ways than this, but I'll make one up on the spot:
score = 1 / (actual error percentage points * predicted error percentage
points). In the first case, the score would be 1 / (1 * 2) = 0.5. In the
second, 1 / (10 * 2) = 0.05. The bigger the score, the better the prediction.
(This isn't a great model, since it rates a prediction with a large margin of
error which happened to be bang on higher than a roughly right prediction with
a narrow predicted error, but this is the general idea.)
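The made-up score above, in code, reproducing the two worked examples from the comment:

```python
def score(actual_error, predicted_error):
    """DougBTX's off-the-cuff metric: 1 / (actual error * predicted
    error), both in percentage points. Bigger is better; it rewards
    being close while having claimed a narrow margin of error."""
    return 1 / (actual_error * predicted_error)

good = score(actual_error=1, predicted_error=2)   # 1 / (1 * 2) = 0.5
bad = score(actual_error=10, predicted_error=2)   # 1 / (10 * 2) = 0.05
```

As the comment notes, this is only the general idea; a proper scoring rule would also penalize a forecaster for padding the predicted error.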

~~~
furyofantares
Excellent point. In that case only one state, West Virginia, fell outside his
given margin of error.

I didn't do any error analysis beyond asking whether each state was within
his margin of error, and all but one were. Hawaii was also exactly on the
line.

<http://pastebin.com/0RB5GRjQ>

------
dkrich
I've got to say that I've been a cynical admirer of Silver since the last
election.

Why cynical? Because I began to notice throughout the Republican nomination
process that his observations varied wildly on which Republican candidate
presented great value on Intrade. I think over time he gave a nod to every
candidate, save Michele Bachmann, Tim Pawlenty, and Jon Huntsman. From the
outset, Romney was the favorite to win, and not surprisingly, he won. But
throughout the nomination process where much less was known, many of his
projections proved to be wrong. So the contrarian opinions about which
candidates might pull it out against popular opinion (or perception) faded
away, until Romney was the clear favorite, Silver predicted him to win, and
eventually he carried the nomination.

Once in the Presidential election, Obama was the favorite. Incumbent
Presidents rarely lose, and Mitt Romney was not exactly a compelling
candidate.

All that said, he's clearly a brilliant dude, and I give him a lot of respect
for the accuracy of his predictions. I just haven't found them to be hugely
useful for any practical purposes. I guess people tend to be enamored by the
things they don't understand (statistical analysis), and I don't think there's
anything wrong with that.

~~~
josephlord
I'm not from or in the US but can I ask a slightly off-topic question?

Is Romney really the best the Republicans can offer? It seemed from here (UK)
that he was not a compelling candidate from the start, but most of the others
in the nomination process were massively flawed in some way or other and/or
their political views were just too far from the mainstream to really fly.

Romney might still have won if Obama had made a mess of something in the last
days before the election as a couple of percent swing would have changed
everything but it just seems like the Republicans should be able to do better.

Two questions:

1) Do the Republicans have people who would make compelling candidates in the
Presidential election? (Who?)

2) If not, is there a structural problem in the Republican party that means
extreme views (or massive personal cash) are required to reach the position to
become a candidate, and that somehow weeds out the compelling candidates?

~~~
nollidge
As a liberal American, my thesis is that the Republican party is extremely
fractured between fiscal conservatives (who are largely centrist or even
liberal on social issues) and the straight-up social conservatives. Which
means that finding a single individual who can coherently represent all of
them is basically impossible.

So enter Romney, who seemed willing to be mercurial - literally adopting
different positions in front of different audiences and scoffing whenever
anybody pointed it out. But that cynical strategy was necessary to have any
viability whatsoever in such a diverse party.

~~~
tatsuke95
> _"Which means that finding a single individual who can coherently represent
> all of them is basically impossible."_

And by the time the Republican nomination was over, the strength of the social
conservatives had dragged the rhetoric so far right that it was difficult for
Romney to get back to the centre for the election.

~~~
zer01
These two comments completely and eloquently sum up the problem. The divide
between social and fiscal conservatism is the Republican party's biggest
disadvantage.

~~~
nollidge
I should also mention that the big-L Libertarians, as instantiated in the
U.S., are _not_ a fiscally-conservative, socially-liberal party that
centrists would entertain. They're far too radical to be at all politically
viable (abolish the Fed and the Department of Education, open borders,
legalize drugs, etc.).

------
ww520
Cold, hard numbers won again.

The venom directed at Nate Silver before the election was astonishing. I guess
the pundits sensed that their careers of prediction were numbered and fought
back hard, but Nate was proved right again. Kudos.

~~~
mkr-hn
There will always be a market for uninformed jabbering. Pundits wouldn't worry
if they looked at their own record.

~~~
DanBC
Is there a "pundit prediction tracker" anywhere? Somewhere that tracks the
predictions made by talking heads and shows how often they're right or wrong?

> uninformed jabbering.

And for informed jabbering. It's frustrating that analysis is being replaced
with opinion, especially when it's clear that opinion driven prediction is
garbage.

~~~
mkr-hn
I found a couple of things that look useful at a glance (I googled for pundit
prediction tracker):

<http://www.pundittracker.com/>

[http://nymag.com/daily/intel/2012/11/exhaustive-collection-o...](http://nymag.com/daily/intel/2012/11/exhaustive-collection-of-pundit-predictions.html)

I remember hearing about Pundit Tracker before it launched, but I didn't look
into it after that.

------
ComputerGuru
I've been a huge fan of Nate Silver since forever, and I must say, I really
regret his "acquirement" by NY Times. When it was his own website, it was a
much more user-friendly experience.

For instance, though I have an NYT account, every other page view I get
redirected to a login prompt if I don't press Esc fast enough ("this page has
expired"? WTF?). Being independent also suited him: he is (falsely) discounted
on many counts by other parts of the media, and his affiliation with the NYT
is yet another facet for them to criticize.

~~~
Osmium
Disagree with this. NYT brought a much better user experience and much nicer
data presentation, and lent him a feeling of legitimacy too. There's a reason
the NYT has the reputation it does.

~~~
rm999
I disagree. Here's an archive from post-2008 election:
[http://web.archive.org/web/20081106113055/http://www.fivethi...](http://web.archive.org/web/20081106113055/http://www.fivethirtyeight.com/)

The old page contains more information yet looks friendlier. The nytimes style
is more streamlined, but it became too sanitized. The content became sanitized
too; Nate Silver could be more himself when it was his blog. He gave personal
accounts, he was more honest, etc.

~~~
untog
I think the answer here is that it was better for Nate to join the NYT - I
imagine he was compensated well for the move. We can complain about it, but...
well, it isn't in our hands.

------
a_bonobo
Doesn't that destroy the allegations of voting fraud in the US? (Not a US
citizen here)

If there was widespread fraud with the voting machines, as alleged elsewhere,
the voting outcome should have diverged widely from the predictions. Since the
predictions are so close to the actual outcome, and since (as I understand it)
they are mostly based on polling, there couldn't have been much fraud - or am
I wrong?

Flippant edit: Assuming of course that Nate Silver wasn't in on the fraud and
didn't adjust his predictions accordingly.

~~~
mseebach2
Not on its own, no. At most, it shows that fraud wasn't widespread enough (or
was equally widespread on both sides) to skew the election away from the will
of the people.

Also, fraud is much more effective in smaller races - of which there are a
lot, many of them too small to have as good poll coverage as the presidential
one - so if there is fraud, those races are probably where to look for it.

------
paulsutter
Nate Silver seems like a pretty smart guy, wouldn't it be better if he spent
his time doing something more productive? Predicting the outcome of an
election may have practical applications in gambling or for a hedge fund, and
yes he gets mad publicity and attention from women sure, but aren't there more
useful ways to apply statistics?

EDIT: Haldean, your point is excellent and cancels out mine absolutely. Good
thinking, you are entirely right. I would add that his attention and publicity
itself should increase the credibility of rational analysis in the news, and
that alone would be a great accomplishment.

~~~
gruseom
He has utterly discredited the horse race scam of big media political
coverage. That is a significant contribution, because such coverage has been
so influential in managing public opinion and how people think about politics.

He did it first and more dramatically in the 2008 primaries, but this time
feels more like the watershed.

It's not all Nate's doing, of course; this is a long term trend and many
people are working on it.

~~~
waterlesscloud
Now here's a bet I'm willing to take.

The horse race coverage of the next political race will be just as strong as
it was this time. This isn't the triumph of data over narrative, it's just
data showing it can be more predictive.

Narrative still has more mass popularity, and it always will.

~~~
gruseom
Hmm, I'm not sure how my point got turned into a claim that narrative will
henceforth have less mass popularity than data. I'm talking about a shift
among elites. Political media are going to be forced to adapt, in a direction
I think most people here would consider a good one. If they don't, they will
look ridiculous, which will weaken them even faster; appearing serious and
important is their trump card, after all.

But tell me how you propose to measure "just as strong" and I might take your
bet :)

------
programminggeek
<http://isnatesilverawitch.com/>

~~~
keithvan
Ahem, he would be a warlock.

~~~
programminggeek
He should have the right to self-identify as a witch if he wants. You have no
right to take that away from him.

~~~
TheGateKeeper
Self identification is a ridiculous concept if it's not based on fact. They
put people in the nut house for proclaiming they're a farm animal or Teddy
Roosevelt.

~~~
forensic
Transphobia in the software industry? Shocking

~~~
TheGateKeeper
Phobia? Hardly. You don't call an FTP client an SSH client, do you? Likewise
you would and should never call a man a woman, unless you're trying to insult
him, and vice versa. It's called logic, not some ridiculous concept of
accepting a mental disorder as something to be celebrated.

~~~
forensic
"it's called logic"

Actually, It's called not knowing what the fuck you're talking about

The logical thing to do would be to educate yourself.

P.S. Trans people's brains objectively show unexpected neural activation, and
this is just one of the many ways for people to understand trans. It's a real
thing and you need to EDUCATE YOURSELF instead of acting like Bill O'Reilly.
Ignorant asshole.

~~~
TheGateKeeper
It's what's called a defect such as when a system doesn't perform as expected.
Sorry. No amount of emotionally supported bullshit is going to change facts.

------
brianchu
Keep in mind, however, that we could not have expected Nate Silver's own model
to predict 50/50 states. Given that Nate predicted that there was a 50.3%
chance that Florida would go to Obama, the only difference between this being
a story of predicting 50/50 states and this being a story of missing at least
one state is _a coin toss_. The reason his model turned out to be 100%
accurate is sheer _luck_.

------
brownbat
Doesn't that mean he undercalibrated some of his predictions?

If you say 50 events should happen with a 60% probability, and they all
happen, shouldn't you have upped your confidence?

That said, since most of his naysayers were saying he was OVERconfident at the
time, kudos to him for proving them wrong.
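To make the calibration point concrete: if the 50 calls really were independent events forecast at 60% each, the chance of all of them landing is easy to compute, and it is vanishingly small:

```python
# Chance that 50 independent events, each forecast at 60%, ALL occur.
p_all = 0.6 ** 50  # about 8e-12
```

So either such figures would be far too timid, or the events are not independent, which is exactly where the replies take the discussion.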

~~~
monk_the_dog
If the predictions are independent, then I agree with you. But I can think of
reasons why they would not be independent (trying to account for non-random
polling samples, for example).

~~~
3JPLW
They aren't independent. The developer of the "512 Paths to the White House"
visualization lamented the difficulty of getting conditional probabilities
into the path choices. The simulations take into account the national popular
vote polls and similar-state demographics (if one state goes one way, it's
more likely that a similar one will, too).

Look at his EV histogram: 332 EVs had the highest probability of occurrence,
at nearly 20%. While there are other ways to get to 332, I'd imagine most of
that percentage is from a map like last night's. That's clearly not the
marginal probability of each state multiplied together.
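The correlation structure described here can be shown with a toy model: draw one shared national polling error per trial, plus independent per-state noise, and the probability of sweeping several lean-Dem states comes out well above the product of the individual win probabilities. All numbers below are hypothetical:

```python
import random

def sweep_probability(state_means, n_sims=10_000,
                      national_sd=2.0, state_sd=2.0):
    """Toy correlated simulation: each trial shares one national
    polling error across all states, plus independent state noise.
    Returns (P(Dem wins every state), product of per-state P(win))."""
    all_wins = 0
    state_wins = [0] * len(state_means)
    for _ in range(n_sims):
        swing = random.gauss(0, national_sd)  # shared error term
        results = [m + swing + random.gauss(0, state_sd) > 0
                   for m in state_means]
        all_wins += all(results)
        for i, won in enumerate(results):
            state_wins[i] += won
    product = 1.0
    for w in state_wins:
        product *= w / n_sims
    return all_wins / n_sims, product

random.seed(1)
# Three hypothetical states leaning Dem by 1, 1.5 and 2 points.
joint, independent = sweep_probability([1.0, 1.5, 2.0])
# joint comes out noticeably larger than the independence product.
```

This is why one map (like last night's) can carry ~20% of the histogram even though multiplying the state marginals would suggest far less.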

------
tmister
Nate's prediction and actual results, side by side:
[http://twitter.com/cosentino/status/266042007758200832/photo...](http://twitter.com/cosentino/status/266042007758200832/photo/1)

------
Kurtz79
I think it's relatively easy, for a smart guy with a solid understanding of
statistics, to come up with a reasonable model (weight polls by sample size,
come up with a probability distribution for each state, run x Monte Carlo
simulations and take the average of the result...).

But it's one thing to come up with a good model; it's another to defend it
publicly, putting your reputation on the line, when it would have been a much
safer bet simply to say "it's too close to call", as many so-called experts
did.

Full credit to Nate, he made math cool for a lot of people.

------
epaga
Even with an incredibly exact model, to actually get ALL 50 right really does
require a lot of luck in addition to skill. Even if he's 98% accurate for each
state (which seems amazingly high to me), that's still only a 1 in 3 chance to
get all 50 right.
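The arithmetic here is easy to verify:

```python
# Even at 98% accuracy per state, sweeping all 50 calls is far from certain.
p_sweep = 0.98 ** 50  # roughly 0.36, i.e. about 1 chance in 3
```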

~~~
shalmanese
He really only got 7/7 right. The other 43 states were not really in
contention.

------
lbrandy
Once California has its results in, I'm guessing his prediction on the popular
vote will make it 51/51.

~~~
btilly
He predicted that Obama would get 50.8% of the popular vote.

Currently Obama stands at 50.3% of the popular vote. However the majority of
the currently uncounted ballots are in Oregon and Washington. Therefore
Obama's share is likely to go up a smidge.

Still, getting the popular vote to within 0.5% is pretty good, especially
considering that many national polls going in were consistently calling for a
Romney win.

------
Spooky23
It's a little much to claim that this guy called it. The minor detail of
actually conducting the polling was done by others.

~~~
Tuna-Fish
The point isn't that Silver is some superhuman guessing machine. The point is
that Silver explicitly did nothing particularly special beyond applying sound
statistics to publicly available data, came out with very different
predictions than the traditional media did, took a _lot_ of flak from them for
it, and turned out to be exactly correct in every meaningful way.

538 is a statement against the traditional election coverage, in effect saying
that math is better than pseudo-objective talking head bullshit. From his
blog:

 _Nevertheless, these arguments are potentially more intellectually coherent
than the ones that propose that the leader in the race is “too close to call.”
It isn’t. If the state polls are right, then Mr. Obama will win the Electoral
College. If you can’t acknowledge that after a day when Mr. Obama leads 19 out
of 20 swing-state polls, then you should abandon the pretense that your goal
is to inform rather than entertain the public._

[http://fivethirtyeight.blogs.nytimes.com/2012/11/03/nov-2-fo...](http://fivethirtyeight.blogs.nytimes.com/2012/11/03/nov-2-for-romney-to-win-state-polls-must-be-statistically-biased/)

This is why the more math-literate portions of the internet have erupted with
cheers for Silver.

~~~
danielweber
_and turned out to be exactly correct in every meaningful way._

Silver himself wouldn't call his predictions correct.

He was not making thumbs-up v thumbs-down predictions. When he said "60%
chance of Obama winning this state," it means that 6 times out of 10 Obama
would win and 4 times out of 10 Romney would win. If Obama won all 10 times,
by his own account he would be wrong.

~~~
Wilduck
> it means that 6 times out of 10 Obama would win and 4 times out of 10 Romney
> would win. If Obama won all 10 times, by his own account he would be wrong.

That's a very frequentist perspective. I think the Bayesian interpretation is
a little more sensible. Given a prior estimate, updated with the information
we have, it is logical to assume that Obama has a better chance of winning.

That is, Nate Silver's 60% doesn't mean that if the election in a given state
were repeated, we'd see different results four times out of 10, but rather
that his information in making the prediction was incomplete.

~~~
danielweber
Nate Silver _himself_ said (in a radio interview that I cannot locate despite
much Googling :< ) that if he said something would happen 60% of the time and
it happened 10 times out of 10, then his figure was wrong and should have been
100%.

~~~
Wilduck
But you can't run the same election 10 times. Without hearing the interview,
I'm going to guess he's talking about multiple states, all with 60% chance of
winning, which puts us in a different place entirely.

I guess I'm not disagreeing with the main point of your post. Just your use of
the phrase "something would happen 60% of the time", as being oddly
frequentist, when a Bayesian perspective is more appropriate here.

~~~
pseut
> But you can't run the same election 10 times.

That makes assessment difficult, but is irrelevant for interpretation. If
something happens 60% of the time, it happens with the same probability as
drawing a red ball from an urn with 6 red balls and 4 green balls, whether it
happens once or several times.

Trying to determine the quality of a model using some observed data is
inherently a frequentist exercise -- a dyed-in-the-wool Bayesian would take
the election results, use them to update his or her posterior distributions,
and carry on happily. (I know that no one would actually do this; no one who
analyzes data is really a "pure" Bayesian or a "pure" frequentist).

------
damian2000
I was looking at the Intrade prediction market a couple of days ago... I think
they got everything correct too.

~~~
nikcub
Most markets got the winner right, which isn't a surprise, but they didn't get
the margin right. I found Obama to win 310-330 electoral college votes at $8
on Betfair yesterday (and I put it on, and it looks like I won).

The betting markets did believe that Obama would win, but there was a lot of
money on the race being closer and not as wide as Silver was predicting. This
is probably because the media narrative of the past two weeks was that the
race is close.

There is a rumor that some Republican donors were shortening the odds in the
margin betting in order to perpetuate the line that the race is closer. I
don't know how true that is, but it would be possible to move the market with
a few large bets.

~~~
the_cat_kittles
Where did you hear that rumor? I have all but concluded that on my own, based
on the bizarrely low prices on Obama shares on Intrade over the past week. It
would be very interesting to hear something more than speculation.

~~~
nikcub
I was hanging out online, talking to other people who are interested in
betting markets; we were just speculating. We mostly talk about sports
betting, but the election was such a big topic in betting that you couldn't
avoid it, and I ended up getting sucked into it.

I should have looked into it further, this has since been posted:

<http://news.ycombinator.com/item?id=4756229>

I don't have much experience with Intrade, but there must be a way to dump
data from the site and investigate the orders and buys being made.

As that post says, putting down $1M to keep the odds more in line with a
narrative of the contest being close is small change in the context of
political spending today.

I had two people tell me that rumor. One heard it via friends on Wall St and
the other read it in another forum, IIRC.

There must be something to explain the spread between Intrade and Betfair;
that the market was being manipulated by those with a vested interest makes
sense.

------
k2enemy
I wonder if having such good predictions is ultimately harmful to the
democratic process? If I were only voting on the presidential election, I
would not have bothered going to the polls because my state was solidly on one
side and the chance of my vote being pivotal was basically zero. But some of
our local elections were less certain, so I went for those.

As the prediction models get better, get applied to local elections, and get
more publicity, I wonder what it will do to people's incentive to vote? I'm
much more likely to go out and cast my vote when the outcome is uncertain and
I think I have a chance of being the pivotal vote. But if super accurate
forecasts tell me my vote won't matter, then maybe I won't bother standing in
line for an hour.

~~~
pfortuny
When the incentive of the people changes, then the outcome will change as
well.

------
tlrobinson
I wonder how much money he could be (is?) making in prediction markets like
Intrade.

~~~
crntaylor
It's hard to make really significant money on InTrade. I don't think you could
make a living at it, for example - there's not enough liquidity.

There were some fun opportunities to arbitrage InTrade against BetFair this
election, if you could be bothered with the faff of setting up an account on
both of them. InTrade had Obama at 66-66% for a while, whereas BetFair had
Obama at 79-80%. You could buy on InTrade and hedge on BetFair, and make a
guaranteed profit (even after t-costs).
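The arbitrage works out neatly with prices near those quoted. A sketch using 0.67 and 0.80 as illustrative buy and lay probabilities, ignoring the fees and commission that the comment notes do eat into the edge:

```python
def locked_in_profit(buy_prob, lay_prob):
    """Buy a binary contract at `buy_prob` on the cheap exchange
    (collect 1 if the event happens) and lay it at `lay_prob` on the
    expensive one. Choosing a lay stake of `lay_prob` per contract
    bought equalizes the two outcomes, locking in lay_prob - buy_prob."""
    lay_stake = lay_prob
    if_wins = (1 - buy_prob) - lay_stake * (1 - lay_prob) / lay_prob
    if_loses = lay_stake - buy_prob
    assert abs(if_wins - if_loses) < 1e-9  # same profit either way
    return if_loses

# Buy Obama at 0.67 (Intrade-ish), lay at 0.80 (Betfair-ish).
edge = locked_in_profit(0.67, 0.80)  # 0.13 per contract, win or lose
```

In other words, the gap crntaylor describes was worth roughly 13 cents per dollar of contract before transaction costs, which is why its persistence is puzzling.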

~~~
jessriedel
The presidential market on InTrade was quite liquid. $300k of volume with a
bid-ask spread of like 0.2%.

Do you know why there was such a persistent gap for arbitrage between InTrade
and BetFair? I don't know about BetFair's vig and rules, but InTrade has only
a couple of small fixed transaction costs. ($5/mo to have an account and $10
to pull your money out, I think.) Seems like it should have been eaten up.

------
mhartl
This remains to be seen. As of this writing, Florida and Virginia have yet to
be called.

~~~
arn
Here's VA
[http://electionresults.virginia.gov/resultsSW.aspx?type=PRE&...](http://electionresults.virginia.gov/resultsSW.aspx?type=PRE&map=CTY)
Reporting at 100%. 50%/48% in favor of Obama.

------
Codhisattva
Silver affirms that public opinion polling is accurate - as long as you know
how to read the data.

Not to diminish Silver's work - I'm just saying the data he uses needs to be
accurate in order for his models to be accurate.

------
monochromatic
Why do I have to enter my google profile information to open this?

~~~
evoxed
It's just a brief G+ post with two links, but if you want to see what it's
talking about, just go here instead: <http://fivethirtyeight.blogs.nytimes.com/>

------
mamatta
quick image reference on Nate's predictions versus actual results:
<http://cl.ly/image/0744432U0S0L>

------
elbac
I'd like to see Nate run the numbers to flag districts with statistical
anomalies, on the lookout for election funny business.

------
quangv
I'd love for him to go to past elections and see how well our political voting
system works...

~~~
the_cat_kittles
You might like the economic sub discipline of "Social Choice Theory"

------
alexmr
I'd like to see the comparison for -90,-60,-30,-15 days from election too.

------
hayksaakian
Let's be honest though, the number of likely outcomes was low enough that
simply picking the correct one isn't too impressive.

Edit: I meant to say that the prediction itself was unspectacular; however, I
respect the meticulousness of his process.

~~~
achompas
I don't know what you mean by this.

Are you referring to "Obama wins" as a potential coin flip? Because Silver's
prediction was more nuanced than that.

Are you talking about his predictions for the 50 states, which represented a
potential (albeit naive) 2^50 combinations? Because he called some very close
states, including a Florida election that he forecasted as an Obama victory by
_hundredths of a percent._

Or are you just trolling?

~~~
hayksaakian
The only highly uncertain states were Ohio, Virginia, and Florida (based on
where things stood before results started coming in). So really there were 9
safe predictions, and picking the right combination of the remaining three
does not make the OP's story spectacular.

~~~
btilly
You clearly were not following the polls very closely.

An average of the polls going into the election had two of the three states
you name as "highly uncertain" polling within 3%. According to the polling
averages at <http://www.electoral-vote.com/evp2012/Pres/Maps/Nov06.html>,
Colorado, Iowa, Wisconsin, and North Carolina should also be on your list of
"highly uncertain" states. So by your own criteria you now have 2^7 = 128
potentially reasonable combinations.

(Admittedly the one that happened is in the top 4 outcomes. But a naive
analysis would not have seen that Florida would be a photo finish while North
Carolina would not. However, Nate's analysis clearly did see that.)
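The arithmetic above is easy to check; a quick Python sketch (the state list is taken from the states named in this thread, and the coin-flip view is the naive model being criticized, not Silver's):

```python
from itertools import product

# The seven states treated here as genuine toss-ups
tossups = ["OH", "VA", "FL", "CO", "IA", "WI", "NC"]

# Every way the toss-ups could break between the two parties
outcomes = list(product("DR", repeat=len(tossups)))
print(len(outcomes))  # 128

# Under a naive coin-flip view, any single combination is unlikely
print(1 / len(outcomes))  # 0.0078125
```

With three toss-ups you'd get 2^3 = 8 combinations; with seven, 128. The point of a model like Silver's is precisely that these states are not independent fair coins.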

------
caycep
What technique is he using? Is it something like a generalized linear mixed
model (i.e. like R's lme4 package)?

------
denzil_correa
Quite impressive. Prediction in any form is difficult; more so when there are
a million people involved.

~~~
moistgorilla
Actually, I believe it's easier when there are a million people involved, as
long as you have access to good data.

~~~
denzil_correa

        As long as you have access to good data that is.
    

This is very difficult in itself, and even once you have the data it is not
easy. The idea that more data automatically leads to better prediction is a
fallacy that has been proven wrong time and again.

~~~
dbaupp
I hope you mean "having more data doesn't lead to better prediction
_automatically_ ", because, used correctly, it always gives more accurate
predictions (e.g. prediction intervals[1] are smaller, one can make better
determinations about the distribution of the underlying population, etc).

[1]: <https://en.wikipedia.org/wiki/Prediction_interval>
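As a sketch of the prediction-interval point, assuming the textbook case of a normal population with known standard deviation: the interval for the next observation tightens as n grows, though only toward an irreducible floor of about ±1.96σ.

```python
import math

def prediction_halfwidth(sigma, n, z=1.96):
    """Half-width of an approximate 95% prediction interval for the
    next draw from a normal population, after observing n samples."""
    # Variance of (next draw - sample mean) is sigma^2 * (1 + 1/n)
    return z * sigma * math.sqrt(1 + 1 / n)

for n in (2, 10, 100, 10000):
    print(n, round(prediction_halfwidth(1.0, n), 4))
# 2 2.4005
# 10 2.0557
# 100 1.9698
# 10000 1.9601
```

More samples do narrow the interval, but the noise in the next observation itself never goes away, which is consistent with both sides of this exchange: more data helps, just not without limit.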

~~~
denzil_correa
Let me correct my statement. More data does not _necessarily_ lead to
better prediction. I must add that it is not always about usage as well. In
some cases, the data is _insufficient_ to make any kind of prediction.

~~~
dbaupp
The only data that is insufficient to make a prediction is either no data at
all, or data suspected to be wrong (e.g. faulty observation equipment).

Even given a single sample, one can make a prediction of what the next
observation will be: the same value. Once you get more samples and more
knowledge about the subject, you can bring stronger statistical tools to bear
to get error estimates, prediction intervals, improved models, etc.

~~~
denzil_correa
> _Once you get more samples and more knowledge about the subject, you can
> bring stronger statistical tools to bear to get error estimates, prediction
> intervals, improved models, etc_

Like I said earlier, increase in data does not always lead to improved models.

~~~
pfortuny
If you have enough data, you have a census and then there is no model, just
reality...

In polls like this, the more data the better as long as it is unbiased (i.e.
as long as it is DATA).

------
maaku
Including Alaska and Hawai'i, with zero precincts reporting!

</sarcasm>

~~~
MartinCron
If he's wrong on AK and HI, I will eat my hat, a moose, and a rainbow.

~~~
jlgreco
While you're eating a rainbow, might I suggest challenging the horizon to a
race? ;)

~~~
mahmud
way ahead of you!

