

Netflix Prize - One Team's Account - mhb
http://www.spectrum.ieee.org/may09/8788

======
abossy
A great overview on ensemble techniques in machine learning (what BellKor and
many of the top competitors use):

<http://abeautifulwww.com/EnsembleLearning.pdf>
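For anyone curious, the core idea behind these ensembles is blending: combine
the rating predictions of several base models with a weighted average. A
minimal sketch in Python (the model outputs and weights here are made-up
illustrations, not BellKor's actual components):

```python
def blend(predictions, weights):
    """Weighted average of per-model predictions for one (user, movie) pair.

    Each base model (e.g. a neighborhood model, a matrix-factorization
    model, a global average) contributes its prediction, weighted by how
    much we trust it; weights are typically fit on a held-out set.
    """
    assert len(predictions) == len(weights)
    return sum(p * w for p, w in zip(predictions, weights)) / sum(weights)

# Three hypothetical base models predicting a rating for the same pair:
preds = [3.2, 3.8, 3.5]
weights = [0.3, 0.5, 0.2]
print(blend(preds, weights))  # 3.2*0.3 + 3.8*0.5 + 3.5*0.2 = 3.56
```

The blend often beats every individual model because the base models make
different kinds of errors that partially cancel out.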

------
trickjarrett
A fascinating account; I've always found discussions about the teams chasing
this prize interesting.

Most interesting to me though was that it finally gave a clear description of
how the contest worked. I've never looked into getting involved with the
contest and the past articles I've read have rather glossed over the format
and the way in which they run tests and judge entrants.

------
moserware
Another good account:
[http://www.nytimes.com/2008/11/23/magazine/23Netflix-t.html?...](http://www.nytimes.com/2008/11/23/magazine/23Netflix-t.html?pagewanted=all)

------
socratees
Interesting article. As of 5/13/09, the "BellKor in BigChaos" team from the
article has improved on the baseline algorithm by 9.71%.
(<http://www.netflixprize.com/leaderboard>)

~~~
socratees
Is the contest over, or is it an ongoing one?

~~~
lsb
No one's won the Grand Prize yet, for which they're still offering $1M.

It'd be fascinating to know how much each percentage point increases sales.

~~~
patio11
Accuracy of Netflix's recommendation algorithm is the wrong place to focus if
you want to increase sales:

1) It is only perceptible after someone is a customer.

2) It is not strongly perceptible (people suck at precise measurements -- that
is why we have computers tell us when the improvement in root mean squared
error is 9.63% instead of just looking at our slice of 4 movie recommendations
and saying "HOT DAMN that is better than last week -- you gave Saving Private
Ryan 3.5 stars -- a strong improvement over your last rating of it at 3, which
didn't quite represent my interest.")

3) It only affects a portion of the customer base. Netflix is a service for
delivering movies, not for rating them. The feature is doubtless useful to
many and of intense interest to a few, but there are equally doubtlessly many
Netflix customers who don't even know it exists.

Now, there might be some bonus for having the _perception_ of having better
accuracy (or, for that matter, geek cred) from having the Netflix prize... but
that wouldn't be tied to the objective reality of whether the algorithm
actually improved or not.

It is likely that A/B testing on the call to action in the signup button would
move the needle a lot more, for a lot less work. (Please don't stone me too
harshly.)

~~~
wheels
RMSE also penalizes a bad prediction on your 1000th-best recommendation just
as heavily as one on your top recommendation.
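To see why: RMSE sums squared errors over all (user, movie) pairs with equal
weight, so where a miss occurs in the ranking is invisible to the metric. A
quick sketch (toy numbers, just to illustrate the equal weighting):

```python
import math

def rmse(predicted, actual):
    """Root mean squared error: every prediction contributes equally,
    regardless of whether the item was ranked first or 1000th."""
    assert len(predicted) == len(actual)
    n = len(predicted)
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual)) / n)

# The same 1-star miss costs the same whether it falls on the first
# (top-ranked) item or the second (lower-ranked) item:
print(round(rmse([4.0, 3.0], [3.0, 3.0]), 3))  # 0.707
print(round(rmse([3.0, 4.0], [3.0, 3.0]), 3))  # 0.707
```

That mismatch is part of why optimizing RMSE doesn't directly optimize what
users actually see, which is the top of the recommendation list.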

Amazon's algorithm is actually quite simple (at least as much as can be
extrapolated from the outside, and confirmed by interviews with the people
that wrote it), but they do a lot of work to place recommendations in context.
I think that's one of the critical places to focus on when looking at raw
sales boosts.

~~~
aneesh
Yep, context is key. I recall reading that Amazon found placing the "This is
recommended because you bought book X" actually reassured their customers a
lot. It seems like people prefer recommendation algorithms they can
understand, rather than complex ones that may be more accurate.

~~~
flipbrad
Absolutely; it makes me think back to this item
(<http://news.ycombinator.com/item?id=590276>) on the power of 'because':
whatever follows 'because' only has to sound plausible in the blink of an eye
the user spends assessing whether to trust or pay attention to what Amazon is
telling him/her.

