A machine learning system was trained at Google in an attempt to do "scientific hiring." As ground truth, the team used the performance reviews of people who had already been hired at Google. Giving the study the benefit of every doubt, the most optimistic reading is that Google put too much emphasis on success in programming competitions when it was making hiring decisions in the early 2000s.
Read more pessimistically, the possibility of correlations between the features casts doubt on even that conclusion.
It's kind of unbelievable that anyone is bothering to say _anything_ in this thread without addressing this point. Of course people sometimes overvalue competitions, and if Google did, that alone would explain the effect regardless of whether competition winners are typically better or worse than the other people who interview at Google.
Do I know that's what happened? Of course not. But it's such an obvious question that your first response should be "how did they measure or rule out that effect?". And if there's no answer, you go back to the drawing board and try to answer it.
EDIT: To be clear, Ruberik is NOT an impostor, and it's sad that someone is now going around flagging his comments.
No user flagged those comments. Some comments (e.g. by noob accounts from Tor IPs) are moderated by default because of past activity by trolls. These eventually get unflagged. We're going to open the unflagging part to the community soon, but that work isn't done yet.
Looks like his comments are unflagged now.
Also, as others have pointed out, Peter Norvig mentions some of this in his talk.