Watch the actual video to get the real takeaway:
I think what it meant was that, everybody who gets hired at Google is pretty good. If I just had to pick someone off the street, I really want that contest winner, I got to take him every time. Or her. But if it's somebody who passed the hiring bar, then they are all pretty much at the same level. And maybe the ones used to winning the contest, they are really used to going really, really fast and cranking the answer out and moving on to the next thing. And you perform better on the job if you are a little more reflective. And go slowly and make sure you get things right. -- Peter Norvig.
Not well known, but relevant to a lot of the stuff about Google's hiring in particular (e.g., a couple of years ago I wasted a lot of time explaining to people that Google's HR guy saying intelligence didn't matter to their hiring decisions != intelligence doesn't matter to job performance (it does, as has been demonstrated many times), because Google was already selecting for <1% of the population).
Perhaps he means it in the Carol Dweck sense, that how we view our failures is what determines our success. IQ tests only measure what we've learned already, not our attitude towards learning.
People getting into the final rounds of programming contests are usually not the best programmers in the world in terms of breadth, maturity and exposure to other aspects of programming. They have typically put enormous effort into super-optimizing their mental toolkit to do nothing but master every little trick necessary to win the contest. This usually means memorizing implementations of pretty much all the popular solutions in these contests, mastering odd tricks while ignoring general insights, paying less attention to elegance and maintainability, being less incentivized to work in a group, being tuned to getting any solution as quickly as possible instead of striving for the best possible solution, ignoring any areas that are not popular in such contests, etc. This kind of brain training obviously has its negatives. However, on average, winners from contests will usually outperform your average programmer any day; they just won't outperform our superstars, who would probably be disqualified well before the semi-finals because they were busy learning other things that are not useful at competitions.
My observation from dealing with one such person (I know, a great statistical sample size) very closely was that they are extremely fast learners and usually gravitate toward working on small, very well-defined problems that they already know how to solve, but they suck at getting the big picture, learning something large and complex, and understanding the intricate interdependencies involved in complex systems.
I think your examples are great, if you just change the 400-pound deadlift to maybe ~500-600 pounds (or roughly 2.5-3x bodyweight for adult men). Most adult men can reach a 400-pound deadlift in less than 2-3 years of lifting (that's <5h/week).
I mention this only because a lot of people here on HN are probably lacking physical activity compared to what humans were evolutionarily selected for, and it's useful to know what can be expected. It's generally more than most people think.
1: See for example http://chrissalvato.com/2009/12/skill-guidelines-for-buildin...
A: "Software engineering". Most real-world programming doesn't involve any new algorithm, doesn't require any cleverness; it merely involves implementing some known solution and making sure it's robustly commented and tested. It requires self-discipline simply because it's so boring!
SnapDragon, whose current rating on Topcoder is 3005 and who was ranked 18th on that site, said that. He is, by the way, employed at Google (at least he was when he gave that interview).
I think that pretty much sums it up. In addition, there is an aspect of software engineering that is much like playing with soap bubbles. All software eventually becomes outdated and unusable. If you're very lucky, some of the code you wrote will be used in future versions and maybe last for a decade or two before even that is completely discarded and forgotten.
So it is not at all clear that anyone would want to write production code for the entertainment value. Secondly, most real-world programming is in fact pretty boring, and it is likely that solving puzzles is far more entertaining.
To draw an analogy, some smart people don't do well in school because you are graded on things like penmanship and mentioning the units and things like that. I don't think the scenario here is vastly different.
Granted, ICFP is a fair bit different from something like Google Code Jam. There are far fewer entries (typically around 200 or so for the main round), the task is larger, you have more time (3 days for the main round compared to 2.5 hours for a round of Code Jam) and you can compete as a team rather than as an individual. I haven't really tried Code Jam yet so I can't say how much of that makes a difference.
A lot of the winning contestants in things like Topcoder are people from countries where it's lucrative and desirable to get a few hundred dollars for a day's worth of algorithm writing.
Indeed, what Norvig might be missing is that the more methodical folks might also be contest winners, but they chose not to highlight that because they don't think it's worth talking about compared to their other accomplishments.
You can only put so much on a resume.
programming contest winners are used to cranking solutions out fast and that you performed better at the job if you were more reflective and went slowly and made sure things were right.
The idea that a programmer can be very good at one level and have that goodness fail at other levels is important to consider. So is the idea that a combination of skills, from cranking out code to considering the impact of that code, matters. That perspective can lead to an entire team being valued rather than, say, the glorification of the "rock star" programmer.
Meaning, the usual academic problem where you are given the inputs, and only the inputs you need, and a desired solution, doesn't actually match up that well with real world problems.
The other thing I've seen is that a _lot_ of formally trained engineers are motivated by puzzles. Once they've solved it, all the other things that apply to the real world, like deadlines, shipping projects, making customers happy, and making money for the company, are not the least bit interesting to them.
The author of this article appears to think that he or she has provided evidence for the sentence you quoted, but a number of comments in this thread have pointed out that isn't what the evidence says; so the quoted sentence is simply speculation.
A machine learning system was trained at Google as an attempt to do "scientific hiring." As their ground truth, the team used the performance reviews of people who had already been hired at Google. To optimistically give it the benefit of every doubt, this study says that Google put too much emphasis on success in programming competitions when it was making hiring decisions in the early 2000s.
To look at it more pessimistically, the possibility of correlations between the features casts doubt on that conclusion as well.
It's kind of unbelievable that anyone is even bothering to say _anything_ in this thread without addressing this point. Of course people sometimes overvalue competitions, and if it's true that Google did, that would explain the effect regardless of whether competition winners are typically better or worse than other people who interview at Google.
Do I know it's true? Of course not. But it's such an obvious question to ask that your first question should be "how did they measure or rule out that effect?". And if there's no answer, you go back to the drawing board and try to answer it.
EDIT: To be clear, Ruberik is NOT an impostor, and it's sad that someone is now going around flagging his comments.
No user flagged those comments. Some comments (e.g. by noob accounts from Tor IPs) are moderated by default because of past activity by trolls. These eventually get unflagged. We're going to open the unflagging part to the community soon, but that work isn't done yet.
Looks like his comments are unflagged now.
Also, as others have pointed out, Peter Norvig mentions some of this in his talk.
I mean, job interviews are very competitive, they are time constrained, and they result in a binary decision made by some external authority, just like competitions.
Exactly the behavior that wins you a sprint works against you in a marathon. When you start fast and are eager for quick successes, you are likely to lose the marathon.
In my experience, software development (at larger scale) needs a lot of patience, planning and great staying power.
Interviews are 30min of discussion and 20 lines of code.
As a consequence, among those who do get the job, being good at competitions is correlated with being not quite as good at the day-to-day work, since some in that category would not have got the job without the competition experience.
Yet my code is decent, maintainable, performs well, and is said by those who review it to be high quality (including well respected and competent team leads who are also hands-on coders and architects of large systems).
And I have made, just my own contribution though it's in a context where other people also contribute, probably hundreds of millions of dollars for companies I have worked for. By creating creative, effective solutions to real-world problems.
In some cases my not being hung up on algorithms like some colleagues has led me to see solutions that they were blind to. Not better algorithms, but completely out-of-the-box solutions. And these solutions have worked well, and made it into products used daily by hundreds of millions of people, with price markups attributable directly to the solution.
All this is to say that from the admittedly biased perspective of a non-competition winner, my experience tends to validate the implied corollary of this headline.
All of these claims from Google that say competition performance hurts or that GPA doesn't matter are missing one huge thing: selection bias.
Google only sees the performance of the employees that it hires, not the performance of the employees that it doesn't hire. Because of this, the data they analyze is statistically biased: all data is conditioned on being employed by Google. So when Google says things like "GPA is not correlated with job performance" what you should hear is "Given that you were hired by Google, GPA is not correlated with job performance."
In general, when you have some thresholding selection, it will cause artificial negative correlations to show up. Here's a very simple example that I hope illustrates the point: Imagine a world where high school students take only two classes, English and Math, and they receive one of two grades, A or B. Now imagine a college that admits students with at least one A (AB, BA, or AA) and that rejects everyone without an A (BB). Now imagine that there is absolutely zero correlation between Math and English - performance on one is totally independent of the other. However, when the college looks at their data, they will nonetheless see a stark anticorrelation between Math and English grades (because everyone who has a B in one subject always has an A in the other subject, simply because all the BBs are missing from their dataset).
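The AB/BA/AA toy example above can be checked with a short simulation (my own illustrative sketch, not from the original comment; the encoding A=1, B=0 is an assumption for computing a Pearson correlation):

```python
import random

random.seed(0)

# Independent grades: A or B with 50/50 probability in each of two subjects.
students = [(random.choice("AB"), random.choice("AB")) for _ in range(100_000)]

# Admission rule from the example: keep only students with at least one A.
admitted = [(m, e) for m, e in students if "A" in (m, e)]

def corr(pairs):
    # Pearson correlation between the two grades, encoding A=1, B=0.
    xs = [1 if m == "A" else 0 for m, _ in pairs]
    ys = [1 if e == "A" else 0 for _, e in pairs]
    n = len(pairs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = (sum((x - mx) ** 2 for x in xs) / n) ** 0.5
    sy = (sum((y - my) ** 2 for y in ys) / n) ** 0.5
    return cov / (sx * sy)

print(corr(students))   # near 0: grades really are independent overall
print(corr(admitted))   # about -0.5: selection alone creates the anticorrelation
```

The full-population correlation hovers around zero by construction, while among the admitted (the AA/AB/BA students) it is strongly negative, even though nothing about the underlying abilities changed. This is exactly the artifact the comment describes.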
The bottom line is that whenever you have some score that is some positive combination of input variables and then you threshold your observed data on a minimum total score (as is the case in hiring at Google or in college admissions), then you will see much stronger negative correlations between your inputs than exists in real life.
And really, whenever you run some selection algorithm, you should hope that (on the margin) there are no correlations between your selection decision and your inputs. If there still is a correlation post-selection, that means your algorithm has left money on the table. So when Google says that programming competitions are negatively correlated with performance and GPA is uncorrelated with performance, what that likely means is that Google's hiring overvalues programming competitions and fairly values GPA.
In fact, if we did a randomized controlled study (the gold standard of proving causation), I think we'd see the expected results. Just imagine - if you grabbed two random programmers from the entire population, one who had won a competition and one who had not, do you really think the competition winner would be the inferior programmer?
Edit: Many other posts here are coming up with plausible sounding narratives to fit the data. "Competition winners are one-trick ponies or loners or write awful code." I encourage everyone to think critically about the data analysis itself.
Edit2: From Gwern's comment, this phenomenon is apparently called Berkson's paradox: http://en.wikipedia.org/wiki/Berkson%27s_paradox
It is very hard to account for that in interviewing. Probably the only effective method would be to ding people with competition experience. Of course, that would be obvious conscious bias and would never happen.
I also don't believe that the Google hiring committees treat competition winners specially (though of course I could be wrong there). The reason I say that is that as a frequent Google interviewer, the interview scores vastly outweigh the value of the resume (to the point of it becoming ignorable). And I personally don't look at a resume except to see what language a candidate likes to write in, and to see how long they've been out of school (depending on how long, I might ask a more designy question).
Given my anecdotal and inferred evidence, I believe the only way for a competition win to help someone get the job is to make them better at interviews.
Given the stated evidence in the article, combined with my immediately-above belief, the explanation that fits is that competitions help more with interviewing than with job performance.
Since it helps more with selection (passing the interview) than with performance (job review), there is a natural negative correlation with performance for those who were selected.
This is completely consistent with competitions helping people do better with job performance.
For someone who only programmed for competitions and never worked on a serious project I could see it being a negative, but with some experience the competition winner should (on average) do better than average.
There are a lot of possible mechanisms by which the correlation might've been produced, if it's valid. One is that winning programming competitions is anti-predictive of job performance (the interpretation this summary takes). But another is that Google puts (or previously put) too much positive weight on winning programming competitions in their hiring vs. other factors. If Google were, for example, more willing to overlook other weaknesses in people who had won programming competitions, or treated them more leniently in interviews, that would be another mechanism for producing a population of hires where those who won programming competitions were worse at their jobs.
A 30 minute interview question is more like a programming contest question than a serious project.
It sounds like his conclusion is measuring a defect in their hiring process more than "programming contest winners tend to make worse employees".
Contest winners are pretty good employees, but the best employees usually aren't contest winners. Cranking out solutions fast is good for winning contests but not as good for writing great code. If you can win a contest, writing code quickly is probably a pretty big part of your skillset which you'll probably use in your job too, and that skill is negatively correlated with writing excellent code.
Still _good_ employees, just not the best.
In a similar vein, students graduating with 4.0 GPAs are often not the best employees because quite often dedication to grades means neglecting the social growth and other non-graded activities, learning, and experience available at universities.
"Won programming contest and was hired by Google" is not the same sample set as "won a programming contest". Once you add "and was hired by Google", all sorts of sampling bias is added.
To make it a really valid comparison, you'd need to track "won a programming contest and was not hired by Google" and "won a programming contest and never applied to Google".
I think knowing both modes can be useful, and knowing only one can be limiting, no matter which one it is.
Based on my (admittedly small) social circle, this means they excel at algo trading firms. A lot of trading teams I know of are headed by former Putnam/ICPC/IOI/IMO/etc. top rankers. The politics there are simple: did you make money?
Incidentally, much of human intelligence and talent in many areas manifests in understanding what to do; it's what machine intelligence is worst at right now; and it's the thing without which machines often beat people (board games are a simple example.)
So there are downsides to "simple politics" or rather simple cost functions; I for one don't care about such problems very much.
I'll speculate wildly that the loners who perform well at these tasks don't have the leadership/teamwork skills that are so highly desirable.
I think tedsanders has the right answer.
It disagrees with every other data point I have, so I'm very skeptical with both the methodology (which is opaque) and the conclusion.
From all of my experience at Kaggle (we run machine learning competitions), with our community, and from being close to programming competition sites & understanding their communities, doing great at competitions is an unambiguously positive signal.
(It's worth noting that doing great at competitions is only a positive signal - the lack of competitions is by no means a negative signal).
Many of our customers have found that their best hires have come from competitions. In a lot of cases, this surfaces candidates that would normally be completely overlooked because they don't fit the "top tier CS school" mold that recruiters commonly overfit to.
Several companies have had a successful recruiting strategy built on poaching our top users (https://www.kaggle.com/users).
Peter Norvig's criticism that "programming contest winners are used to cranking solutions out fast and that you performed better at the job if you were more reflective and went slowly and made sure things were right" is specific to programming competitions with very short time durations (vs. the machine learning competitions that I'm used to running, which typically last months and incentivize solutions that generalize well).
However, we've seen that many programming competition winners also do well on machine learning competitions, and the same qualities that aid in competitive programming (creativity, efficiency, tenacity, fluidity with tools, and the ability to build something that works) help win machine learning competitions.
I was somewhat obsessed with programming competitions as a teenager, and the first time I made it to a national-level competition I was shocked to discover that most of the people at that level were more "math people" than "coding people". Most of them had no interest in building real software. (This is not to slight them; they were also way smarter than I was.)
One of the reasons I stopped doing competitions is that I realized that, once you have your bag of tricks (especially things like dynamic programming, network flow, and the usual set of algorithms), programming competitions in the style of the IOI have nothing to do with programming.
However, I bet winners of contests like the IOCCC or the ICFP programming contest would be strong programmers with good on-the-job performance, though. (Or even winners of voluntary programming "contests" like Ludum Dare.)
I imagine people who win this type of competition are capable of working obsessively on what they like, but also prone to ignoring other things they find less challenging.
Target applications, time limits, ways of thinking, documentation, and use of outside help all differ between the two settings, and better results for a business cannot be achieved under artificial, inorganic restrictions.
The title is misleading. Being a winner of a programming competition is very different from being good at programming competitions.
I can't see how being talented at this would have any connection to talent at building a company, in fact I wouldn't be surprised if it was negatively correlated. I'm not really sure talent at winning programming competitions could be positively linked to other than talent at winning programming competitions. Valuable industry tasks differ substantially from what is done in a programming competition.
Pretending like people can't think in more than one mindset is an inaccurate model. I've never done an honest-to-goodness coding competition, but I can imagine someone who does code competitively to think very differently when they're competing vs. when they're designing/implementing a solution as a professional endeavor.
Someone would have to explain to me in much more detail how this "negative correlation" is established before I believe it.
Why must interviews only be about exactly what you're being hired to do?
Perhaps you're not as good as you think.
Less than half of a software engineer's daily work is actually coding.
It may not do these better than other tests, but basically any "solve this in x minutes" questions are going to reveal behavior in a person.
But if I can do the same creative thinking faster than you can, why would anyone bother waiting for you?
I think it's just a statistical correlation they found in the hiring data.
Basically, like I wrote, I would need to know more.