Hacker News
Programming competitions correlate negatively with being good on the job (catonmat.net)
205 points by luu on April 5, 2015 | hide | past | favorite | 105 comments

It's still a positive predictor in general. It just means that, beyond a certain point, it's a negative predictor. Just like a GPA of 3.5+ is probably positively correlated with getting tenure, but maybe getting 4.0 is worse than getting 3.9.

Watch the actual video to get the real takeaway:

I think what it meant was that, everybody who gets hired at Google is pretty good. If I just had to pick someone off the street, I really want that contest winner, I got to take him every time. Or her. But if it's somebody who passed the hiring bar, then they are all pretty much at the same level. And maybe the ones used to winning the contest, they are really used to going really, really fast and cranking the answer out and moving on to the next thing. And you perform better on the job if you are a little more reflective. And go slowly and make sure you get things right. -- Peter Norvig.

Yeah, this might be an instance of http://en.wikipedia.org/wiki/Berkson%27s_paradox - correlations after selection/conditioning may be different than before.

Not well known, but relevant to a lot of the claims about Google's hiring in particular. E.g., I wasted a lot of time 2 years or so ago explaining to people that Google's HR guy saying intelligence didn't matter to their hiring decisions != intelligence doesn't matter to job performance (it does, as has been demonstrated many times), because Google was already selecting for <1% of the population.

Intelligence SHOULD have mattered when that HR guy was hired.

Perhaps he means it in the Carol Dweck sense, that how we view our failures is what determines our success. IQ tests only measure what we've learned already, not our attitude towards learning.

No, you're missing the whole point. Intelligence doesn't matter because everyone is smart who applies to Google.

Being a champion at something requires excruciatingly narrow focus for an unusually long time. If you have a 4.0 GPA, play Rachmaninoff's Piano Concerto No. 3, deadlift 400 pounds, or rank among the top 1000 chess players, you probably had to work on it for hours a day, for years, while ignoring everything else (unless, of course, you are one of those one-in-a-million polymaths).

People getting into the final rounds of programming contests are usually not the best programmers in the world in terms of breadth, maturity, and exposure to other aspects of programming. They have typically put enormous effort into optimizing their minds to do nothing but master every little trick necessary to win the contest. This usually means memorizing the implementations of pretty much all the popular solutions in these contests, mastering odd tricks while ignoring general insights, paying less attention to elegance and maintainability, having less incentive to work in a group, being tuned to getting any solution as quickly as possible instead of striving for the best possible solution, ignoring areas that aren't popular in such contests, etc. This kind of brain training obviously has its negatives. On average, though, contest winners will usually outperform your average programmer any day; they just won't outperform our superstars, who would probably be disqualified well before the semi-finals because they were busy learning other things that aren't useful at competitions.

My observation from dealing with one such person (I know, a great statistical sample size) very closely was that they are extremely fast learners and usually gravitate toward small, very well-defined problems they already know how to solve, but they suck at getting the big picture, learning something large and complex, and understanding the intricate interdependencies involved in complex systems.

In general these things tend to follow a logistic curve. It's a useful meta-skill to know when investing more time has only marginal utility.

I think your examples are great, if you just change the 400-pound deadlift to maybe ~500-600 pounds (or roughly 2.5-3x bodyweight for adult men) [1]. Most adult men can reach a 400-pound deadlift in less than 2-3 years of lifting (that's <5 h/week).

I mention this only because a lot of people here on HN are probably lacking physical activity, compared to what humans were evolutionarily selected for, and it's useful to know what can be expected. It's generally more than most people think.

1: See for example http://chrissalvato.com/2009/12/skill-guidelines-for-buildin...

You know, interestingly and according to my own personal anecdote, I feel significantly more efficient and productive in programming after I started weightlifting every day and using a standing desk at work. I'd guess between 15% and 25% more effective as a programmer. What I can figure is that I began to challenge more of the world around me (by lifting every day and running 10k or more on the weekend), and the same determination I use to motivate this activity has "bled over" into my programming. This is all 100% anecdote and purely unscientific, but it's how I feel after at least a couple of on-again-off-again exercise periods in my life, culminating finally in what seems to be a more permanent pattern of physical fitness. I really believe all humans should engage in weightlifting (and not simply a bit of cardio).

I agree. I was actually pretty excited he said 400 lb. deadlift. I'm in the high 300's and felt accomplished for a second. But you're correct, the 500-600-700 range is where the big kids play, and it takes years and years to get there.

""" Q: What do you hate about coding?

A: "Software engineering". Most real-world programming doesn't involve any new algorithm, doesn't require any cleverness; it merely involves implementing some known solution and making sure it's robustly commented and tested. It requires self-discipline simply because it's so boring! """ [1]

That was said by SnapDragon, whose current rating on TopCoder is 3005 and who was ranked 18th on that site. He is, by the way, employed at Google (at least he was when he gave that interview).

I think that pretty much sums it up. In addition, there is an aspect of software engineering that is much like playing with soap bubbles. All software eventually becomes outdated and unusable. If you're very lucky, some of the code you wrote will be used in future versions and maybe last for a decade or two before even that is completely discarded and forgotten.

So it is completely unclear that anyone would want to write production code for the entertainment value. And secondly, most real world programming is in fact pretty boring, and it is likely that solving puzzles is far more entertaining.

To draw an analogy, some smart people don't do well in school because you are graded on things like penmanship and mentioning the units and things like that. I don't think the scenario here is vastly different.

[1] http://community.topcoder.com/tc?module=Static&d1=features&d...

My personal experience with the ICFP programming competition suggests that you can do quite well with a fairly limited amount of time dedicated to it. My 2-man team managed to get 2nd place in the lightning round and 8th in the main round last year, despite using my side project programming language with its half-baked implementation. We generally do not do much preparation outside of having participated in the contest for a number of years. I did a bit of library work and my partner did a bit of a refresher on the language since he doesn't use it outside of the contest, but that was about it. Prior years were similar.

Granted, ICFP is a fair bit different than something like Google Code Jam. There are far fewer entries (typically around 200 or so for the main round), the task is larger, you have more time (3 days for the main round compared to 2.5 hours for a round of Code Jam) and you can compete as a team rather than an individual. I haven't really tried Code Jam yet so I can't say how much of that makes a difference.

Could boredom be a factor? Most programming jobs aren't very challenging, we just do the same thing over and over again with little variation.

I doubt it. Most programming jobs are repetitive, but they still consume an average of 8-10 hours of each weekday.

A lot of the winning contestants in things like Topcoder are people from countries where it's lucrative and desirable to get a few hundred dollars for a day's worth of algorithm writing.

Yes, the real world tends to value things like your phone bill being absolutely correct, auditable, and not prone to the bizarre edge cases that competition winners don't care about.

Your point is well taken, but with some training most healthy men can deadlift 400 pounds.

So, what he's saying is: if you have nothing else that shines on your resume, highlight your programming contest wins. After you have some more stuff under your belt (patents, project wins), then stop emphasizing the contests.

Indeed, what Norvig might be missing is that the more methodical folks might also be contest winners, but they chose not to highlight it because they don't think it's worth talking about compared to their other accomplishments.

You can only put so much on a resume.

I don't think that should take away from the argument of the article, however:

programming contest winners are used to cranking solutions out fast and that you performed better at the job if you were more reflective and went slowly and made sure things were right.

The idea that a programmer can be very good at one level and have that goodness fail at other levels is important to consider. It is important to consider that a combination of skills, from cranking code to considering the impact of that code, matter. This consideration can lead to an entire team being valued rather than, say, the glorification of the "rock star" programmer.

One of the things I noticed in school, and later working, is that a lot of people excel when the problem is spoon-fed to them, when the constraints and desired answer are very mechanical. As it gets fuzzier, a lot of people, even 'smart' people, start having real issues, and you see the usual cognitive blockers kick in: frustration, fear, anger.

Meaning, the usual academic problem where you are given the inputs, and only the inputs you need, and a desired solution, doesn't actually match up that well with real world problems.

The other thing I've seen is that a _lot_ of formally trained engineers are motivated by puzzles. Once they've solved it, all the other things that apply to the real world, like deadlines, shipping projects, making customers happy, and making money for the company, are not the least bit interesting to them.

I think it's fair to say that programming contest winners are exceptionally good at cranking solutions out fast; but they could be just as good at working reflectively in a team as everyone else, or even better.

The author of this article appears to think that he or she has provided evidence for the sentence you quoted, but a number of comments in this thread have pointed out that isn't what the evidence says; so the quoted sentence is simply speculation.

The conclusion this article offers is drawn from a badly flawed study from over a decade ago, which Google decided neither to publish nor to use the conclusions from.

A machine learning system was trained at Google as an attempt to do "scientific hiring." As their ground truth, the team used the performance reviews of people who had already been hired at Google. To optimistically give it the benefit of every doubt, this study says that Google put too much emphasis on success in programming competitions when it was making hiring decisions in the early 2000s.

To look at it more pessimistically, the possibility of correlations between the features casts doubt on that conclusion as well.

"To optimistically give it the benefit of every doubt, this study says that Google put too much emphasis on success in programming competitions when it was making hiring decisions in the early 2000s."

It's kind of unbelievable that anyone is even bothering to say _anything_ in this thread without addressing this point. Of course people sometimes overvalue competitions, and if it's true that Google did, that would explain the effect regardless of whether competition winners are typically better or worse than other people who interview at Google.

Do I know it's true? Of course not. But it's such an obvious question to ask that your first question should be "how did they measure/rule out that effect?". And if there's no answer, you go back to the drawing board and try to answer it.


Ruberik ran Google Code Jam for several years. (Or maybe someone registered his handle on HN just to forge this comment, but that seems like a silly thing to do.)

EDIT: To be clear, Ruberik is NOT an impostor, and it's sad that someone is now going around flagging his comments.

> it's sad that someone is now going around flagging his comments.

No user flagged those comments. Some comments (e.g. by noob accounts from Tor IPs) are moderated by default because of past activity by trolls. These eventually get unflagged. We're going to open the unflagging part to the community soon, but that work isn't done yet.

I see. Thanks for the clarification. (Seems as usual, I underestimated the complexity of HN's algorithms...)

Looks like his comments are un-flagged now.

I don't have any evidence of this other than my word, sorry. As kentonv points out, I was in a position to know: in the early days of Code Jam, we had to do some convincing of Googlers who had seen the "programming contestants don't perform well" headline.

Also, as others have pointed out, Peter Norvig mentions some of this in his talk.

Since job interviews and competitions are somewhat similar, I wonder if this also applies to job interviews.

I mean, job interviews are very competitive, they are time constrained, and they result in a binary decision made by some external authority, just like competitions.

You can draw the relation to some extent. When I conduct interviews for software developers it's a 50/50 split between technical acumen and cultural fit. I've often brought on people who flunked the technical part of the interview just because they have good energy and a likable personality.

Interviews are incomparably easier than good programming competitions.

I think software development is most often more like a marathon and less like a sprint.

Exactly the behavior that makes you win a sprint is negative for a marathon. When you start fast and are eager for quick successes, you are likely to lose the marathon.

In my experience, software development (at larger scale) needs a lot of patience, planning, and great staying power.

Coming up with clever solutions is indeed very different from coming up with maintainable solutions.

... and many of the programmers I find in the corporate world do neither. But that is probably for another discussion.

Why code when you can go full dilbert on corporate politics.

There's a lot to gain by respecting your coworkers and developing the communication skills required to tactfully ask about their work.

Google keeps proving itself the king of hype. Their employees periodically make sound statements about winning programming competitions not being a good indicator of effective employees, yet they only consider hiring those who can crack programming-competition questions during their short interview sessions. Good for you if you want to judge a fish by its ability to climb a tree.

Google interviews are quite different from programming contests. In a programming contest, there is little time to think or talk; success is about instantly pattern-matching problem to solution, copy-pasting library code, and pounding out a solution.

Interviews are 30min of discussion and 20 lines of code.

What's going on is that practicing for programming competitions (which is needed to perform well in that venue) gives you a leg up on Google-style interviewing skills, but not a leg up on Google-style day-to-day demands.

As a consequence, among those who do get the job, those who were good at competitions have a correlation to not being quite as good at the day-to-day, since there are some in that category who would not have got the job without the competition experience.

I'm someone who would probably suck at programming competitions. I also suck at office politics.

Yet my code is decent, maintainable, performs well, and is said by those who review it to be high quality (including well respected and competent team leads who are also hands-on coders and architects of large systems).

And my own contribution, though it's in a context where other people also contribute, has probably made hundreds of millions of dollars for companies I have worked for, by creating creative, effective solutions to real-world problems.

In some cases, my not being hung up on algorithms like some colleagues has led me to see solutions that they were blind to. Not better algorithms, but completely out-of-the-box solutions. And these solutions have worked well, and made it into products used daily by hundreds of millions of people, with price markups attributable directly to the solution.

All this is to say that from the admittedly biased perspective of a non-competition winner, my experience tends to validate the implied corollary of this headline.

Not to be gratuitously negative, but...

All of these claims from Google that say competition performance hurts or that GPA doesn't matter are missing one huge thing: selection bias.

Google only sees the performance of the employees that it hires, not the performance of the employees that it doesn't hire. Because of this, the data they analyze is statistically biased: all data is conditioned on being employed by Google. So when Google says things like "GPA is not correlated with job performance" what you should hear is "Given that you were hired by Google, GPA is not correlated with job performance."

In general, when you have some thresholding selection, it will cause artificial negative correlations to show up. Here's a very simple example that I hope illustrates the point: Imagine a world where high school students take only two classes, English and Math, and they receive one of two grades, A or B. Now imagine a college that admits students with at least one A (AB, BA, or AA) and that rejects everyone without an A (BB). Now imagine that there is absolutely zero correlation between Math and English - performance on one is totally independent of the other. However, when the college looks at their data, they will nonetheless see a stark anticorrelation between Math and English grades (because everyone who has a B in one subject always has an A in the other subject, simply because all the BBs are missing from their dataset).
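For what it's worth, that toy example is easy to simulate. A quick sketch (hypothetical numbers: each grade is an independent coin flip, A or B with probability 1/2):

```python
import random

random.seed(0)

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = (sum((x - mx) ** 2 for x in xs) / n) ** 0.5
    sy = (sum((y - my) ** 2 for y in ys) / n) ** 0.5
    return cov / (sx * sy)

# Grades: 1 = A, 0 = B; Math and English are independent coin flips.
students = [(random.randint(0, 1), random.randint(0, 1))
            for _ in range(100_000)]

# The "college" admits anyone with at least one A (i.e. rejects only BB).
admitted = [(m, e) for m, e in students if m + e >= 1]

math_all, eng_all = zip(*students)
math_adm, eng_adm = zip(*admitted)

print(f"correlation, all students: {pearson(math_all, eng_all):+.3f}")  # ~0
print(f"correlation, admitted:     {pearson(math_adm, eng_adm):+.3f}")  # ~-0.5
```

The grades are uncorrelated in the full population, but among the admitted (AA, AB, BA each equally likely) the correlation is about -0.5, purely because the BBs were filtered out.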

The bottom line is that whenever you have some score that is some positive combination of input variables and then you threshold your observed data on a minimum total score (as is the case in hiring at Google or in college admissions), then you will see much stronger negative correlations between your inputs than exists in real life.

And really, whenever you run some selection algorithm, you should hope that (on the margin) there are no correlations between your selection decision and your inputs. If there still is a correlation post-selection, that means your algorithm has left money on the table. So when Google says that programming competitions are negatively correlated with performance and GPA is uncorrelated with performance, what that likely means is that Google's hiring overvalues programming competitions and fairly values GPA.

In fact, if we did a randomized controlled study (the gold standard of proving causation), I think we'd see the expected results. Just imagine - if you grabbed two random programmers from the entire population, one who had won a competition and one who had not, do you really think the competition winner would be the inferior programmer?

Edit: Many other posts here are coming up with plausible sounding narratives to fit the data. "Competition winners are one-trick ponies or loners or write awful code." I encourage everyone to think critically about the data analysis itself.

Edit2: From Gwern's comment, this phenomenon is apparently called Berkson's paradox: http://en.wikipedia.org/wiki/Berkson%27s_paradox

Great post. You are exactly right - all this study uncovers is that Google put too much bias in favour of programming competition wins in their hiring process in the past. What would be nice to know is exactly how important this factor was.

As I mentioned in another post, I don't think it's that Google sees programming competition success and says "oh boy, we gotta get this one!", but that the experience of those competitions helps the candidates perform better in interviews.

It is very hard to account for that in interviewing. Probably the only effective method would be to ding people with competition experience. Of course, that would be obvious conscious bias and would never happen.

I don't think programming competition wins (PCW) are a negative when selecting candidates out of the potential pool, but Google must have been over-weighting the value of PCW when selecting their candidates. Basically, the candidates they hired with PCW must have been weaker overall than the other people they hired. To put it another way: PCW might explain 5% of the variation between hires, but if it was given a weighting of 20%, then we would see the negative correlation observed.

I also don't think that winning competitions is a negative. I only believe that the experience of doing so gives one a bigger advantage in interviewing than in actual work performance.

I also don't believe that the Google hiring committees treat competition winners specially (though of course I could be wrong there). The reason I say that is that, as a frequent Google interviewer, the interview scores vastly outweigh the value of the resume (to the point of it becoming ignorable). And I personally don't look at a resume except to see what language a candidate likes to write in, and to see how long they've been out of school (depending on how long, I might ask a more designy question).

Given my anecdotal and inferred evidence, I believe the only way for a competition win to help someone get the job is to make them better at interviews.

Given the stated evidence in the article, combined with my immediately-above belief, the explanation that fits is that competitions help more with interviewing than with job performance.

Since it helps more with selection (passing the interview) than with performance (job review), there is a natural negative correlation with performance for those who were selected.

This is completely consistent with competitions helping people do better with job performance.

You nailed it. This is the comment I was going to try to write if I couldn't find it here.

I don't see how it would be negative correlation or even zero. Winning a competition shows a certain amount of raw talent.

For someone who only programmed for competitions and never worked on a serious project I could see it being a negative, but with some experience the competition winner should (on average) do better than average.

This is looking at a correlation among people who were hired by Google, which is a fairly specific population, and one that's actually in part selected by the same variable being studied, which complicates things further.

There are a lot of possible mechanisms by which the correlation might've been produced, if it's valid. One is that winning programming competitions is anti-predictive of job performance (the interpretation this summary takes). But another is that Google puts (or previously put) too much positive weight on winning programming competitions in their hiring vs. other factors. If Google were, for example, more willing to overlook other weaknesses in people who had won programming competitions, or treated them more leniently in interviews, that would be another mechanism for producing a population of hires where those who won programming competitions were worse at their jobs.

I see what you're saying. The programming contest winner will do better on the algorithm brainteaser questions, so is more likely to get hired, even if he's lacking the ability to do better after he's hired.

A 30 minute interview question is more like a programming contest question than a serious project.

It sounds like his conclusion is measuring a defect in their hiring process more than "programming contest winners tend to make worse employees".

Programming contests are winner take all. Contemporary professional practice emphasizes collective success, teamwork, and group ownership of code. Two processes that select for opposite traits will often have negative correlation.

If you watch the video linked in the post it makes more sense.

Contest winners are pretty good employees, but the best employees usually aren't contest winners. Cranking out solutions fast is good for winning contests but not as good for writing great code. If you can win a contest, writing code quickly is probably a pretty big part of your skillset which you'll probably use in your job too, and that skill is negatively correlated with writing excellent code.

Still _good_ employees, just not the best.

In a similar vein, students graduating with 4.0 GPAs are often not the best employees because quite often dedication to grades means neglecting the social growth and other non-graded activities, learning, and experience available at universities.

Well imagine comparing person A who worked on a serious project and won 10 programming competitions vs person B who worked on 11 serious projects.

Supposedly, to draw a correlation you should try to hold everything else constant, otherwise you don't know what's actually correlating.

That actually was the best point in the whole thread.

"Won programming contest and was hired by Google" is not the same sample set as "won a programming contest". Once you add "and was hired by Google", all sorts of sampling bias is added.

To make it a really valid comparison, you'd need to track "won a programming contest and was not hired by Google" and "won a programming contest and never applied to Google".

That's very interesting, especially as someone who did pretty well in those competitions. I don't doubt the finding, but I do think it is possible to put your mind in different modes. I frequently think about "running" vs. "going slow" when solving problems at work. I use the former for whipping up single-use scripts and sketching solutions, and the latter for permanent solutions, especially in hard domains.

I think knowing both modes can be useful, and knowing only one can be limiting, no matter which one it is.

Another way to see it is that there is a world of difference between writing a program that has to work once for a competition and writing a program that has to work and be maintained for years.

Considering that playing office politics is a key part of being "good on the job", I believe the sensationalized title. The top programming contest folk I knew were as socially awkward as they were brilliant.

Based on my (admitted small) social circle, this means they excel at algo trading firms. A lot of trading teams I know of are headed by former Putnam/ICPC/IOI/IMO/etc. top rankers. The politics there are simple: did you make money?

A different angle - from someone averse to competitions and trading of most kinds: both have simple cost functions, unlike most of the rest of life. Yeah, it will tend to simplify politics which I think is a good thing. But in other areas a large part of one's effort goes into figuring out what to do - what the cost function actually is, into expanding the understanding of what it is.

Incidentally, much of human intelligence and talent in many areas manifests in understanding what to do; it's what machine intelligence is worst at right now; and it's the thing without which machines often beat people (board games are a simple example.)

So there are downsides to "simple politics" or rather simple cost functions; I for one don't care about such problems very much.

I can imagine how this could happen, even though, as others have pointed out, there's no concrete evidence presented here.

I was somewhat obsessed with programming competitions as a teenager, and the first time I made it to a national-level competition I was shocked to discover that most of the people at that level were more "math people" than "coding people". Most of them had no interest in building real software. (This is not to slight them; they were also way smarter than I was.)

One of the reasons I stopped doing competitions is that I realized that, once you have your bag of tricks (especially things like dynamic programming, network flow, and the usual set of algorithms), programming competitions in the style of the IOI have nothing to do with programming.
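That "bag of tricks" is largely a set of canned patterns contest regulars can type from memory. A typical example (a generic sketch, not tied to any particular contest problem) is the O(n log n) longest-increasing-subsequence routine:

```python
import bisect

def lis_length(seq):
    """Length of the longest strictly increasing subsequence, O(n log n).

    tails[k] holds the smallest possible tail value of any increasing
    subsequence of length k + 1 found so far; tails stays sorted, so
    binary search finds where each new element extends or improves it.
    """
    tails = []
    for x in seq:
        i = bisect.bisect_left(tails, x)
        if i == len(tails):
            tails.append(x)   # x extends the longest subsequence so far
        else:
            tails[i] = x      # x improves (lowers) an existing tail
    return len(tails)

print(lis_length([3, 1, 4, 1, 5, 9, 2, 6]))  # 4, e.g. (3, 4, 5, 9)
```

Knowing when to reach for this pattern wins contest points; it says little about designing a system that has to live for years.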

However, I bet winners of contests like the IOCCC or the ICFP programming contest would be strong programmers with good on-the-job performance, though. (Or even winners of voluntary programming "contests" like Ludum Dare.)

How did they quantify "being good on the job"?

Based on the research I've read, one of the most valued skills for workers is communication/teamwork.

I'll speculate wildly that the loners who perform well at these tasks don't have the leadership/teamwork skills that are so highly desirable.

Almost all programming competitions are individual; one of the most prestigious (ACM ICPC), though, is a team-based programming contest.

The team can be lone-wolf organized though, since (for on-site competitions) each team only has one computer. In the one I did, we were 3 on the team. We divvied up the problems and each took turns hacking out solutions. Yes, we consulted each other on the solutions we were working on, but mostly we each worked on our own problem. This might have been a defective strategy, since we missed the top-ranked team by one problem but had 3 nearly finished solutions when the buzzer rang.

That's Google, so probably quarterly review scores.

Have other people look at your code? If it's easy to understand, to modify etc - it's good. Otherwise, less good.

assuming it works in the first place.

This was just a sensationalist title.

Interesting. My only anecdatum runs directly counter to Norvig's hypothesis. Some years ago, in my last job, we hired into my group a former TopCoder winner. He turned out to be the most careful, patient programmer I have ever met. It would seem to take him a long time to get things done, and sometimes I got a little frustrated with this. But he worked for us for about two years, and wrote a significant though not massive amount of code; and to my knowledge, he never shipped a bug. Not one.

I think tedsanders has the right answer.

If you are not careful while solving problems in programming contests, you are screwed, because with the limited time you don't have time to find problems.

This is flat out wrong.

It disagrees with every other data point I have, so I'm very skeptical with both the methodology (which is opaque) and the conclusion.

From all of my experience at Kaggle (we run machine learning competitions), with our community, and from being close to programming competition sites & understanding their communities, doing great at competitions is an unambiguously positive signal.

(It's worth noting that doing great at competitions is only a positive signal - the lack of competitions is by no means a negative signal).

Many of our customers have found that their best hires have come from competitions. In a lot of cases, this surfaces candidates that would normally be completely overlooked because they don't fit the "top tier CS school" mold that recruiters commonly overfit to.

Several companies have had a successful recruiting strategy built on poaching our top users (https://www.kaggle.com/users).

Peter Norvig's criticism that "programming contest winners are used to cranking solutions out fast and that you performed better at the job if you were more reflective and went slowly and made sure things were right" is specific to programming competitions with very short time durations (vs. the machine learning competitions that I'm used to running, which typically last months and incentivize solutions that generalize well).

However, we've seen that many programming competition winners also do well on machine learning competitions, and the same qualities that aid in competitive programming (creativity, efficiency, tenacity, fluidity with tools, and the ability to build something that works) help win machine learning competitions.

Kaggle is new and runs deeper contests than TopCoder and CodeJam of the 2000s.

I think you're conflating positive correlation with getting hired vs positive correlation with doing well on the job (once hired). See oskarth's comment at the top. No one disputes that doing great at competitions is a positive predictor for getting hired.

I thought that Google's interview process was similar to this type of programming competition.

Haha, exactly! That's what I find strange too.

I'd like to have more information on this. But I don't find this surprising.

I imagine people who win this type of competition are capable of working obsessively on what they like, but also prone to ignoring things they find less challenging.

What none of these tests, contest or not, can filter for is long-term work ethic, dedication, and creativity. Of course, these may not be important for all jobs. It is easy for some personality types to get excited and focused in the context of an event. It is a very different story to dedicate yourself intensely to a project or a codebase for years. I know people who suck at tests but blow away just about anyone due to the intense dedication to the job they can deliver on a consistent basis.

A contest winner may be very naturally talented, implying great creativity but saying nothing about work ethic or dedication. Or the person may be less naturally talented but had to work a lot (work ethic and dedication) to get good enough to win the competitions. At the highest levels, though, I would think you would need all three to win an international contest.

You do well in competitions when you 'know' the question range. It certainly helps, but that alone doesn't make one a good engineer.

Working conditions and competition conditions are really different, and the psychological pressure of a competition takes its toll on the applicant.

Target applications, time limits, thinking, documentation, and use of help all differ between the two conditions, and better results for the business cannot be accomplished under artificial/inorganic restrictions.

It is funny how our industry enjoys building competitive hoops for us to jump through (see also programming interviews that you have to study for). We reject employers judging devs on how many WPM they can type, why do we do this to ourselves?

> being a winner at programming contests was a negative factor for performing well on the job

The title is misleading. Being a winner of a programming competition is very different from being good at programming competitions.

I'd be interested to know what type of programming competition is being talked about here. There is a world of difference between hackathons and algorithmic-style competitions.

They're talking about the algorithmic competitions like ACM ICPC or TopCoder. Google has its own (Google Code Jam), which it uses as a recruiting tool.

People who participate in competitions and win (or not) are, in my opinion, self-driven individuals who are better suited to building their own company than to being good employees.

"Here are the exact specifications of the format of input and output into a console application, and the exact specifications of how the output must relate to the input. You have one hour to do this and your program must finish running within X seconds of being invoked on our test input."
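To make that concrete, here is a minimal sketch of what such a contest submission typically looks like: a hypothetical "sum two integers" task, with all input on stdin in an exact format and the answer on stdout (the task, names, and format here are invented for illustration).

```python
import sys

def solve(tokens):
    # Hypothetical task: the input is two integers; output their sum.
    a, b = int(tokens[0]), int(tokens[1])
    return a + b

def main():
    # All input arrives on stdin in the exact specified format; the
    # answer goes to stdout, nothing else, within the time limit.
    print(solve(sys.stdin.read().split()))

# A real submission would call main(); it is left uncalled here since
# this is only a skeleton of the contest I/O contract.
```

The entire "spec" is the input/output contract plus a time limit, which is exactly why the skill it measures is so narrow.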

I can't see how being talented at this would have any connection to talent at building a company; in fact, I wouldn't be surprised if it was negatively correlated. I'm not really sure talent at winning programming competitions could be positively linked to anything other than talent at winning programming competitions. Valuable industry tasks differ substantially from what is done in a programming competition.

I agree; considering that kind of question, I don't see any relation either. What I meant was competition in general (not strictly programming). If you participate, it means you want to achieve something, hence self-motivation. Of course self-motivated employees are great for the company, no doubt about it. But I feel that too much of it may conflict with the interests of the company.

Driven by what? If it is driven by title, then you get something like this: http://www.nextbigwhat.com/indian-developers-accused-of-chea...

The hard part of really building something is keeping to it over long periods of time, though. That seems quite different from being up for a competition.

Two different, misaligned objectives and outcomes from the same origination point, and optimization for one doesn't equate to optimization for the other. You don't say?

Could it be because those who win are often those who game the competition the most, even preparing their apps beforehand?

Nah. That's how Facebook hired interns in The Social Network, so it must be a great strategy.

Sadly, IIRC, the ICPC/IOI/TCO/GCJ/etc don't have drinking components.

This just smacks of defensive justification for lack of participation. "I don't do this and I don't want to do this, so I'm going to bash it and the people who do it so I can pretend I'm making a choice instead of just being lazy". It's a little bit like the folks who say, "I'm saving myself for marriage" when in reality they've never been on a date in their lives.

Pretending like people can't think in more than one mindset is an inaccurate model. I've never done an honest-to-goodness coding competition, but I can imagine someone who does code competitively to think very differently when they're competing vs. when they're designing/implementing a solution as a professional endeavor.

Someone would have to explain to me in much more detail how this "negative correlation" is established before I believe it.

I have refused to do a couple of time-pressure HackerRank challenges as part of job interviews recently, as that is not how I work on a day-to-day basis. I was good at these sorts of things fresh out of university 15 years ago. I have never had to implement an algorithm under time pressure in my professional career.

Why must it be "how you work on a day to day basis"?

Why must interviews only be about exactly what you're being hired to do?

I would like to think it is at least somewhat related to what I am doing, rather than a lucky dip of a question that I might get or might not. I am a way better software engineer than 15 years ago, yet I am worse at these sort of "challenges".

There's a lot more to being a software engineer than coding competence, and timed questions reveal the non-coding competence parts of your abilities more than anything else.

Perhaps you're not as good as you think.

Like what?


Less than half of a software engineer's daily work is actually coding.

I mean what other components of your competence do the timed coding exercises reveal?

Prototyping, for one. Thought process. Creative thinking. Knowledge of relevant material.

It may not do these better than other tests, but basically any "solve this in x minutes" questions are going to reveal behavior in a person.

Why does creative thinking have to be done fast? I find my best and most creative solutions come randomly after thinking about a problem "in the background" for a while.

It doesn't, you must not have read what I wrote.

But if I can do the same creative thinking faster than you can, why would anyone bother waiting for you?

> Someone would have to explain to me in much more detail how this "negative correlation" is established

I think it's just a statistical correlation they found in the hiring data.

I'd need to know more about the hiring data, the actual correlation method, how they collected the data, what was the sample size...

Basically, like I wrote, I would need to know more.

That's not a surprise really. A typical competition may be won by a programmer scoring 90/100 on four out of five tasks and 20/100 on the last one. Such performance at work would likely result in a quick termination.

I'm pretty sure the competition winners are not one-trick ponies either. They probably can crank out a solution to the last one as well (if given enough time -- like on a job). That's probably not the problem.

For the majority of these competitions, the programmer you described would have zero points. If you don't get 100/100 on a task, you get no points. The IOI is a notable exception.
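The difference between the two scoring schemes being discussed can be sketched in a few lines; the function names and the 100-point scale are just illustrative, not any contest's actual API.

```python
def partial_credit(scores):
    # IOI-style scoring: per-task points are summed directly,
    # so partial solutions still count.
    return sum(scores)

def all_or_nothing(scores, full=100):
    # The stricter scheme described above: a task contributes
    # points only when it is solved completely (100/100).
    return sum(full for s in scores if s == full)

# The hypothetical contestant from the comment above:
# 90/100 on four tasks and 20/100 on the fifth.
scores = [90, 90, 90, 90, 20]
```

Under partial credit that contestant scores 380; under all-or-nothing scoring the same performance is worth zero, which is the point being made.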

Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact