

Hiring Tip: Pair Program on Open Source Bugs - whit537

During our last hiring round, I had a really nice cross-browser bug at hand, and I ran our finalists through a coding session where they remoted to my laptop and had one hour to fix the bug or get as far as they could. Instead of a contrived puzzle they were working on production code. It was great! And since they all worked on the same bug, I was able to score them on a ten-point scale for easy comparison.

We've started another hiring round, and this time I didn't have any good Python bugs at hand. What to do? Open source! We use the CherryPy framework a fair bit here (the maintainer is on our team). So this round I am pairing up for an hour of work on CherryPy. It moves an open source project forward, it's cleaner than having people work on proprietary code, and we still get a great picture of each candidate. Win all over the place.

This time I am inviting candidates to share their screen with me rather than vice versa. Last time I tried at least installing the person's editor of choice, but there's no place like ~, ya know? Another thing that happened last time is that by the end of the interviews I had gone through the exercise so many times that it felt much less like pair programming and much more like thumb-screwing. At the expense of a neat scoring system, I am not going to go over the same ground this time. I already updated one CherryPy ticket, and we'll keep chipping away at that one and others. I expect some level of familiarity with Python and its tools, but I'm happy to teach you what I know. A nice side-effect is that I get to learn what you know too. :^)

We've got three spots open and we're just starting interviews, so if you'd like to pair together for an hour on CherryPy, hit me up at chad.whitacre@yougov.com.
======
djb_hackernews
Lots of arguments on why not to hire this way, but if it works for you, all
the best.

However, do you find a pattern with candidates hitting snags that aren't
really technical? Like, environmental issues etc?

Think really the only way to do this right is to control it as much as
possible, possibly providing a failing unit test, or extremely detailed
instructions. I can see a lot of wasted time trying to figure out
dependencies, system path problems, etc

~~~
whit537
What are the arguments? And FYI, this isn't the only interview we do. It's one
data point.

I go back and forth on the question of "snags that aren't really technical."
The fact is that on a distributed team with multiple projects going, setting
up a screenshare and a dev environment for an existing project are not
uncommon tasks in their own right.

The first round cross-browser bug was controlled. But then the candidate isn't
in their own environment and I'm getting burned out doing the same problem
over and over. I want it to feel like we are working together on a problem
rather than like the candidate is trying to guess an answer that I am trying
to hide from them. This way it is genuine: I have no special canned knowledge
of the problems we are working on.

I'm trying to simulate an actual pair programming exercise ... by doing an
actual pair programming exercise. ;)

------
intesar
I bet you will never attract the best; only those who are desperate will spend
an hour with your tedious real-bug-solving interview. I'm pretty sure you are
happy that this thing of yours is working out, but let me tell you, you are
not getting the best.

~~~
whit537
You're right. If someone thinks that writing code for an hour is a tedious act
of desperation, they are not the right candidate for us.

~~~
abbasmehdi
Boom! goes the dynamite... Loved your response. What exactly are you testing?
What are you looking for, traits wise, and which aspects of the test reveal
the said traits?

I am not planning on applying, but I am just interested in learning more about
your methodology. On the surface, it seems way better than those silly "how
many potholes in America" questions or those memorizable algorithms.

~~~
whit537
:)

With the first round of hiring I looked at how far the candidate got in fixing
the bug. There was a workaround and then there was a fix for the root cause.
Finding the workaround quickly counted the same as fixing the root cause more
slowly. Almost no one actually fixed it, though, so mostly it was a measure of
progress. How efficient are you at developing hypotheses and testing them? How
good are your hypotheses?

Now in this round I can't cross-compare as easily, so I'm expecting it to be
more subjective. Basically I'm looking for people that I have a hard time
keeping up with.

