

Ask HN: Is it correct to judge a developer's performance this way? - digamber_kamat

Is it fair to judge a developer's performance based on the number of bugs filed against his name?

Or based on the peer code reviews that are done?

If not, can you guys tell me what's the best way?
======
lrm242
Quantitative metrics typically fail. For example, the number of bugs would be
expected to increase as the developer writes more code, so while one developer
might have more bugs assigned to him, that doesn't mean he writes worse code
than another developer.
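
For instance, here is a quick sketch (names and numbers are invented for illustration) of how normalizing by code volume can invert a raw-bug-count ranking:

```python
# Hypothetical data: raw bug counts vs. amount of code shipped.
developers = {
    "alice": {"bugs": 30, "kloc": 60},  # ships a lot of code
    "bob":   {"bugs": 12, "kloc": 8},   # ships far less
}

for name, d in developers.items():
    rate = d["bugs"] / d["kloc"]  # bugs per thousand lines of code
    print(f"{name}: {d['bugs']} bugs raw, {rate:.2f} bugs/KLOC")

# alice has more bugs in absolute terms (30 vs. 12), yet her defect
# rate (0.50/KLOC) is a third of bob's (1.50/KLOC).
```

Even this normalized rate is gameable (padding code inflates the denominator), which is the broader point: no single number survives contact with incentives.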

The best answer is: you judge the performance of a developer the same way you
would judge the performance of any employee. You gather feedback and apply
your own personal assessment of that individual's performance given the tasks
and objectives you've assigned. There is no magic bullet.

~~~
russell
"Quantitative metrics typically fail." Tattoo this on your forehead so that
you se it every morning. If you punish bugs, people are going to stop
reporting them against themselves or their teammates. I often report bugs
against myself so that I have a record more durable than a sticky note.

Another demotivator is management hysteria to get the bug count to zero. Bugs
need to be reported no matter what the goal is.

The best way to understand what is going on is to talk to people informally
(definitely not in regularly scheduled meetings).

------
gruseom
_Is it fair to judge a developer's performance based on the number of bugs
filed against his name?_

Don't do this! You will get disastrous results. For example, some programmers
have more bugs assigned to them because they care more about fixing things.
People know this, so when they want something fixed they will try to assign it
accordingly. If you only look at the metric, you won't be able to distinguish
this case from a bad programmer who generates more bugs.

It's well known that metrics like this don't work. The moment you settle on
one, everybody will begin optimizing for that (instead of, say, getting their
job done). This inevitably leads to unintended side-effects.

Your best bet, assuming you're a manager who isn't working on the program
directly, is to pay a lot of personal attention to what's happening on your
team. Over time, you'll notice patterns.

There is a related pitfall I've seen many people fall into: watch out for the
case where someone _looks_ like a star performer because he's cranking out
shoddy work. This guy often becomes the manager's favorite because he seems to
get features done quickly, when what's really happening is he's getting them
half-done and making a mess in the process. Worse, truly good programmers then
compensate by taking time to clean up this guy's messes, which makes them
appear less productive. One bad programmer like this can easily suck up the
value of half a dozen good ones. The solution is to make sure not to take
things at face value, but to watch carefully to see what effects someone's
work is having on the product and team as a whole. This is hard but doable;
everything that's going on in the code is ultimately visible at the human
level if you know how to read it.

------
wlievens
Any quantitative metric carries perverse incentives.

My co-workers have a legend that backs this up, from the time before I joined.
They had a quarterly bonus system that would be adjusted for defect rate. If
you had defects against your code, you'd lose on the bonus.

Needless to say, this resulted in a "black market" for defects, with little
gain in quality or productivity.

~~~
jacoblyles
Of course, qualitative metrics carry perverse incentives as well. Classically,
they encourage employees to suck up to whoever makes the qualitative
assessment.

------
makecheck
Number of bugs: definitely not, especially if he's maintaining a large or
well-established code base. It is common for someone to be bogged down with
problems rooted in someone else's past design. Also, if you're going to give
bug reports any weight, you have to read them fully; a single bug report could
point to incredible incompetence, while another developer's 20 bugs combined
could all be really minor issues.

Code reviews might be more applicable. But, like bug reports, the code being
reviewed could be as much the fault of past developers as the person trying to
make changes. There is also a tendency for peers to focus on minor issues like
coding style, so make sure your reviews are covering real problems before you
attach too much weight to the results.

Having said that, this is what I would do:

First, judge a programmer based only on the experience he claims to have. It's
just not fair to dump a new programmer onto a team and expect brilliance; if
his team is relatively brilliant, he will always appear incompetent. You
should, however, expect improvements over time (such as less time required to
fix things, or more instances where he speaks up with new ideas).

Next, judge a programmer based on individual work. Every programmer should
have some task, even a small task, that is only theirs, even if the majority
of the work requires teams. This makes it easier to judge how the programmer
tackles problems, how code looks, etc. This also avoids the pitfalls above, as
there are no excuses. Also, some programmers work best alone; you should be
seeing their best work, and if _that_ is shoddy, you should get rid of them.

Only after considering the above should you look at team work on large code
bases.

------
jacquesm
The fairest way to judge a developer - any developer - is long term stable
contribution to your project, in other words if they bring you value for the
money you spend.

If someone is a perfectionist and takes a year to complete a small item, then
it may be perfect, but by the time it is done (0 bugs to fix) the company may
be bust.

The opposite, someone who is extremely productive but so sloppy that you are
basically thrown into 'crisis mode' by default, is also untenable and not
value for money.

The sweet spot is somewhere in the middle, good enough to go to production in
an acceptable time.

Things like 'bugs fixed', lines of code or anything like that are not going to
cut it because they can be inflated or spoofed with great ease.

When you hire a developer you are buying somebody else's time with your
dollars; as long as that time is worth to you what you pay for it, you have a
good developer.

You can start 'qualitative' comparisons by employing several developers in
parallel, seeing which ones work best, and then slowly improving your 'mix'
over time.

There are no absolutes in this.

------
michael_dorfman
There's only one metric that counts: (perceived) value delivered to the
customer. Everything else is just a proxy.

------
cschneid
I'm going to slightly disagree with people here.

Using quantitative measurements to track developers is great, but not when
done stupidly. Using only one measurement is dumb (e.g., number of bugs alone,
or lines of code, or number of check-ins), and removing human participation is
dumb too. You're a manager, not a spreadsheet.

But, using quantitative measurements along with human feedback, peer review,
and other traditional measurements gives you an early warning setup, and a
better way of tracking changes in behavior.
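
As a hedged sketch of that early-warning idea (the numbers, the 2-sigma threshold, and the choice of story points as the signal are all invented assumptions, not a prescription), the trick is to flag changes against a developer's own recent baseline rather than to rank people against each other:

```python
# Sketch: flag a *change* in someone's weekly output relative to their
# own recent history, as a prompt for a conversation, not a verdict.
from statistics import mean, stdev

def flag_change(history, latest, threshold=2.0):
    """Return True if `latest` deviates from the baseline in `history`
    by more than `threshold` sample standard deviations."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return latest != mu
    return abs(latest - mu) / sigma > threshold

weekly_points = [8, 9, 7, 8, 10, 9]   # invented story-point history
print(flag_change(weekly_points, 3))  # sharp drop -> True, worth a chat
print(flag_change(weekly_points, 9))  # normal variation -> False
```

The output is a question ("what changed?"), never an answer; the human feedback and peer review supply the answer.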

This then allows you to have discussions with an employee like "I'd like you
to be more productive; it feels like you've been slacking a bit recently."
Then you have both visual clues (they're at the desk, not with HN open) and
graphical clues (hmm, your story points completed per week went up 10%,
awesome).

It's another tool. Don't confuse it with the only tool.

------
digamber_kamat
We have a consensus here :) Thanks all of you !

------
Confusion
I have found that it is impossible for me to write code without bugs. However,
I can write code whose bugs are such that I know within a few minutes where to
find each one and how to fix it. Not all bugs are created equal, and I pride
myself on creating shallow bugs that are easy to fix.

------
BobCat
Let me assume you are said developer:

"Is is fair to judge a developer's performance based on the number of bugs
filed against his name?"

Is it fair to judge _your_ performance by how many bugs _you submitted_?

The only answer is _yes_.

~~~
digamber_kamat
Assumptions should ideally be relevant to the conclusions. Besides, "you
submitted" is a wrong interpretation of my question: I specifically used the
words "bugs filed against his name".

