
Ask HN: Developer performance metrics? - canterburry
Does anyone know of, or can anyone think of, a system for measuring useful developer performance metrics?

I was thinking of something purely data driven that connects to build, source control, and code quality reports, and builds a picture of each developer on the team based on their contributions, breaks, fixes, number of languages worked in, etc.

Do you think it's even possible to build a good picture of a developer's performance from data-driven metrics?
======
allanmacgregor
Going through the same process myself. Short answer: there is no easy metric
that really quantifies a developer.

I would start by measuring projects or teams first, getting a solid set of
metrics for each; once you have that, you can start drilling down to individual
developers.

Even so, tracking developer productivity can be hard; a good chunk of our work
happens in our heads. Who is more valuable: the developer who wrote thousands
of lines of code, or the one who changed one line but fixed a critical bug in
production?

I highly recommend this book:
[http://shop.oreilly.com/product/0636920020134.do](http://shop.oreilly.com/product/0636920020134.do)

~~~
ctoestreich
Moreover, "bugs" can linger in code for a long time, and you can't really
track that metric. Code changing over time in a repository could be bug fixes,
refactoring, rewrites, etc. I think the original poster's question was more
about whether existing tools track these things, to which the answer, I feel,
is no. Combining blog posts, Stack Exchange score, speaking topics, and other
real-world signals can give you some semblance of a developer's engagement
with the development community, but it doesn't translate one-to-one into
their actual productivity on a team, although these things usually indicate a
greater potential for success.

------
canterburry
My challenges are mainly around retention and grooming the team. We work with
lots of offshore resources who change frequently and whom we don't always talk
to directly. However, when changes happen, we want to retain the best and know
who to cut. I'd also like to know who I can pull from where when a new project
comes up.

I was hoping to gather metrics in the following dimensions:

1\. Versatile (can work on multiple stacks and languages)

2\. Team Player (watches out for the team and does what needs doing, regardless of whether it's assigned to them)

3\. Detail oriented and delivers quality code (doesn't break the build, good test coverage, adheres to formatting guidelines)

4\. Productive

5\. Communication skills (can and does communicate appropriately)

For many of these, we could gather proxy measures, e.g.

1\. Versatile (in the last year, how many different file types did this person touch or work on?)

2\. Team Player (did they fix a build broken by someone else? how often do they fix their own broken builds?)

etc etc
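
For what it's worth, the "file types touched" proxy above can be scraped straight out of version control. A minimal sketch, assuming the input is the output of something like `git log --since="1 year ago" --author="<dev>" --name-only --pretty=format:` (one changed-file path per non-empty line); the command shape and the idea of keying versatility off file extensions are my assumptions, not an existing tool:

```python
import os

def count_file_types(git_log_output: str) -> int:
    """Count distinct file extensions in `git log --name-only` output.

    git_log_output: raw text with one changed-file path per non-empty
    line, e.g. from:
        git log --since="1 year ago" --author="<dev>" \
                --name-only --pretty=format:
    """
    extensions = set()
    for line in git_log_output.splitlines():
        path = line.strip()
        if not path:
            continue  # blank separator lines between commits
        ext = os.path.splitext(path)[1]  # '' for extensionless files
        if ext:
            extensions.add(ext.lower())
    return len(extensions)
```

E.g. `count_file_types("src/app.py\nweb/index.html\nsrc/util.py\n")` returns 2. Of course this inherits all the caveats raised elsewhere in the thread: a reviewer who touches no files scores zero.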

~~~
sheepmullet
> 1\. Versatile (in the last year, how many different file types did this
> person touch or work on?)

These kinds of metrics never provide accurate information. You might as well
pick your "good" devs out of a hat.

For example, I'm the main code reviewer for our front end solution but I
haven't written a single line of code for it. Am I less versatile than a dev
who does basic work across the entire project?

~~~
canterburry
I agree. Leads and other management roles are harder to peg, but usually
that's known simply from the person's role. A good dev lead could be judged by
the number of code review comments or the % of code reviewed. Since a lead is
also responsible for the team, their performance could be judged by overall
team measures such as % test coverage of the code base, how long a ticket sits
around, etc. Perhaps the team's metrics become a way to judge the lead?

But I agree, it's all imperfect. I also don't think these metrics would be the
only criteria by which someone is judged.

------
ramtatatam
I'm facing the same problem: I need to measure performance for each member of
my team, and indeed there is no easy solution here.

Currently I'm measuring:

\- How many tickets have been closed by each developer (we use Mantis)

\- The average, the average of the 5 best, and the average of the 5 worst
times for solving tickets

\- For each of those averages, the corresponding average number of lines
changed (Mantis connected with git)

So for a given period of time I know:

\- how many tickets were closed by each dev

\- on average: the best time, the worst time, and the usual time for solving
tickets

\- on average: the number of lines of code for the worst-time tickets, the
best-time tickets, and the usual tickets

I then look at whether a developer's overall averages sit closer to the lower
or the upper bound.
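
The bookkeeping above is simple enough to sketch in a few lines. A minimal version, assuming each closed ticket has been reduced to a `(hours_to_close, lines_changed)` pair (the pairing of Mantis and git data is from the comment; the function name and exact fields are mine):

```python
def summarize(tickets, k=5):
    """Summarize (hours_to_close, lines_changed) pairs for one developer.

    Returns the ticket count, the overall average time, the average time
    of the k fastest and k slowest tickets, and the average lines changed
    within each of those groups. Assumes len(tickets) >= k.
    """
    def avg(xs):
        return sum(xs) / len(xs) if xs else 0.0

    by_time = sorted(tickets, key=lambda t: t[0])
    best, worst = by_time[:k], by_time[-k:]
    return {
        "closed": len(tickets),
        "avg_time": avg([t for t, _ in tickets]),
        "avg_time_best": avg([t for t, _ in best]),
        "avg_time_worst": avg([t for t, _ in worst]),
        "avg_lines_best": avg([lines for _, lines in best]),
        "avg_lines_worst": avg([lines for _, lines in worst]),
    }
```

Comparing `avg_time` against `avg_time_best` and `avg_time_worst` then gives the "closer to the lower or upper bound" reading described above.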

However, all of the above is just for orientation. My team is still small
enough that I know each developer personally, and I'm involved in solving most
issues, so I rely on that knowledge as well.

~~~
gus_massa
I'm not sure about tickets, but I grade a lot of math tests. In some cases
there are a lot of easy-to-grade tests (almost empty, straightforward
solutions, easy-to-read handwriting, ...) and a few very difficult-to-understand
cases. Some of the graders "specialize" in the difficult papers; they grade
very few, but their contribution is very important.

Not all bugs/tickets are created equal. Just counting them would incentivize
cherry-picking the easy ones.

~~~
ramtatatam
That's why the number of tickets is only part of the story. The time and lines
of code add additional insight. Anyway, this was never meant to be a "precise"
indicator; the goal was to have some sort of analysis that is easy and fast to
come up with.

------
weee_username
When I'm measuring developer performance I look at one stat: do they hit
reasonable deadlines without generating a ton of bugs?

Why? Because the number of tickets, the amount of code, or even the time spent
"logged in" doesn't matter. The only thing that matters to a business is
whether they can deliver on time, so the product keeps moving forward and
others aren't waiting on them.

~~~
ramtatatam
So you assume your devs are senior enough to come up with achievable deadlines
even when under pressure. Such a methodology will favor senior people, or
people who can negotiate better time frames for delivering their bits.

------
ctoestreich
I am also curious about this.

