Hacker News

Joe paints a three bedroom house in three days and gets paid $300.

Jim paints a 5 ft by 3 ft watercolor landscape in three days and sells it for $1,000.

They both get their job done in three days. One covers far more area with paint than the other. One gets paid more than the other. Both of their customers are happy with the result because they paid for it. Who is the most productive?

Is it even meaningful to ask that question?

Doesn't it depend upon WHAT it is that is produced? Doesn't the WHAT have to be the same in both cases to be able to say which is more or less?

Perhaps before we talk about being more or less productive we need to identify what it is that we are producing. We should make sure it's real, meaningful, and relevant to our purpose, and that we are all talking about the same thing. Then, finally, we must measure something that can't be easily inflated or faked. Only then can we make more-or-less comparisons and say something other than words without real meaning.
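The painter comparison can be made concrete with a quick calculation; the ranking flips depending on which output metric you pick. (The wall-surface figure for a three-bedroom house is a made-up assumption for illustration.)

```python
# Hypothetical comparison of the two painters from the example above.
JOE_AREA_SQFT = 2000   # assumed paintable wall surface of the house
JOE_PAY = 300
JIM_AREA_SQFT = 5 * 3  # the watercolor: 5 ft x 3 ft
JIM_PAY = 1000
DAYS = 3               # both jobs take three days

def rates(area, pay, days):
    """Return (area painted per day, dollars earned per day)."""
    return area / days, pay / days

joe_area_rate, joe_pay_rate = rates(JOE_AREA_SQFT, JOE_PAY, DAYS)
jim_area_rate, jim_pay_rate = rates(JIM_AREA_SQFT, JIM_PAY, DAYS)

# Measured by area covered, Joe is far more "productive"...
print(joe_area_rate > jim_area_rate)   # True
# ...but measured by revenue, Jim is.
print(jim_pay_rate > joe_pay_rate)     # True
```

Same three days, two different "most productive" answers, which is exactly why the WHAT has to be pinned down first.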

As I see it we have two questions to answer:

What is it that we are actually producing when we write software?

How can we reliably measure it?

We don't have good answers for either. Actually, I don't think our bad answers are even moderately good.

Isn't it interesting that a large and increasing fraction of the world's economy is based on something we can't really define and can't actually measure?



Your painting analogy is really solid. There is a remarkable range in the ways that software needs to scale, and that captures it.

    Doesn't it depend upon WHAT it is that is produced? 
A productivity issue that doesn't get discussed often is the bridge between software and the people who work with it. For example, developers often draw the line at "works on my machine" or "I can deploy it to production" or "I can work out bugs". But these things don't take into account the scenarios of customers, deployers and support operators: documentation, logging, troubleshooting and the amount of time spent by those customers and ancillary staff doing those things. Effectiveness at these things is generally far more important than whether a programmer can write something in one day or five, but the typical developer is a blinkered creature.


when we're measuring productivity, what are we trying to measure? i mean, really? ultimately it's how much value is created, in what span of time. relevant to this example, the questions to be asked are:

1) how much does joe's paintjob increase the value of the home for the HGTV-obsessed house-flipper?

2) how much will the value of jim's watercolor appreciate after he kills himself following a week-long binge of heroin?

put another way: how efficiently is value being created? the important answers lie more often in the decision process, and less often in the implementation schedule.

for example, a web developer working on a brochureware site: the right question may be along the lines of 'how will the work you're doing affect the conversion rate?'. in reality, those decisions aren't up to him -- they might lie in the internet strategy department or some such. they push the order, the developer implements. a real measure of results is closer to 'did the designer make the right decisions to yield results?' than 'did the developer get the code done more or less quickly than last time?' in my experience, when someone who is a non-developer is trying to quantify developer 'productivity', what they're really doing is dodging the question of whether or not their decision-making was good or bad, usually when they know it's bad and are looking for something to roll downhill.

more interesting to me, as a developer and sometimes manager, is whether or not a developer can deliver when they say they can. if clay says it'll be done in a week, and then it is, he's been sufficiently productive. 'sufficiently productive' is maybe the best thing we can do. the race analogy is an apt one -- if developer 1 can produce a set of functionality with the same defect level faster than developer 2, we can say that d1 is 'more productive', but it's a difficult thing to quantify absolutely.

take the number of hours a developer spends working on a given project point, multiply it by their hourly rate, and you get a development cost for that project point. then consider the value that project will add to the product, over time. if you have 30,000 customers who haven't bought the product yet because it doesn't have Feature X, and adding Feature X costs Y dollars, then your value-add is something like Z = customers * product price. your efficiency is something like Z/Y.
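a quick sketch of that arithmetic. the 30,000 customers comes from the comment; the price, hours and hourly rate are invented assumptions purely for illustration.

```python
# Value/cost efficiency sketch; all numbers except the customer count
# are hypothetical assumptions.
customers = 30_000      # customers waiting on Feature X
product_price = 50.0    # assumed price per unit
dev_hours = 200         # assumed hours to build Feature X
hourly_rate = 100.0     # assumed developer rate

Y = dev_hours * hourly_rate        # development cost
Z = customers * product_price      # value added: Z = customers * price
efficiency = Z / Y

print(f"cost Y = ${Y:,.0f}, value Z = ${Z:,.0f}, efficiency = {efficiency:.1f}")
```

note that everything hard about the original question hides inside the inputs: nobody actually knows dev_hours or the customer count in advance.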

clearly you want to minimize cost, and therefore keep your efficiency as high as possible, but there are ways to manipulate that, and a lot of points to consider:

a simple example -- if you can hire a developer that will take twice as long, but work at 1/3 of the rate, would the revenue lost during the extended development period negatively affect the profit efficiency?
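that trade-off is easy to work through with invented numbers; every figure below is a hypothetical assumption, not data from the comment.

```python
# Cheaper-but-slower trade-off sketch; all figures are invented.
def effective_cost(months, monthly_salary, monthly_feature_revenue,
                   baseline_months):
    """Salary cost plus revenue foregone for every month past the
    faster option's delivery date."""
    delay = max(0, months - baseline_months)
    return months * monthly_salary + delay * monthly_feature_revenue

FEATURE_REVENUE = 10_000   # assumed revenue per month once the feature ships

fast_dev = effective_cost(1, 15_000, FEATURE_REVENUE, baseline_months=1)
slow_dev = effective_cost(2, 5_000, FEATURE_REVENUE, baseline_months=1)

# The developer at a third of the rate ends up costing more overall.
print(fast_dev < slow_dev)   # True
```

with these numbers the slower developer's lower salary ($10,000 total vs $15,000) is swamped by the $10,000 of revenue lost during the extra month -- which is the point: the important questions sit above the hourly rate.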

the important questions are somewhat higher up.


Hmmmm..... Maybe a not so horrible bad answer is as good as we can do.

I like the notion that productivity should be considered as rate of delivery of value. Unfortunately, this gets into the sticky questions of value to whom and for what purpose. An even more sticky question would be how to measure it if you can answer the first two questions.

On the "somewhat higher up" questions. Are we to hold the programmer responsible for such things as "conversion rates" which may be more impacted by advertising, quality of web site, and momentary media buzz than by anything the programmer did? If so, how is this measuring the productivity of the programmer. Its difficult to make the connection. That is unless you are the whole team from start to finish.

Perhaps the best we can do is compare present and past "productivity" to see what has improved and what has not. If there is a net improvement, then "productivity" has increased else not. Trying to put a number on it beyond plus or minus may be a hopeless dream.

Still, for something that seems to be driving the world's economy, a hopeless dream is not very satisfactory.


no, we absolutely should not hold the developer responsible for the higher up questions -- the responsibility should fall on the shoulders of the person who made the decision. did the developer implement what he was asked? did he render solid feedback which was promptly ignored? if so, he bears no responsibility for the quality of the decision, only the quality of the implementation. if he made the decision, of course, that's a different matter. and in any case, the measurement of his 'productivity' is meaningless as a part of this metric.

comparing two programmers is possible. comparing a single programmer against a theoretical ideal is improbable. finding a remotely accurate formula for measuring developer productivity and yielding data that's actionable in the business space is really, really unlikely.



