Hacker News




That's a great story. I can't resist quoting Dijkstra: If we wish to count lines of code, we should not regard them as "lines produced", but as "lines spent" -- the current conventional wisdom is so foolish as to book that count on the wrong side of the ledger.


I am very inspired by Dijkstra's high-level ideas on programming. Importantly, one of Dijkstra's fundamental assumptions was that you could actually understand your code base and reason about it. Excessive abstraction may create a degree of robustness that protects against programmers who don't understand the code base, under the assumption that no one will, but at the cost of eventually ensuring that no one can understand the code base in its entirety, or reason efficiently at both the macro and micro levels across large parts of it.


"the current conventional wisdom is so foolish as to book that count on the wrong side of the ledger."

Never seen that. Thanks for sharing; it might be a great way to drive the point home to business types. The amount of code certainly turns [for developers] from an asset into a liability as its size and usage grow. Hmm. Maybe such a presentation could always treat the amount of code as a liability, or as a neutral asset that produces benefits on the other side of the ledger or reduces them. A well-stated connection between the two might justify reducing technical debt.


The kind of managers who use lines of code as a performance metric are the kinds of managers I avoid like the plague. It usually indicates that they don't understand what I am working on and are likely to reward code firefighters more than code surgeons.


What do you think is a good measure of productivity?

In my opinion it is the number of requested features shipped minus the number of bugs introduced, weighted by the importance of each, as collectively decided by the team or the client.
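To make the idea concrete, here is a minimal sketch of that weighted score. The function, names, and weights are all hypothetical illustrations, not any real tracker's data:

```python
# Hypothetical sketch: productivity = weighted features shipped
# minus weighted bugs introduced. Weights are agreed on collectively.

def productivity_score(features, bugs):
    """Each entry is (name, weight); higher weight = more important."""
    return sum(w for _, w in features) - sum(w for _, w in bugs)

features = [("export to CSV", 3), ("login audit log", 2)]
bugs = [("crash on empty input", 4)]

print(productivity_score(features, bugs))  # 3 + 2 - 4 = 1
```

Of course, the hard part is agreeing on the weights, not computing the sum.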


Features is the wrong metric. There is no good quantitative measure, because what matters at the end is the degree of effectiveness at addressing a human need, which is always fuzzy at the heart of it, even though we may try to abstract that fuzziness into something with which we can work.


Not that I believe that you can boil down productivity to a single metric, but I've often found one of the best proxies for productivity is the number of automated tests added or modified. It's one that can be gamed, but absent that, it comes pretty close to capturing the amount of work that a programmer has completed since more complex features require more automated testing. It's also pretty easy to pull from CI build results, along with how often a developer was breaking and/or fixing the build.

Combine that with a code review process that will surface commits with excessive or insufficient tests and it was one of a couple ways I validated my feel for the work of my direct reports when it came time to fill out reviews.


Why should the number of times I break the build matter in any case? I break the build several times per week because when I test the application with F5 in Visual Studio it doesn't do a full build, so the tests may not compile. Furthermore, even if they compile they may be broken. That is the primary purpose of the build system: to run the tests and fail when something is wrong, so I don't have to spend time running tests on my machine. Once they fail on TeamCity I fix them. And I would be very surprised if any of my colleagues complained about it, given that only I use my feature branch.


Breaking your feature branch isn't breaking the build any more than breaking the build on your local laptop is. "Breaking the build" means breaking master or a release branch. Those are the ones that inconvenience others and limit response time to critical production issues.


I think that's about as close as you can get to a usable metric.

Of course, it still has problems. The metric probably has to be calculated long before the number of serious bugs is even known. On the other hand, good developers anticipate needs; there may be features implemented that the users haven't even realized they wanted yet. And of course, it's hard to come up with numerical weights for the features and bugs.

But the worst problem with this metric is that it doesn't count the maintainability of the code. That's an even harder thing to measure, of course.


What do you think is a good measure of productivity?

Profit? The trouble is that it is so far removed from the day-to-day work we do that it's almost impossible to draw any direct conclusions. So instead we start using proxies like function points, bugs, or lines of code. In the most dysfunctional organisations, even worse metrics get used, like time spent keeping a seat warm or political skill.


There is a development method called "Direct Development" which has arisen as a term to describe the organic programming model that many profitable APL endeavors have followed. It helps to eliminate the issue of metrics by eliminating the divide between the programming/IT unit and the users of the software. In companies that are using Direct Development, the metric is "are they giving me what I need?" And the way they accomplish this is by pair programming with the users of the software themselves. That's right, the users of the code actually participate directly in the development of the code. They write the code in such a way that the code itself becomes the specification analogue and is in fact the shared knowledge between the coders and the users. Not the comments, but the code itself. The users read the code, work with the programmers, and they make changes on the fly (with appropriate QA).

If your users have the confidence that they can walk up and get any feature they need implemented with you, that's about the best metric of success I can think of.



