Hacker News

I haven't read anything, including the subject of this thread, any number of language-advocacy opinion pieces, and software-development-methodology opinion pieces, that in any way compares with the empirical, data-based work of Manny Lehman (R.I.P.).

http://ai2-s2-pdfs.s3.amazonaws.com/5454/5a907a43c798c1193be...

His resulting model of software evolution explains many observations we engineers make about "feature creep," "technical debt," the balance of effort between development and maintenance, etc.

I think his model also helps point the way to techniques we can use (e.g., DSLs in preference to OOP) to help improve how we build and maintain software.




These aren't laws of nature. It's just people being shit. The solution is fixing our culture so that people aren't shit.


Sorry, I don't understand exactly what you mean by those two statements. Would you please clarify?

I of course realize that people are not perfect (along any axis at all), but I think the importance of this work is highlighting:

(1) software must evolve in order to continue being useful (is this the "cultural" part of your comment?), and

(2) that there are some unavoidable limitations of human capability to achieve that (is this the "people being shit" part of your comment?), particularly in dealing with complexity. And that complexity is an increasing function of lines-of-code count.

So the obvious conclusion to draw (for me, anyway) is that we should work to minimize SLOC by various means: DSLs and/or expressive, powerful languages.
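To make the SLOC-minimization point concrete, here is a small hypothetical sketch (my own illustration, not taken from Lehman's paper): the same record-validation logic written imperatively, and then as a tiny declarative DSL. In the DSL version the per-field logic is data, so adding a field is one line instead of several, and the interpreter is written once.

```python
# Imperative version: each rule is hand-coded control flow,
# so every new field adds several lines of branching.
def validate_imperative(record):
    errors = []
    if not isinstance(record.get("name"), str) or not record["name"]:
        errors.append("name must be a non-empty string")
    if not isinstance(record.get("age"), int) or record["age"] < 0:
        errors.append("age must be a non-negative integer")
    return errors

# DSL version: rules are declarative data; the interpreter below
# is written once and reused for every rule.
RULES = {
    "name": lambda v: isinstance(v, str) and bool(v),
    "age":  lambda v: isinstance(v, int) and v >= 0,
}

def validate_dsl(record):
    return [f"{field} failed validation"
            for field, ok in RULES.items()
            if not ok(record.get(field))]
```

Both functions reject the same bad inputs; the difference is where future change lands: in the DSL version, evolving the spec mostly means editing the rule table, not the control flow.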


(0) We need a culture that penalizes writing incorrect specifications and programs as harshly as possible.

(1) We perform vastly below our potential because we spend so much time working around our own and each other's bugs. People are shit not because they're dumb, but because they stick to the wrong attitude in the face of disastrous results.


Regarding (0): I am not sure what penalty mechanisms would work better than those currently in play (e.g., disuse/cloning in the case of open source, bankruptcy in the case of closed source, etc.).

I believe Lehman formalizes the dichotomy you imply as bugs in the "model" (you say "specification") versus bugs in the model's implementation. The distinction matters because the model/spec is an evolving, moving target, subject to evolutionary forces. The number of bugs in the implementation of any given model's snapshot in time is an increasing function of SLOC, and updating the model can introduce latent bugs in previously bug-free code. (Which I take to mean we should minimize lines of code for any given amount of functionality: less to write, fewer lines containing bugs, less to modify, etc.)
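The "updating the model can introduce latent bugs" point can be illustrated with a toy example (hypothetical, mine, not Lehman's): a helper that was correct under the old model acquires a bug when the model evolves, without a single edit to the helper itself.

```python
# Written when the model/spec said: "record IDs are positive integers."
# Under that model, this one-liner is entirely correct.
def next_id(current_id):
    return current_id + 1

# Suppose the model later evolves to: "IDs may also be UUID strings."
# The unchanged code below is now buggy for the new inputs: it raises
# TypeError on a string ID. The defect arrived via model drift, not via
# any edit to this function.
```

A correctness check under the old model still passes (`next_id(41) == 42`), while the same code fails for inputs the evolved model now permits, which is exactly the "latent bugs in previously un-buggy code" phenomenon.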

Regarding (1): I agree that we perform vastly below our potential due to bugs, but also due to many other factors, though it might be hard to agree on what those contributing factors are and on their relative contributions.



