‘Laws’ of Software Development (exceptionnotfound.net)
315 points by btimil on April 26, 2016 | hide | past | favorite | 71 comments

Missing some big ones:

Gall's Law: "A complex system that works is invariably found to have evolved from a simple system that worked. A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over with a working simple system."


Conway's Law: "organizations which design systems ... are constrained to produce designs which are copies of the communication structures of these organizations"


Cunningham's Law: "the best way to get the right answer on the Internet is not to ask a question, it's to post the wrong answer"


Also, Kernighan's Law: "Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it."

Notice that this can be interpreted in two ways. The hacker's interpretation is that if you want to improve your smarts, you have to code as cleverly as possible; otherwise you cannot learn anything when debugging! This is called Kernighan's lever: http://www.linusakesson.net/programming/kernighans-lever/

This is perfect! I've never heard of that interpretation before. Thanks for sharing!

Was bitten many times by this one while coding stuff in Scala.

There are many other laws, e.g.:

Zawinski's Law of Software Development: Every program attempts to expand until it can read mail. Those programs which cannot so expand are replaced by ones which can.

Atwood's Law: Any application that can be written in JavaScript, will eventually be written in JavaScript.

But there is perhaps no greater legal scholar than Norman Augustine:

Law XVI: Software is like entropy. It is difficult to grasp, weighs nothing and obeys the Second Law of Thermodynamics: i.e., it always increases.

Law XXX: The optimum committee has no members.

Law XXXV: It is true complex systems may be expensive, but it must be remembered that they don't contribute much.

Much recommended: http://www.amazon.com/Augustines-Chairman-Lockheed-Corporati...

Law XVI's exception is XFS, which has had a steadily declining LOC count for years.


It's not an exception. Like entropy, software can be reduced locally, but it can never be reduced globally.

If one replaces two lines of code with one line of code, there are three lines of code in the world. Information cannot be destroyed.

I take it you've never had a failed backup restore?

Security is a forethought, not an afterthought.

Performance is a constraint, not a goal.

The 15 laws succinctly:

Do what is simplest, and remember, everyone is attacking you with stupidity. Most of your simple work will be done quickly, a small amount will be done in the time remaining. You will think you are better at doing all this than you are. To wit, seek the help of others to talk you down-- accept things from others. You'll need it because eventually you become a stranger even to yourself. And from there you'll rise to the level of stupidity you detested not too long ago. Or maybe some unwise fool will intercede on your behalf, dragging you up to their level of idiocy. This will all take longer than you think, because all along the way you underestimated your skills, but don't worry there'll be plenty of work to fill the time. Most of it will be petty, and you'll fight over the least essential things, but that's because you chose to work in a field full of smart people who continually underestimate their abilities.

Love this. Though isn't this more about overestimating our skills?

Seems to make more sense given "we think we are better at doing this than we are".

I think I'm going to print this out in a big font and hang it in my office.

Careful with Postel's Law: it brought us a lot of the mess we have with web standards.

Quoting wikipedia https://en.wikipedia.org/wiki/Robustness_principle:

> In RFC 3117, Marshall Rose characterized several deployment problems when applying Postel's principle in the design of a new application protocol.[3] For example, a defective implementation that sends non-conforming messages might be used only with implementations that tolerate those deviations from the specification until, possibly several years later, it is connected with a less tolerant application that rejects its messages. In such a situation, identifying the problem is often difficult, and deploying a solution can be costly. Rose therefore recommended "explicit consistency checks in a protocol ... even if they impose implementation overhead".

The other big problem with Postel's law is that it makes enforcing security guarantees a nightmare. When each component in a system is "liberal in what it accepts", it's very easy to fall into a trap where component A thinks that component B is performing the security checks and component B thinks that component A is performing the check and as a result the security check is either never performed or trivially sidestepped.

A good example of this was the vulnerability with null characters in SSL certificates [0]. In that vulnerability, both the browser and the certification authority interpreted the domain with the null character as "liberally" as possible, but their differing interpretations of what constituted "forgiving" made it trivial to spoof SSL certificates merely by including a null character in your domain.
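The mismatch can be sketched in a few lines of Python (hypothetical helper names, not the actual browser or CA code): the issuing side validates the full common name, while a C-string consumer truncates at the first NUL byte, so the two sides "see" different hostnames.

```python
# Hypothetical sketch of the NUL-byte certificate bug. The CA validates
# the full common name (the attacker really does own badguy.com), while
# a client built on C strings truncates at the first NUL byte and so
# trusts a different hostname entirely.

def ca_view(common_name: bytes) -> bytes:
    # The CA treats the name as one opaque string and checks ownership
    # of the whole thing.
    return common_name

def c_client_view(common_name: bytes) -> bytes:
    # C strings end at the first NUL byte; everything after it is
    # silently dropped.
    return common_name.split(b"\x00", 1)[0]

cn = b"www.paypal.com\x00.badguy.com"
print(ca_view(cn))        # the attacker-owned name the CA signed off on
print(c_client_view(cn))  # b'www.paypal.com' -- what the client trusts
```

Both components were "liberal" in their own way; the disagreement between the two liberal interpretations is the hole.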

Another example is with guarding against XSS vulnerabilities. Browsers try to be liberal in what constitutes valid Javascript, and will attempt to execute even the most malformed scripts they encounter. This makes it unnecessarily difficult to shield against XSS - one must block a whole range of vectors, rather than just script tags.
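To make that concrete, here is a toy Python illustration (the filter and payloads are made up): a blocklist that strips well-formed script tags is bypassed precisely because browsers will also execute markup the filter never anticipated.

```python
import re

# Toy filter that strips only well-formed, lowercase <script> tags.
# Because browsers parse liberally, plenty of malformed or alternative
# vectors still execute, so this kind of blocklisting is a losing game.
def naive_filter(html: str) -> str:
    return re.sub(r"<script>.*?</script>", "", html, flags=re.DOTALL)

assert "steal" not in naive_filter("<script>steal()</script>")   # blocked
assert "steal" in naive_filter("<ScRiPt>steal()</ScRiPt>")       # case trick survives
assert "steal" in naive_filter("<img src=x onerror=steal()>")    # no script tag at all
```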

[0]: http://trevorjim.com/postels-law-and-network-security/

How I wish more engineers would take this precaution, as it's arguable the law caused more harm than good, well-intentioned as it might be.

Engineering with unspecified bounds often needlessly introduces complexity and cost.

That may be true, but it's only a problem if you believe we are capable of designing sane, useful standards initially. Postel's law allows a limited form of evolution for standards. Sometimes that allows for useful extensions to be adopted into later revisions, other times it can run rampant and create monstrous mutations (which might still be accepted. I'm looking at you, IMAP).

Being liberal in what you accept is fine, iff you strictly define the variations in the grammar.

Meredith and Sergey's outstanding talk about properly parsing all input discusses this update to how Postel's Law should be interpreted.


I would argue that if you're strictly defining the accepted variations in the grammar, then all you've done is define a new grammar that you're conservatively adhering to. And that's fine! There's no rule that says that standards cannot evolve to match new use cases. However, I would argue that having a strictly defined (yet evolving) standard is much different than the common interpretation of Postel's Law, which is that undefined behavior should "do the right thing", with the specific interpretation of "the right thing" being left to the implementers. Predictably, different implementers interpret that in different ways, which leads to security holes. But if the specification defines what the right thing to do is, then the behavior is no longer undefined, and therefore Postel's Law no longer applies.

Thanks! The link is missing a trailing `y`.

Yeah. I agree that this is a slippery slope. SMTP followed this, and there were a lot of whacky things, then qmail came and rejected the bad implementations.

Perhaps it's better to use this law for UI, where being liberal with accepting things by trimming white space is better than rejecting, which would confuse an end user.
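A minimal sketch of that idea (field names are hypothetical): be liberal only where whitespace is almost certainly unintended, and conservative where it might be meaningful.

```python
def clean_email(raw: str) -> str:
    # Liberal: surrounding whitespace in an email field is almost
    # certainly a copy-paste artifact, so trim it instead of rejecting.
    return raw.strip()

def clean_password(raw: str) -> str:
    # Conservative: whitespace in a password may be deliberate, so it
    # must pass through verbatim.
    return raw

assert clean_email("  alice@example.com \n") == "alice@example.com"
assert clean_password(" hunter2 ") == " hunter2 "
```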

Perhaps, but we introduce a new implicit specification by doing this, possibly closing the door for situations where white-space is relevant.

The common appearance of the ignorant user seems to stem from generalizing patterns as a holdover from one UI to another. Perhaps we should better study these patterns and steer the user toward a consistent path. But instead, it seems we're trying to outsmart him time and time again, creating a jungle of user interactions that merely belittle and confuse.

I think it's okay when dealing with input from hairless monkeys. That people just aren't particularly consistent is part of the problem domain.

Not so okay when dealing with input created by programs.

And as you said totally not okay when dealing with protocols, encryption, and authentication. See padding attacks.

This is better observed as having a separate standards compliance check and 'reference implementation'.

The standards compliance check evaluates if a given program or module complies with the written standard. The reference implementation is free to be more fault tolerant or otherwise assume that invalid input is allowed to produce undefined results (as long as they are not clearly the wrong behavior).
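A small Python sketch of that split, using a made-up key=value line format: the compliance check is strict against the written standard, while the reference implementation is free to be more tolerant as long as it never does the clearly wrong thing.

```python
# Hypothetical "key=value" line format. The compliance checker is strict;
# the reference implementation tolerates extra whitespace but still
# rejects genuinely unparseable input.

def complies(line: str) -> bool:
    # Strict: exactly one '=', non-empty key, no surrounding whitespace.
    key, sep, value = line.partition("=")
    return sep == "=" and key != "" and key == key.strip() and value == value.strip()

def parse(line: str) -> tuple[str, str]:
    # Tolerant reference implementation: trims whitespace around the
    # key and value rather than rejecting it.
    key, sep, value = line.partition("=")
    if sep != "=" or not key.strip():
        raise ValueError("unparseable line")
    return key.strip(), value.strip()

assert complies("color=blue")
assert not complies(" color = blue ")                 # fails the compliance check...
assert parse(" color = blue ") == ("color", "blue")   # ...but still parses sensibly
```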

The Gervais Principle is worth reading about:


Related to the Peter & Dilbert Principles, but darker and told through the lens of the Office.

It took me two days to read it, so I'll provide a summary:

In a corporate hierarchy are the losers (bottom), clueless (middle), and sociopaths (top).

Losers: Economic losers in that they prefer a fixed monthly paycheck over their fair share for their contribution. Risk-averse. They know they made a bad deal and don't do more than expected. They play social validation games with each other to feel better in order to stand their fate.

Clueless: They believe in the corporate story and work more than they're paid for. They're promoted into middle management, where they serve as a buffer between the top management (sociopaths) and the workers (losers). They execute all ideas from top management, from the stupid through the risky to the good. If something succeeds, the sociopaths take the credit. If something fails, the clueless take the blame.

Sociopaths: Nihilists who emotionally broke under the realization that no true values exist in this universe, so they're free to make values and goals up as they like. Those stories are given to the clueless to believe in. In every corporate life cycle exist different sociopaths. In the beginning they're the risk-takers with a vision and the willpower to push something new through. In established companies they kill off new and threatening ideas to protect the successful products. During decline they milk the company for the remaining values, then leave the sinking ship and hand responsibility over to the clueless to let them take the blame.

One important part are the different languages of the three classes. Losers have "game language", the social games they play to validate each other, but the talk and the content are unrelated to reality and don't advance them in the hierarchy. The clueless try to speak like the sociopaths but don't realize that one has to have stakes in the game (i.e. something to lose or something valuable to offer) in order to be taken seriously, so their language is full of empty phrases and threats. The sociopaths' "power talk" is full of hints (without commitments) that are only understood by equals and when they talk about something they have to lose or gain something -- it's about the real things, not the empty phrases the clueless use.

A seminal piece from that author, and is still relevant 8 years later.

One of my favorite quotes -ever-:

Any intelligent fool can make things bigger and more complex... It takes a touch of genius - and a lot of courage to move in the opposite direction.

E. F. Schumacher

It seems that perfection is attained, not when there is nothing more to add, but when there is nothing more to take away. — Antoine de Saint Exupéry

Don't forget Wiio's laws ("Communication usually fails, except by accident"). See https://www.cs.tut.fi/~jkorpela/wiio.html

Though I had adopted the maxims contained within as worthy of remembrance, I had searched in vain to find this page again after my first encounter with it several years ago; thank you for restoring it to my bookmarks list.

I'd be tempted to add Brooks' law, that adding staff to a project that's already late just serves to make it even later.

My favorite one from him is: "How does a project get one year late? One day at a time." as it captures the priority decisions that you make consciously or unconsciously all the time.

Brooks' Law: "Adding manpower to a late software project makes it later."


Over on the education side of things, we have a lot of problems with Campbell's law. Once people focus on a quantitative indicator to evaluate things of any importance, it essentially gets corrupted. Like test scores, university rankings... On the software side of things, there might be things ranging from website hits, ad clicks, page rank, karma points, etc. More on the development side there might be things like commits, github stats, etc.


ReRe's Law of Repetition and Redundancy

A programmer can accurately estimate the schedule only for the repeated and the redundant. Yet,

A programmer's job is to automate the repeated and the redundant. Thus,

A programmer delivering to an estimated or predictable schedule is...

Not doing their job (or is redundant).

"You can never learn from the 14 laws without experience but with experience you will know what you should have learned from them." -webdude

Greenspun's Tenth Rule: "Any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp."


Dijkstra's Law: "Simplicity is a great virtue but it requires hard work to achieve it and education to appreciate it. And to make matters worse: complexity sells better."

    90% of this list is crap -- https://en.wikipedia.org/wiki/Theodore_Sturgeon

The DateField hypothesis:

For any given reasonable estimate for an item of software, add 2^N days to that estimate, where N = the number of date/datetime fields involved on that form or database table.

e.g. You have a complex data entry screen to add to a webpage, and you KNOW for certain that you can complete it thoroughly in 2-3 days tops, testing every possible edge case. Good stuff.

However, if that screen has 2 date fields involved, then it's going to take 2^2 or 4 extra days over and above the reasonable estimate to get it done properly, and handle NULL dates, timezone differences, comparison of dates for equality, parsing date inputs, converting internal representations between front end / back end / storage ... and every other unexpected abomination.
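The half-serious formula as a one-liner (the function name is mine):

```python
def datefield_estimate(base_days: float, n_date_fields: int) -> float:
    # DateField hypothesis: add 2^N days for N date/datetime fields
    # on the form or database table.
    return base_days + 2 ** n_date_fields

assert datefield_estimate(3, 2) == 7  # the "2-3 day" screen with two date fields
```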

I think a better conclusion to draw from Dunning-Kruger is that self assessment can be problematic.

I would agree with this. It's broader and still applicable.

Dunning-Kruger actually said everyone is bad at self evaluation, not just the low skilled.

And that neatly applies to his law of argumentative comprehension, which should really read "The more people _think they_ understand something, the more willing they are to argue about it, and the more vigorously they will do so."

The Hanlon's Razor portion vastly underestimates the sheer amount of contempt and vitriol that many people in business harbor for their technology staff.

Sometimes (oftentimes, in my experience) they really do hate you.

Occam's razor has more priority than Hanlon's razor.

Sometimes assuming just a tiny bit of malice can account for layers and layers of presumed stupidity.

Atwood: "Any application that can be written in JavaScript will eventually be written in JavaScript"

Zawinski: "Every program attempts to expand until it can read mail. Those programs which cannot so expand are replaced by ones which can." s/mail/API etc

And I can't find this one (IBM study?) "Adding 10% of features increases cost by 100%"

> Adding 10% of features increases cost by 100%

Too bad the inverse isn't true. Would be nice if we could cut costs by 100% just by ditching a tenth of the features.

The inverse gets you bumping up against Pareto, though, so...pretty close.

Valid not just for software development but any kind of engineering effort.

A funny and ironic law:

Good software will grow smaller and faster as time goes by and the code is improved and features that proved to be less useful are weeded out. [from the programming notebooks of a heretic]

If I were to quote the above to a layman--someone well outside the world of software development--the response would be, "Well of course that's what would happen, it's obvious."

For every stakeholder involved in your project, completion of the project is delayed by 6 months.


Linus' Law holds less water when you consider using formal specifications, model checking, and -- if you have the budget and time -- proof. These systems and languages have come a long way and are far better than any human at checking your work and telling you where you are making mistakes.

Model checking in particular, in my limited experience using it, is effective at this.

You can eliminate many (not all... nothing's perfect) of these rules of thumb with a little math and analysis[0].

[0] https://www.amazon.ca/Programming-1990s-Introduction-Calcula...
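For a taste of what model checking does, here is a toy explicit-state checker in Python (my own sketch, not a real tool like TLC or SPIN): it exhaustively explores every reachable state of a two-traffic-light "protocol" and returns a counterexample trace when the invariant "never both green" can be violated.

```python
from collections import deque

# Minimal explicit-state model checker: breadth-first search over all
# reachable states. The toy system is two traffic lights that each
# toggle red <-> green independently, which (deliberately) violates
# the mutual-exclusion invariant "never both green".

def successors(state):
    a, b = state
    yield ("green" if a == "red" else "red", b)  # light A toggles
    yield (a, "green" if b == "red" else "red")  # light B toggles

def check(initial, invariant):
    seen = {initial}
    queue = deque([(initial, [initial])])
    while queue:
        state, trace = queue.popleft()
        if not invariant(state):
            return trace                  # counterexample: path to the bad state
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, trace + [nxt]))
    return None                           # invariant holds in every reachable state

trace = check(("red", "red"), lambda s: s != ("green", "green"))
print(trace)  # shortest path from the initial state to the violation
```

Real model checkers add fairness, temporal properties, and clever state-space reduction, but the core idea -- mechanically visiting every state a human would have to reason about -- is exactly this.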

The guy bf

The more complex a software system is, the more the people not involved in its creation will criticize the colors and placement of buttons on the UI.


I'd like to see a list of analogies, so I can explain better to my boss/clients/friends why constructing software is hard.

My favorite reason is: By definition nobody builds software that's ever existed before.


I've worked on CRUD apps for my whole career, and I've never seen one that didn't have some novel requirements.

Even when you're using some base that already exists (often a highly configurable thing that's been sold to you by someone else), you STILL have the frequent stories of nightmares involving either an inability to configure it to the desired specification or cost and time overruns of epic proportions.

YouTube is essentially a CRUD app.

What about Conway's Law?

oh no! Not another set of laws. I can't remember so many.

~~laws~~ rules of thumb

A "law", in the scientific sense, is really just a detailed observation.

Good point.

Note also that some "laws" may remain as "laws", even in the face of experimental evidence which directly contradicts the predicted results of that law.

Which gives rise to "The Law of Research Funding", in which the amount of funding that can be attracted to support a given law is proportional to the amount of real world experimental evidence which contradicts that law. :)
