
Famous Laws of Software Development (2017) - pplonski86
https://www.timsommer.be/famous-laws-of-software-development/
======
redcodenl
Regarding Conway's Law:

We have found that by changing our software/system architecture we have also
inadvertently changed our organisation structure.

\- Inverse Conway Law or just Roy's Law ;-)

Before, we had four cross-functional teams working on a single application;
everyone felt responsible for it, worked overtime to fix bugs, etc. We had
good communication between the teams.

But after we switched to microservices the teams became responsible for just a
part of the system, their microservice(s). Whenever we had an outage, one team
was left to fix it, the others just went home. They stopped talking to each
other because they didn't share any code, no issues... they stopped having
lunch together, some things got way worse in the organisation, all sparked by
a 'simple' architectural change, moving to microservices.

~~~
BerislavLopac
> They stopped talking to each other because they didn't share any code, no
> issues... they stopped having lunch together, some things got way worse in
> the organisation, all sparked by a 'simple' architectural change, moving to
> microservices.

Honestly, this sounds like an improvement.

~~~
badfrog
How is people not talking to each other an improvement?

~~~
BerislavLopac
Oh, it is definitely a problem from a social/cultural standpoint. But from the
point of software architecture (and, therefore, development organisation), too
much (or even any) communication between teams working on discrete, separate
units can become detrimental.

It is perfectly fine for people to communicate, and even helping each other
improve tech skills should be encouraged; however, decisions about their
respective products should be contained within each team, with clearly defined
interfaces and usage documentation.

~~~
badfrog
> decisions about their respective products should be contained within each
> team, with clearly defined interfaces and usage documentation.

In order to make those decisions and define the interfaces, you need to know a
lot about how your software is going to be used. That will be much easier if
you have good communication with the other teams and understand their goals
and motivations.

~~~
BerislavLopac
I disagree. In my experience, direct coordination on interfaces tends to
create unnecessary special cases (hey, can you add this field to your API,
just for us?) which add complexity and make maintenance more difficult down
the line.

The main advantage of distributed systems, and particularly microservices, is
the ability to have each system completely independent: individual components
can be written in different languages, running on different platforms and use
completely independent internal components. Basically, it is just like using
an external library, component or service: the authors provide documentation
and interfaces, and you should be able to expect it to behave as advertised.

~~~
badfrog
> In my experience, direct coordination on interfaces tends to create
> unnecessary special cases (hey, can you add this field to your API, just for
> us?) which add complexity and make maintenance more difficult down the line.

If you just implement all requests directly, you're for sure going to end up
with a horrible interface. You should approach API design the same way that
UX/PM approaches UI and feature design: take the time to understand _why_ your
partner teams/engineers are requesting certain changes and figure out the
right interface to address their problems.

~~~
BerislavLopac
Oh absolutely, but direct communication between teams is not the right method
for that. Which is why every product, no matter how "micro" a service, needs
to have a dedicated product owner/manager who is responsible for defining
functional requirements.

Edit: I just noticed the "PM" in the parent comment. Basically, product
managers are not just for UI and customer-facing products.

~~~
jrochkind1
The idea that "functional requirements" can be decided independently from
"technical architecture", as opposed to in interplay with each other, is
exactly the opposite of what I've learned from good experiences with
"agility", although some people somehow seem to take away the opposite.

But yes, you can never just do what "the users" ask for. The best way to
understand what they need is to be in conversation with them. Silo'ing
everyone up isn't going to make the overall product -- which has to be
composed of all these sub-components working well with each other -- any
better.

~~~
BerislavLopac
> The idea that "functional requirements" can be decided independently from
> "technical architecture"

Oh it absolutely can; it's just that it usually is not a good idea. But I'm
not talking about the _process_ of reaching that decision, I'm talking about
the _responsibility_ to reach them. Functional and technical decisions are
separate, but in most cases should be defined in conjunction.

> Silo'ing everyone up isn't going to make the overall product

This is true for certain types of product, and less so for others; you need to
clearly understand what type of product you're building, and be ready to adapt
as it changes (or as you gain a better understanding of it). In a nutshell,
the more compartmentalised a product is, the more beneficial the isolation
between the teams becomes. Which brings us full circle back to Conway's Law.

~~~
jrochkind1
> But I'm not talking about the process of reaching that decision, I'm talking
> about the responsibility to reach them

Of course you're talking about the process. Your claims that "direct
communication between teams is not the right method for that," and
"communication between teams working on discrete, separate units can become
detrimental," for instance, are about process.

I don't think this conversation is going anywhere, but from my experience,
lack of communication between teams working on discrete, separate units (that
are expected to be composed to provide business value), can become
detrimental. And that's about process.

------
wffurr
Hyrum's law is highly relevant to anyone who makes software libraries.

"With a sufficient number of users of an API, it does not matter what you
promise in the contract: all observable behaviors of your system will be
depended on by somebody."

I.e. any internal implementation details that leak as behavior of the API
become part of the API. Cf. Microsoft's famous "bug for bug" API compatibility
through versions of Windows.

http://www.hyrumslaw.com
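A tiny sketch of how an implementation detail leaks into the de-facto contract (the function and names here are made up for illustration):

```python
def get_user_ids():
    """Documented contract: 'returns the user IDs' -- nothing about order."""
    # Internally backed by a dict; CPython dicts preserve insertion order,
    # an observable behavior the documentation never promised.
    users = {"alice": 1, "bob": 2, "carol": 3}
    return list(users.values())

# A caller quietly depending on the leaked ordering. Per Hyrum's law,
# swapping the internal dict for an unordered store is now a breaking
# change, even though the documented contract still holds.
first_signup_id = get_user_ids()[0]
assert first_signup_id == 1
```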

~~~
mlthoughts2018
They might become part of the API in a superficial sense, but if you broadcast
clearly that undocumented behaviors are subject to change, then users can
decide if they want to accept that risk and won’t have a valid complaint if
they want the not-covered-by-the-contractual-API preserved or become surprised
by a change.

~~~
badfrog
> if you broadcast clearly that undocumented behaviors are subject to change,
> then users can decide if they want to accept that risk

That sounds nice in theory, but doesn't really work in practice. If you're
building infra and a core piece of your company's product relies on these
undocumented behaviors, you can't just change the behavior and shrug your
shoulders when the whole product breaks. Similarly, if you're providing an
external API to users/customers, you can't just break their stuff without
worrying about it.

~~~
cm2187
I'd add, if the API is meant to implement a protocol but doesn't implement it
quite correctly, you may object to the misimplementation, but if your code has
to work with the implementation, you have to adapt to their bug. It's not even
a matter of undocumented behavior.

Experienced recently as a consumer of an API when letsencrypt made a breaking
change to implement the protocol correctly. Broke my code which relied on
their original incorrect implementation.

------
turingbook
There are some better lists for laws of Software Development:

[1] [http://www.globalnerdy.com/2007/07/18/laws-of-software-
devel...](http://www.globalnerdy.com/2007/07/18/laws-of-software-development/)

[2] [https://exceptionnotfound.net/fundamental-laws-of-
software-d...](https://exceptionnotfound.net/fundamental-laws-of-software-
development/) (not so good but with solid discussions from HN
[https://news.ycombinator.com/item?id=11574715](https://news.ycombinator.com/item?id=11574715)
)

And more:

[3] [https://embeddedartistry.com/blog/2018/8/13/timeless-laws-
so...](https://embeddedartistry.com/blog/2018/8/13/timeless-laws-software-
development) (a whole book titled _Timeless Laws of Software Development_ )

[4] [https://www.red-gate.com/simple-talk/opinion/opinion-
pieces/...](https://www.red-gate.com/simple-talk/opinion/opinion-pieces/some-
laws-of-software-development/)

[5] [https://www.netobjectives.com/blogs/some-laws-software-
devel...](https://www.netobjectives.com/blogs/some-laws-software-development)

[6]
[http://www.methodsandtools.com/archive/softwarelaws.php](http://www.methodsandtools.com/archive/softwarelaws.php)

------
tynpeddler
It's very common for Conway's law to be regarded as some kind of warning, as
if it's something to be "defended" against. It's not. Conway's law is the most
basic and important principle to creating software at scale. A better way of
stating Conway's law is that if you want to design a large, sophisticated
system, the first step is to design the organization that will implement the
system.

Organizations that are too isolated will tend to create monoliths.
Organizations that are too connected and too flat will tend to create
sprawling spaghetti systems. These two cases are not mutually exclusive. You
can have sprawling spaghetti monoliths. This is also one of the dangers to
having one team work on several microservices; those microservices will tend
to intermingle in inappropriately complex ways. Boundaries are critical to
system health, and boundaries can be tuned by organizing people. Don't worry
about Conway's law, leverage it.

------
BerislavLopac
There is an error in the ninety-ninety rule, which should be stated as:

    
    
        The first 90% of the code takes the first 90% of the time. The remaining 10% takes the other 90% of the time.

~~~
rubinelli
A personal rule of thumb I derived from the ninety-ninety rule is this:
"Before starting a project, ask yourself if you would still do it if you knew
it would cost twice as much and take twice as long as you expect. Because it
probably will."

~~~
lolive
Twice is a (reasonable) minimum.

~~~
Deganta
I see the 90% rule as a recursive function: First we get 90% of the whole work
in the first iteration, then 90% of the remaining code (now we are 99%
complete), then 99.9% and so on.

The iteration is stopped when the software has enough features and an
acceptable level of bugs to be considered complete. What counts as complete
depends entirely on the field of the software. For a proof of concept we can
stop after the first iteration, but for safety-critical software we might need
3, 4, or even more iterations.
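That recursive reading can be sketched in a few lines, with each iteration finishing 90% of whatever work remains:

```python
def completion_after(iterations):
    """Fraction of total work done after n passes, each pass
    completing 90% of the remaining work."""
    remaining = 1.0
    for _ in range(iterations):
        remaining *= 0.1
    return 1.0 - remaining

# One iteration: 90% done; two: 99%; three: 99.9% -- but never 100%.
for n in (1, 2, 3):
    print(n, completion_after(n))
```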

~~~
squirrelicus
I like this, it rings true in my experience.

------
throwaway2016a
Murphy's Law has electrical engineering roots. I have a fun anecdote.[0] My
wife is electromechanical and I'm computer science so we would work on
projects together since we make a good team. I remember in college I was
working with my wife on one of her projects and we were using force
transducers. The damn things kept breaking at the worst times so we kept
calling it Murphy's Law. After a while we looked it up. Turns out Murphy was
working with transducers when he coined the phrase [1]. So I have this little
back pocket anecdote about the time I got to use Murphy's Law in the original
context. Which I can bring out in times just like this.

[0] I think it is fun. Your mileage may vary.

[1]
[https://en.wikipedia.org/wiki/Murphy%27s_law](https://en.wikipedia.org/wiki/Murphy%27s_law)

------
hajile
Everyone always conveniently forgets Price's Law (derived from Lotka's Law).
It states that 50% of the work is done by the square root of the number of
employees.

Interestingly, Price's law seems to indicate 10x developers exist because if
you have 100 employees, then 10 of them do half of all the work.

This idea is particularly critical when it comes to things like layoffs. If
those top contributors get scared and leave, or are let go for whatever
reason, the business disproportionately suffers. Some economists believe that
this brain drain has been a primary cause of the death spiral of some large
companies.
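The arithmetic behind those numbers, as a quick sketch:

```python
import math

def price_law_top(headcount):
    """Per Price's law, sqrt(headcount) people do half the work."""
    return math.isqrt(headcount)  # exact integer sqrt for perfect squares

top = price_law_top(100)   # with 100 employees...
assert top == 10           # ...10 of them do 50% of the work

# Average share of total output: 5% each for the top group, versus
# roughly 0.56% each for the remaining 90 employees.
top_share = 0.5 / top
rest_share = 0.5 / (100 - top)
print(round(top_share / rest_share, 1))  # 9.0 -- about a 9x average gap
```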

~~~
firethief
> Interestingly, Price's law seems to indicate 10x developers exist because if
> you have 100 employees, then 10 of them do half of all the work.

Or 0.1x developers exist...

~~~
hajile
You have to completely abandon reality to get rid of the idea. Even if 60 of
the remaining 90 devs did NOTHING, those 10 devs would still be 3x as
productive as the 30 who were left working.

To take things further, make a bell curve chart. Put 50% of the area under the
top 10%. Now, divide up the rest however you'd like. The only way to make this
happen is for a huge percent to not only contribute zero, but to be actively
hurting development to an extreme degree.

I have never found a 100 person company where 60% or more of the company was
contributing absolutely nothing. I have never seen a company where a large
number of people were actively harming the company and the company survived.

~~~
firethief
My understanding of your model must be inaccurate somehow. Here's what I think
I'm hearing:

\- the distribution of productivity of devs in an organization of N * N devs
can be approximated as: N devs who are "Nx", and the rest of the devs are "1x"
(Price's Law, assuming a binary distribution for simplicity)

\- the value of "x" is constant for all sizes of organization (if it were
relative "some are 0.1x" would be a change of units, not an abandonment of
reality)

This would yield the extremely surprising result that the total dev production
of an organization scales quadratically with the number of devs, so what am I
misunderstanding?

~~~
hajile
It actually looks more like an exponential curve with 50% of the area under
the curve fitting in the last few devs. If we normalized the "flat" side of
the curve to be a 1x dev, then we probably have 80 1x devs, 5 2-3x devs, 5 4-6x
devs and 10 8-9x devs.

Rather than quadratic scaling, we're dealing with scaling by square root.
This actually meshes very well with the "mythical man-month".

If we almost double from 100 devs to 196 devs, we only go from 10 to 14 devs
doing half the work.

We've already accepted that 10 devs were doing half the work of 100 devs.
We've also accepted that those devs must be giving it their all. So we double
the devs, but gain only 4 new people in the group doing half the work. Either
we have some new 20x devs or the actual amount of work hasn't increased at the
same rate.

I would still say that is probably incorrect though. The "mythical man-month"
doesn't apply to total work done -- only to total _useful_ work done. As the
social complexities increase, the ratio of useful work decreases, but those
top developers will still have to carry both increases (to at least some
degree) in order to still be doing half the work.

I suspect that as the social overhead increases, you should see three
interesting cases. Some devs can deal with the social overhead more quickly,
so they have more real work time to compensate for being slower at it
(potentially bumping a 5x dev with better social strategies higher). You could
see the opposite, where a 10x dev simply loses all their time in meetings. You
could also see a 1x dev with better social strategies handle most of a 10x
dev's social workload so that dev can instead focus on coding (it's rare, but
I've worked on teams with 1-2 devs who did little except keep the team
productive by fending off the bureaucracy).

------
gumby
I have always liked Postel's law (and Jon -- what a great human being he was)
but I no longer like it as I used to.

The reason it's a really great idea is that it says you should engineer in
favor of resilience, which is an important form of robustness. At the same
time, "strict in what you send" means "don't cause trouble for others."

However there are cases where "fail early" is more likely to be the right
thing. Here are a few:

1 - Backward compatibility can bite you in the leg. For example, USB Type C
(which I love!) can support very high transfer rates but when it can't it will
silently fall back. So you could have a 40 Gbps drive connected to a 40 Gbps
port on a computer via a cable that only supports USB 2 speeds. It will "work"
but maybe not as intended. Is this good, or should it have failed to work (or
alerted the user to make a choice) so that the user can go find a better
cable?

2 - DWIM is inherently unstable. For users that might be fine (they can see
the result and retry) or it might be terrible ("crap, I didn't mean to destroy
the whole filesystem").

I see these problems all the time in our own code base where someone generates
some invalidly-formatted traffic which is interpreted one way by their code
and a different way by someone else's. Our system is written in at least four
languages. We'd be better off being more strict, but some of the languages
(Python, Javascript) are liberal in both what they accept _and_ generate.

This aphorism/law was written for the network back when we wrote all the
protocol handlers by hand. Now that we have so many structured tools and
layered protocols, it is much less necessary.

~~~
jacques_chester
"The Harmful Consequences of the Robustness Principle" is a good read:
[https://tools.ietf.org/html/draft-iab-protocol-
maintenance-0...](https://tools.ietf.org/html/draft-iab-protocol-
maintenance-01)

------
davidkuhta
I feel like Fonzie's Law would be a worthwhile inclusion: "The best way to get
the right answer on the internet is not to ask a question; it's to post the
wrong answer."

~~~
aitchnyu
I refuse to fall for that bait.

~~~
davidkuhta
Remember that all is opinion. For what was said by the Cynic Monimus is
manifest: and manifest too is the use of what was said, if a man receives what
may be got out of it as far as it is true.

------
gpvos
Zawinski's law of software envelopment:

 _Every program attempts to expand until it can read mail. Those programs
which cannot so expand are replaced by ones which can._

~~~
ryandrake
The modern angle on this rule is that "Every program eventually adds text
chat, and they're all incompatible with each other."

~~~
wffurr
Isn't mail just another form of text chat?

~~~
mcv
Yes. That makes it the more general form of the original law.

~~~
ryandrake
The _incompatibility_ is the key feature that differentiates modern text chat
from those inferior mail applications.

------
kgwgk

      Some people, when confronted with a problem,
      think “I know, I'll use regular expressions.”   
      Now they have two problems.
    

(Originally with "sed" instead of "regular expressions")

~~~
joeblau

        Some programmers, when confronted with a problem, think 
        "I know, I'll solve it with threads!"
        have Now problems. two they

~~~
BerislavLopac
There are only two hard problems in distributed systems:

    
    
        2. Exactly-once delivery
        1. Guaranteed order of messages
        2. Exactly-once delivery
    

Source:
[https://twitter.com/mathiasverraes/status/632260618599403520](https://twitter.com/mathiasverraes/status/632260618599403520)

~~~
noir_lord
I like.

> There are only two problems in computer science, naming things, cache
> invalidation and off by one errors.

I've got to say modern languages with foreach() have been amazing (makes me
feel old when I consider a 20 year old widely used language 'modern').

------
jatsign
Don't forget Atwood's law:

"Any application that can be written in JavaScript, will eventually be written
in JavaScript."

~~~
BerislavLopac
Or _Greenspun 's Tenth Rule of Programming_:

    
    
        Any sufficiently complicated C or Fortran program contains an ad-hoc,
        informally-specified, bug-ridden, slow implementation of half of Common Lisp.

~~~
neilk
I encountered a literal version of this the other day.

I’ve been looking at some disused AI systems, which were all written in Lisp
back in the day.

In an attempt to remain relevant, at one point in the early 2000s someone
tried porting one of them to Java. By first writing a Lisp interpreter in
early 2000s Java. So the system had all the old dynamic Lisp programs as giant
strings, embedded in a fugly static class hierarchy.

------
billfruit
This is a list of rather generic catchphrases. I think the article isn't
worth the time; I'm surprised to find it at the top of HN.

~~~
Sahhaese
Yes, this doesn't strike me as good quality content at all. But it's a large
list which lets everyone pick something and chip in which drives engagement.

If this sort of content is the sort that this community increasingly selects
for, then it is perhaps time to look for fresh pastures. (I don't, however,
know if this is indicative of HN's current community or just an 'accident' \-
I'm sure there have always been examples of poor quality near the top at
times.)

~~~
billfruit
I think it's not just this one; a few of the top posts are of a similar vein
at the moment. Seems a bit off-colour, as the quality of content is usually
rather consistent.

------
sorahn
There is an entire poster of funny 'laws of computing' that was created in
1980 by Kenneth Grooms. It's pretty amazing how many of these are completely
relevant 40 years later...

It's hard to find the original piece of art, but my uncle had this hanging in
his office for a long time, and now it's hanging in mine.

I transcribed it in a gist so I had access to them for copy/paste.

[https://gist.github.com/sorahn/905f67acf00d6f2aa69e74a39de65...](https://gist.github.com/sorahn/905f67acf00d6f2aa69e74a39de65941)

(Those pictures were from an ebay auction before I got the actual piece)

~~~
dllthomas
> program complexity grows until it exceeds the capability of the programmer
> to maintain it.

... then it grows even faster.

------
jrochkind1
Postel's law, "be conservative in what you send, be liberal in what you
accept", is definitely not "a uniter"!

[https://tools.ietf.org/html/draft-thomson-postel-was-
wrong-0...](https://tools.ietf.org/html/draft-thomson-postel-was-wrong-00)

~~~
aeternus
Yes, especially when you consider Hyrum's law.

------
JackFr
Quick and dirty is rarely quick and always dirty.

(Don't know if it has a name)

~~~
tabtab
You can have quick-and-dirty for initial release, but it's rarely practical
from a maintenance perspective.

A related rule: Design software for maintenance, not initial roll-out, because
maintenance is where most of the cost will likely be.

An exception may be a start-up where being first to market is of utmost
importance.

Other rules:

Don't repeat yourself: factor out redundancy. However, redundancy is usually
better than the wrong abstraction, which often happens because the future is
harder to predict than most realize.

And Yagni: You Ain't Gonna Need It: Don't add features you don't yet need.
However, make the design with an eye on likely needs. For example, if there's
an 80% probability of a need for Feature X, make your code "friendly" to X if
it's not much change versus no preparation. Maybe there's a more succinct way
to say this.

------
_ah
1\. All software can be simplified. 2\. All software has bugs. Therefore, all
software can ultimately be simplified down to a single line that doesn't work.

------
dgacmu
Page author, if you read this: Fred Brooks' last name has an s (Brooks, not
Brook). It should be Brooks' law.

~~~
xpil
Wouldn't it be "Brooks's" rather than "Brooks'"?

From what I know, the "*s'" thing works mostly for plural nouns. For singular,
it only applies to classical & religious names ending with "s" ("Jesus'",
"Archimedes'" etc).

I am not a native English speaker, so I may be completely off. Feel free to rage :)

~~~
pier25
Brooks' ?

~~~
xpil
This is exactly what I am trying to establish to improve my bumpy English. My
best guess is that the correct form is "Brooks's" because (1) "Brooks" is a
singular noun ending with an "s" and (2) it is neither a classical nor a
religious name. If you claim it should be "Brooks'" I am OK with that, as long
as you give me a sensible explanation.

~~~
yathern
There's not exactly a consensus these days on what is correct. Either is
valid, but I generally prefer _Brooks' Law_ to _Brooks's Law_ since it looks
cleaner. Of course, _Brook's Law_ is incorrect, as there is no "Brook".

Here's an example of the lack of consensus:

Either is acceptable:
[https://data.grammarbook.com/blog/apostrophes/apostrophes-
wi...](https://data.grammarbook.com/blog/apostrophes/apostrophes-with-names-
ending-in-s-ch-or-z/)
[https://owl.purdue.edu/owl/general_writing/punctuation/apost...](https://owl.purdue.edu/owl/general_writing/punctuation/apostrophe_introduction.html)

Chicago vs AP style: [https://apvschicago.com/2011/06/apostrophe-s-vs-
apostrophe-f...](https://apvschicago.com/2011/06/apostrophe-s-vs-apostrophe-
forming.html)

APA style suggests appending the extra 's':
[https://blog.apastyle.org/apastyle/2013/06/forming-
possessiv...](https://blog.apastyle.org/apastyle/2013/06/forming-possessives-
with-singular-names.html)

~~~
xpil
So, the usual clusterf*k of opinions instead of a clear spec. People should be
speaking SQL.

Thanks for the links. Plenty of educational value there!

~~~
dllthomas
> So, the usual clusterf*k of opinions instead of a clear spec. People should
> be speaking SQL.

Because that would be an improvement, or not much of a change?

~~~
xpil
Sorry, forgot to set the Sarcasm New Roman font again!

~~~
dllthomas
Honestly, it's more amusing for the ambiguity :)

------
_hardwaregeek
I like Wiggin's Law (found in [My Heroku
Values](https://gist.github.com/adamwiggins/5687294)): If it's hard, cut
scope. I'm working on a compiler for my new language and
sometimes I get caught up in the sheer amount of work involved in implementing
a new language. I mean, I have to write a typechecker, a code generator, a
runtime (including GC), a stdlib, etc. But instead of just getting
overwhelmed, I'm trying to cut scope and just focus on getting a small part
working. Even if the code is terrible, even if it's limited in functionality,
I just need to get something working.

------
ddebernardy
I'm not sure who they should be named after, but I'd like to suggest two more:

> Redundancy is bad, but dependencies are worse.

[https://yosefk.com/blog/redundancy-vs-dependencies-which-
is-...](https://yosefk.com/blog/redundancy-vs-dependencies-which-is-
worse.html)

> Always code as if the guy who ends up maintaining your code will be a
> violent psychopath who knows where you live.

[https://stackoverflow.com/questions/876089/who-wrote-this-
pr...](https://stackoverflow.com/questions/876089/who-wrote-this-programing-
saying-always-code-as-if-the-guy-who-ends-up-maintai)

------
JackFr
______ is like violence, if it's not solving your problem, you're not using
enough of it.

(I first heard that for XML, and since have heard it for others. Was very
funny for XML though. I also know it's not really a law.)

~~~
sagartewari01
Communication?

------
composer
ReRe's Law of _Re_ petition and _Re_ dundancy [2] seems appropriate here:

    
    
      A programmer can accurately estimate the schedule for only the repeated and the redundant. Yet,
    
      A programmer's job is to automate the repeated and the redundant. Thus,
    
      A programmer delivering to an estimated or predictable schedule is...
    
      Not doing their job (or is redundant).
    

[2]
[https://news.ycombinator.com/item?id=12150889](https://news.ycombinator.com/item?id=12150889)

------
hokumguru
Last week I was trying to remember a term for when a programmer designs a
system so generic that it becomes a prototype of itself. For the life of me, I
can't remember - anyone here on HN know?

~~~
Sahhaese
That's the inner platform effect [https://en.wikipedia.org/wiki/Inner-
platform_effect](https://en.wikipedia.org/wiki/Inner-platform_effect)

------
sneak
Postel’s law is lately considered harmful, and Linus’ law has been disproven
many times (e.g. goto fail, but also in the Linux kernel).

------
coleca
Biggest one missing from the list, in my opinion, is Vogels' Law:

"Everything breaks, all the time" \- Dr. Werner Vogels, CTO of Amazon.com

~~~
BerislavLopac
Or, alternatively, Norton's law: "Everything is broken."
[https://medium.com/message/everything-is-
broken-81e5f33a24e1](https://medium.com/message/everything-is-
broken-81e5f33a24e1)

------
Const-me
Am I the only person here thinking that many of them are just anecdotes, or
are deprecated?

> Given enough eyeballs, all bugs are shallow.

Just count of viewers doesn't help. The owners of these eyeballs need both
motivation to look for these bugs, and expertise to find them.

> The power of computers per unit cost doubles every 24 months.

Slowed down years ago.

> Software gets slower faster than hardware gets faster.

It doesn't. If you benchmark new software on new PCs versus old software on
old PCs processing the same amount of data, you'll find that the new one is
faster by orders of magnitude.

Input to screen latency might be 1-2 frames slower, because USB, GPU, HDMI,
LCD indeed have more latency compared to COM ports, VGA, and CRT. But
throughput is way better.

> Premature optimization is the root of all evil.

Was probably true while the power of computers doubled every 24 months. It
doesn't any more.

------
CalChris
Joy’s Law: most of the smartest people work for someone else.

~~~
carlmr
If your company doesn't hire more than 50% of all developers in the world at
random, then this is probably true.

------
klodolph
I would say Postel's Law, "Be conservative in what you send, be liberal in
what you accept," should be tempered a bit. Sometimes it makes sense to be a
bit more liberal with what you send (to make sure that consumers can handle
errors) and more strict with what you accept (to make sure that consumers
aren't relying too much on undocumented behavior).

For example, if you have a service with near 100% uptime, any other service
which relies on it may not be able to handle errors or unavailability.
Introducing errors in a controlled way can help make the dependencies more
reliable.

As another example, being liberal about what you accept can sometimes result
in security flaws, since different systems might be interpreting a message
differently. Rejecting malformed input can be a beautiful thing.

~~~
marcosdumay
Postel's Law is about handling standard protocols.

If you control all the clients and servers using a protocol, it does not apply
to you. You're better off being as strict as possible.

~~~
klodolph
I know what Postel’s law is about; the argument stands. Postel said that in
1989 and our thinking about protocols has changed a bit since then. If you’re
implementing a standard protocol like HTTP or TLS, and you are liberal in what
you accept, this can cause security problems or other unintended behavior. For
example, consider a proxy or gateway that interprets a request differently
from an authoritative server. Suppose a nonstandard request is handled
differently by each component. Ideally, one of the responses is, "this request
is malformed, reject it". If each component handles the same request
differently but without rejecting the request, you are quite possibly in a
world of hurt.

More concrete example: suppose that an incoming HTTP request contains CRLFLF.
To the proxy, “liberal in what you accept” might mean that this is interpreted
as CRLF CRLF, which is the end of the request header. To the authoritative
server, perhaps the second LF is silently discarded and ignored. Voilà:
instant cache poisoning attack.

------
vram22
Murphy's Law:

[https://en.wikipedia.org/wiki/Murphy%27s_law](https://en.wikipedia.org/wiki/Murphy%27s_law)

and

Muphry's Law:

[https://en.wikipedia.org/wiki/Muphry%27s_law](https://en.wikipedia.org/wiki/Muphry%27s_law)

------
sporkland
I would add Greenspun's tenth rule (law)[1]:

 _Any sufficiently complicated C or Fortran program contains an ad-hoc,
informally-specified, bug-ridden, slow implementation of half of Common Lisp._

And Benford's law of controversy [2], which I see around monorepo vs
polyrepo, language choices, tabs vs spaces, etc.:

 _Passion is inversely proportional to the amount of real information
available._

[1]
[https://en.wikipedia.org/wiki/Greenspun%27s_tenth_rule](https://en.wikipedia.org/wiki/Greenspun%27s_tenth_rule)
[2]
[https://en.wikipedia.org/wiki/Gregory_Benford](https://en.wikipedia.org/wiki/Gregory_Benford)

------
theandrewbailey
Finagle's law: "Anything that can go wrong, will, _at the worst possible
moment_."

[https://en.wikipedia.org/wiki/Finagle%27s_law](https://en.wikipedia.org/wiki/Finagle%27s_law)

------
mojuba
Oh please, not Knuth's "principle" again. Optimization is a skill; it's not
evil. A skilled engineer can build sufficiently good systems without wasting
much extra time on optimization.

~~~
lojack
His full quote is a little less prone to abuse

> Programmers waste enormous amounts of time thinking about, or worrying
> about, the speed of noncritical parts of their programs, and these attempts
> at efficiency actually have a strong negative impact when debugging and
> maintenance are considered. We should forget about small efficiencies, say
> about 97% of the time: premature optimization is the root of all evil. Yet
> we should not pass up our opportunities in that critical 3%.

Premature optimization isn't bad; premature micro-optimization is bad. You
should also be thinking about optimizations that result in better
architecture, and architectural decisions that make it easier to optimize in
the future.

~~~
mr_tristan
Yeah, I get frustrated that few people actually post the full quote, because,
with the context, it means something completely different to young ears.

The full quote makes me think: "You should identify the critical paths of your
system early." The shortened quote makes me think: "Deal with performance
later."

Pretty big difference in meaning.

~~~
hermitdev
Personally, I think it's more about balancing trade-offs. You need to have
some semblance of your target performance needs. Bad architecture can be hard
to overcome later.

Most decisions are small, but can have compounding effects. One should also
avoid premature pessimization. No one in their right mind would use bubble
sort over quick sort, for instance (not saying quick sort is the best
algorithm, but it's better than bubble sort). One pet peeve I have in C++ is
when I see people initializing a std::string with an empty string literal
instead of using the default constructor. The default constructor is usually
a memset or the initialization of 3 pointers. Initializing with an empty
literal can involve calls to strlen, malloc and strcpy. I've yet to see a
compiler optimize this. It may not seem like a big deal, but considering that
one of the most frequently used data types is a string, it adds up. Most of
the applications I've worked on show std::string methods and constructors as
hotspots when profiled (back office financial systems).

I agree one should avoid premature micro-optimization, but you can also avoid
premature pessimization.

------
Balgair
DevOps Borat is a wonderful, if hardly understandable, account of related
'rules' and aphorisms. Sadly, it is no longer updated:
[https://twitter.com/devops_borat?lang=en](https://twitter.com/devops_borat?lang=en)

Some choice tweets:

Cloud is not ready for enterprise if is not integrate with single server
running Active Directory.

Fact of useless: 90% of data is not exist 2 year ago. Fact of use: 90% of data
is useless.

In devops we have best minds of generation are deal with flaky VPN client.

For increase revenue we are introduce paywall between dev team and ops team.

------
Cpoll
Kind of unrelated to the article, but does the Moore's law joke about the cat
constant make any sense? The `-C` constant should be on both sides of the
equation (since the formula is future computation _relative to_ current
computation), and thus cancel out. As it stands, the equation doesn't make
sense when 'Number of years' is zero, and is inconsistent between calculating
twice in 2-year intervals and calculating once with a 4-year interval (as an
example).

------
Dangeranger
Wouldn't the following be an embrace of Conway's Law rather than a defense
against it?

> It is much better, and more and more implemented as such, to deploy teams
> around a bounded context. Architectures such as microservices structure
> their teams around service boundaries rather than siloed technical
> architecture partitions.

> So, structure teams to look like your target architecture, and it will be
> easier to achieve it. That's how you defend against Conway's law.

------
mamon
Interesting read, although at this point "Given enough eyeballs, all bugs are
shallow" should be regarded as a fallacy, not a law, because:

\- no one reads open source code

\- those who read it do not understand it

\- those who understand it don't file bug reports

\- those who file bug reports file them for their own issues, stemming from
misunderstanding/misapplication of the software, not actual bugs.

------
andrelaszlo
I expected to see Lehman's laws[0] in there too, but maybe they are not famous
enough. Maybe they don't deserve to be, but I think they are relevant
observations.

[0]: [http://wiki.c2.com/?LehmansLaws](http://wiki.c2.com/?LehmansLaws)

------
worik
* An organisation's requirements for data processing are a function of the organisation's data processing capabilities, and always greater

* All software has bugs, no software is inefficient

* A programmer's work is never done

------
dahart
Moore’s law is dead!

Also, I think Murphy’s law should be removed; it’s less true than the other
laws here.

I read a fantastic article many years ago in the Atlantic where the author was
analyzing and deconstructing an airplane crash, and in it was a paragraph
about how Murphy’s law is completely backwards, and in reality if things can
go right, then they will. Things will almost always go right unless there’s no
possible way they can, in other words only the extremely rare alignment of
multiple mistakes causes catastrophes. Can’t remember if the author had a name
for the alternative Murphy’s law, but I believe it, especially in software. We
get away with crappy software and bugs & mistakes all over the place.

~~~
0_gravitas
I think people interpret Murphy's law incorrectly most of the time.

We can extrapolate from "Anything bad that can happen, will happen", and get
the statement: "If something can physically happen, given enough time, it
_will_ eventually happen."

I like to think it's sort of a very tangential sister idea of the mediocrity
principle.

~~~
dahart
I'm not sure I understand what you think is incorrect; your explanation seems
to align with the common interpretation.

Here's the article I was thinking of. Totally worth the read, aside from
discussion of Murphy's Law...

[https://www.theatlantic.com/magazine/archive/1998/03/the-
les...](https://www.theatlantic.com/magazine/archive/1998/03/the-lessons-of-
valujet-592/306534/)

"Keep in mind that it is also competitive, and that if one of its purposes is
to make money, the other is to move the public through thin air cheaply and at
high speed. Safety is never first, and it never will be, but for obvious
reasons it is a necessary part of the venture. Risk is a part too, but on the
everyday level of practical compromises and small decisions—the building
blocks of this ambitious enterprise—the view of risk is usually obscured. The
people involved do not consciously trade safety for money or convenience, but
they inevitably make a lot of bad little choices. They get away with those
choices because, as Perrow says, Murphy's Law is wrong—what can go wrong
usually goes right. But then one day a few of the bad little choices come
together, and circumstances take an airplane down. Who, then, is really to
blame?"

Of course, regardless of which way you interpret Murphy's law, the law itself
and this alternative are both hyperbolic exaggerations. The main question is
more of which way of looking at it is more useful.

In terms of thinking about safety, it seems like both points of view have
something important to say about why paying attention to unlikely events is
critical.

~~~
0_gravitas
I suppose what I generally mean is that most of the people that I've talked to
only consider it within the scope of "what can go _wrong_ ", and seem to never
consider the more general statement. I'm certainly not claiming to be the
first person to think such a way, if that's the impression I gave off.

Murphy's law is a favorite of mine because it's the perfect diving board for
conversations about infinite probabilities and aliens and simulation stuff.

~~~
dahart
I guess I still don't know exactly what the more general statement is you're
referring to. Do you mean just that a non-zero probability of a single event
happening equals 100% probability given a large enough sample of events (which
may take a large amount of time)?

I feel like Murphy's law as stated captures that idea adequately. And it's
certainly true if the event probability really is non-zero. Sometimes, though,
we can calculate event probabilities that are apparently non-zero based on
known information, but are zero in reality.

One example in my head is quantum tunneling. Maybe this is along the lines
you're talking about? And this is the way my physics TA described it many
years ago, but caveat I'm not a physicist and I suspect there are some
problems with this analogy. He said you can calculate the probability of an
atom spontaneously appearing on the other side of a solid wall, and you can
calculate the same (less likely) probability of two atoms going together,
therefore there is a non-zero probability that a human can teleport whole
through the wall. The odds are too small to expect to ever see it, but on the
other hand, with the amount of matter in the universe we should expect to see
small scale examples somewhat often, and we don't. There may be unknown
reasons that the probability of an event is zero.

~~~
0_gravitas
It looks like we agree on all points, yes

------
enriquto
does it work for you? The site only shows a beating gray circle.

~~~
mrob
As is often the case with these kinds of abuses of JavaScript, Firefox's
Reader View solves the problem.

~~~
enriquto
It seems to be a strange interaction between css and javascript. Using
umatrix, I can read the text with both css and javascript disabled, but not
with only javascript disabled. This is the first time I encounter this curious
behavior.

------
AzzieElbab
My law: convenience topples correctness. Evidence: programmers have been
proven incapable of quoting Knuth's optimization principle correctly and in
full.

------
ComodoHacker
Could someone explain the last one, Norvig's Law:

"Any technology that surpasses 50% penetration will never double again (in any
number of months)."

~~~
jnty
Double 50% is 100%, so assuming your market doesn't significantly grow, it's
impossible to double your market share again.

------
b0rsuk
"Any programmer can be replaced with a finite number of interns" \- Janusz
Filipiak, the biggest shareholder of Comarch, Poland.

------
DGAP
My personal law from working in software consulting is "triple the estimate."

------
MattyRad
The Peter Principle is also referred to as Putt's Law
([https://en.m.wikipedia.org/wiki/Putt%27s_Law_and_the_Success...](https://en.m.wikipedia.org/wiki/Putt%27s_Law_and_the_Successful_Technocrat))
and phrased slightly differently.

~~~
yesenadam
Putt's Law seems totally different.

* Putt's Law: "Technology is dominated by two types of people, those who understand what they do not manage and those who manage what they do not understand."

* Putt's Corollary: "Every technical hierarchy, in time, develops a competence inversion." with incompetence being "flushed out of the lower levels" of a technocratic hierarchy, ensuring that technically competent people remain directly in charge of the actual technology while those without technical competence move into management.

In the Peter model, everyone gets (or tends to get) promoted until they reach
a job they can't do, and they stay there. Thus everyone will (tend to) be
incompetent. In Putt's model, the technically incompetent get promoted, and
those at lower levels are competent.

Putt's does sound more like the way the world works... maybe. Peter's has
always sounded convincing to me, yet the world evidently isn't as bad as
that.

------
dccoolgai
Missing: Goodhart's Law

------
Gipetto
Is there a law about the probability that an article about programming will
reference XKCD?

------
thomasjudge
A partner to the Peter Principle, particularly with respect to managers, is
the Dunning-Kruger effect: "In the field of psychology, the Dunning–Kruger
effect is a cognitive bias in which people of low ability have illusory
superiority and mistakenly assess their cognitive ability as greater than it
is."

