
Gall's Law - mpweiher
https://en.wikipedia.org/wiki/John_Gall_(author)#Gall.27s_law
======
hirundo
> A complex system that works is invariably found to have evolved from a
> simple system that worked. A complex system designed from scratch never
> works and cannot be patched up to make it work. You have to start over with
> a working simple system. – John Gall

While this sounds more like a strong tendency than a law, it sure rings true
for my programming career. I've built complex systems from scratch, but they
all started from the universe of discourse of an existing business model. When
writing such systems I've learned to always start from a dead simple version
of it (I called one such system "Moe" to purposely keep it stupid simple) and
then increment it.

If the creationists are correct, and God did indeed cook up Adam from scratch,
that would be more evidence of the exceptional power of God. What we do know
about actual evolution seems to support Gall's Law.

~~~
Barrin92
>What we do know about actual evolution seems to support Gall's Law.

But human design is not evolution. That Gall's law, which is a form of
incrementalism, holds for systems without a designer appears almost true by
definition, but it is far from obvious for human design.

We have many examples of human architecture that went from "0 to 1": the von
Neumann architecture, the first spaceships, new breakthrough theories in
mathematics or physics. While you can debate whether those systems were built
literally from scratch, they sure made qualitative jumps not comparable to
evolution.

I would also point out that there is a strong survivorship bias in Gall's
law. All sustainable complex systems built up incrementally that are still
around are by definition examples of success. But we don't see the resource
cost of incrementalist dead ends, or their limitations ("You don't get to the
moon by climbing up trees"), whereas every failure of ambitious design is
actively debated, or even derided.

~~~
infinity0
> The von Neumann architecture

How do you get that? There were decades of prior experience in both computer
science theory and practice, especially from the code breakers in WW2.

~~~
daniel-cussen
A decade of experience at best.

An example from 1945, also from von Neumann, is Mergesort, the first O(n
log n) sorting algorithm, still in use to this day. Von Neumann was known
for inventing things from absolute scratch.
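The classic top-down form of that algorithm can be sketched in a few lines of Python:

```python
def merge_sort(items):
    """Sort a list in O(n log n) by recursively splitting and merging."""
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])
    # Merge the two sorted halves back together.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 1, 4, 1, 3]))  # → [1, 1, 3, 4, 5]
```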

------
stcredzero
Gall's Law maps to building "The simplest thing that could possibly work."

However, one can work at an even smaller granularity. "The simplest thing that
could possibly work," can be made out of stupid things that won't work. To
build the networking for an MMO, I cookbooked a websockets chat demo, then
used it to pass updates from the server to the client. No login mechanism. No
security. No dead reckoning. No fancy synchronization. All those things can be
added later, however. It will be easier to add them to a running system.
(Provided one also refactors to clean up code.)
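The shape of that starting point can be sketched with an in-memory queue standing in for the websocket (all names here are hypothetical, not the actual MMO code):

```python
import queue

# A deliberately stupid-simple stand-in for the real transport: an in-memory
# queue plays the role of the websocket. No login, no security, no dead
# reckoning -- just state updates flowing from server to client.
channel = queue.Queue()

def server_broadcast(entity_id, x, y):
    """Server side: push a raw position update. No auth, no validation."""
    channel.put({"id": entity_id, "x": x, "y": y})

def client_poll(world):
    """Client side: drain pending updates into the local world state."""
    while not channel.empty():
        update = channel.get()
        world[update["id"]] = (update["x"], update["y"])
    return world

world = {}
server_broadcast("player1", 10, 20)
server_broadcast("player1", 11, 20)
print(client_poll(world))  # → {'player1': (11, 20)}
```

Once this runs end to end, the queue can be swapped for a real websocket without changing the update-passing shape.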

If you're stuck at how to proceed, feel free to stub out features in a way
which can't possibly work in production, but which will let you compile, run,
and test your system. It's always easier for me to modify a running system
than it is to big-bang an entire system from scratch.
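For example, stubs like these could never ship, but they let the one piece of real logic run and be tested (every name here is hypothetical):

```python
# Hypothetical stubs: none of these could go to production, but they let the
# system compile, run, and be tested end to end while real versions are built.
def authenticate(username, password):
    return True  # stub: everyone is logged in

def load_player_state(player_id):
    return {"x": 0, "y": 0, "hp": 100}  # stub: every player starts fresh

def save_player_state(player_id, state):
    pass  # stub: persistence comes later

def game_tick(player_id):
    state = load_player_state(player_id)
    state["x"] += 1  # the one piece of real logic under test
    save_player_state(player_id, state)
    return state

print(game_tick("alice"))  # → {'x': 1, 'y': 0, 'hp': 100}
```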

~~~
neonate
Gall says nothing about the _simplest_ thing, only that complex systems grow
out of simple small ones. I mention this not to be a stickler, but to
highlight the interesting possibility that "the simplest thing that could
possibly work" might _not_ actually (or always) be something that a complex
later system can evolve out of. We don't know that. I'm reminded of a comment
by Alan Kay about how you don't want the lowest stratum of a system to be too
simple.

~~~
devmunchies
>Gall says nothing about the simplest thing, only that _SUCCESSFUL_, complex
systems grow out of simple small ones

Fixed. Missing a key word. The whole point of the law is that complex systems
can be built from scratch but will fail. The successful ones grew organically.

~~~
stcredzero
Weapon system designers sometimes observe that an older system on its nth
iteration can be better in the field than the revolutionary new ground-up
design. (Until the latter gets to its nth iteration.)

------
davidu
Related: Second System Effect

[https://en.wikipedia.org/wiki/Second-system_effect](https://en.wikipedia.org/wiki/Second-system_effect)

As an aside, I'd love for the Twitter engineering teams that fixed the fail
whale to one day write about this as it relates to fixing Twitter's stability
and performance. It seems they simultaneously made major platform shifts and
managed to fix things. Maybe it was done piece by piece, and perhaps an
SOA/microservices architecture enabled that to feel like a rewrite without
being one all at once. I'd love to hear the story. My understanding is that in
addition to the platform shifts, the actual engineering teams cycled out
almost completely at least once, too. Really hard changes to manage through.

~~~
milquetoastaf
I remember seeing the fail whale constantly until around 2010, and honestly,
as an almost daily user of Twitter, I don't think I've seen it since; the site
is one of the most stable I visit (via mobile.twitter.com).

Reddit could definitely use a lesson from them.

------
cr0sh
I can understand the idea of starting from a simple spot, and
iterating/evolving a system to more complexity.

But where does - or can - security fit into this?

That is, what I've consistently found over my years developing is that when
making a "secure system" - i.e., implementing some kind of permissions-based
system on top of a login/password or other authorization scheme, for access
controls and such...

...doing so after the fact (i.e., bandaid-ing it) - when the system has
already become complex enough to "suddenly" need it - usually ends up being a
very poor implementation with tons of holes. It also tends to be very grueling
to implement, and lots of refactoring tends to be needed to accommodate the
changes.

A similar kind of issue can also be found in the hardware world, specifically
automated/robotic systems where you need to implement and think about certain
safety measures, cutoffs, big-red-button-full-stop measures. Doing so after
the fact can lead to missed edge cases or other issues that can make the
system less robust and/or less safe - but only in retrospect after something
fails to work from a safety perspective (and it is compounded by the fact that
such a system will have both physical hardware and electronics coupled with
software).

From that standpoint and those personal experiences, I've always tried to
stress for new projects that such parts of the system, if needed or thought to
be possibly necessary in the future (which is pretty much guaranteed for all
but the simplest examples), should be designed and implemented up front, as
the first part of the project. Get it in place and get it solidly built, to
the best of your knowledge, then build the rest of the project around those
security/safety systems.

However, such systems can tend to be or become extremely complex in their own
right.

Am I wrong in this assessment? Should I not be advocating for such things for
new designs? How do you balance this "law" with such needs? Stub all the
things?

~~~
AgentME
Gall's Law applies to the security part of a project too. A complex security
subsystem needs to evolve from a simpler security subsystem. Waiting until the
project as a whole is super complex and needs a complex security subsystem
built from scratch is asking for failure.
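A minimal sketch of what such a simple starting subsystem might look like (the names and hard-coded permission table are invented for illustration):

```python
# A deliberately minimal permission check: one choke point the rest of the
# code calls. It starts as a hard-coded table, but because every access goes
# through authorize(), it can later evolve into roles, ACLs, or an external
# policy engine without touching the call sites.
PERMISSIONS = {
    "alice": {"read", "write"},
    "bob": {"read"},
}

def authorize(user, action):
    return action in PERMISSIONS.get(user, set())

def delete_record(user, record_id):
    if not authorize(user, "write"):
        raise PermissionError(f"{user} may not write")
    return f"deleted {record_id}"

print(delete_record("alice", 42))  # → deleted 42
```

The point is not the table but the single seam: the simple subsystem exists from day one, so complexity can accrete behind a stable interface.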

------
snazz
This may have been one of the strongest reasons for Google Wave’s failure[0],
which was alluded to many times on HN. It was too complex from the start (1M
LoC), did not evolve very fast at all, started with a huge number of
engineers, and was prematurely optimized (but still plagued with performance
issues). As cool as Wave was from the start, Gall’s Law certainly applies to
it.

[0]:
[https://news.ycombinator.com/item?id=3101201](https://news.ycombinator.com/item?id=3101201)

~~~
hhs
That's a good observation. This also makes me wonder whether one key reason
Ted Nelson's Project Xanadu didn't take off was that it was too complex from
the start.

~~~
acdha
It was definitely the case with the Semantic Web: a much greater up-front
investment with only hypothesized pay-offs down the road. Making simple things
hard is rarely going to be a popular choice.

~~~
hn_throwaway_99
IPv6 to a tee.

~~~
cr0sh
Could systemd be described by Gall's Law as well?

~~~
acdha
I would argue yes, that it's a good success story: it took a welter of init
scripts, supervisors, logging, event-triggered notification systems,
configuration conventions, etc., and provided a single standard mechanism
that is easier to work with than any one of those was on its own, much less
the number of combinations most systems have.

------
ylem
Oddly enough, he was my pediatrician. I would never have guessed about his
involvement in systems--though he did help spark my early interest in
astronomy. He was a generous man.

~~~
dullroar
I am a long-time fan, and have owned all three editions of his book (which
started out titled "Systemantics"). Back in the day you ordered it from his
pediatrics office and he sent it in the mail.

------
austincheney
Just to be clear, simplicity is the progression towards singularity and
complexity is the progression away from it. _Complex_ quite literally means
_put together_. The terms complex and simple are completely unrelated to
_easy_ or _challenging_.

That said, a task accomplished by a single nasty 5000-line function is three
times simpler than one using three small 10-line functions. Inheritance means
to extend and is thus inherently complex.

The simplicity of a system is the result of the competing requirements the
system addresses. A system that does one thing only is simple. A system that
does a few things is complex. A system that does many things is more complex.

This can be measured objectively. There is a GitHub plugin called _Code
Climate_ that awards a grade to code based upon things it considers
cognitively challenging. Large functions and large files are punished by that
grading metric. I prefer to organize my code with lexical scope, because the
structure of the code then directly reflects the availability of
capabilities. That is simpler, but it also tends to result in really large
functions that are trees of child functions. In this sense the Code Climate
grading metric punishes that form of simplicity and instead suggests breaking
things apart into many pieces to be put together. That is a punishment of
simplicity in a very direct way, even though the result is clean code that is
faster to read.
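That lexical-scope style can be sketched as one outer function whose child functions see its scope (a made-up example, not Code Climate's actual metric):

```python
# One large function organized as a tree of child functions: the nesting
# itself documents which capabilities are available where, since each child
# can only see names defined in its enclosing scopes.
def process_order(order):
    tax_rate = 0.08  # visible to every child function below

    def subtotal():
        return sum(item["price"] * item["qty"] for item in order["items"])

    def tax(amount):
        return amount * tax_rate  # reads the enclosing scope directly

    def total():
        s = subtotal()
        return s + tax(s)

    return round(total(), 2)

order = {"items": [{"price": 10.0, "qty": 2}, {"price": 5.0, "qty": 1}]}
print(process_order(order))  # → 27.0
```

A line-count-based metric would flag `process_order` as one large function, even though nothing outside it can reach `subtotal`, `tax`, or `total`.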

~~~
taeric
I think it is doing a disservice to push this as an objective measure. Not to
mention, you reduce it to a count of things. Some things are more complicated
than others, after all, regardless of the numbers.

Still, I think, from your last paragraph, that we agree. Many intended
objective measures don't actually lead to what they are pushing for. I see
this in designs from peers all the time: they index on terms they understand,
and miss the holistic system for it. It is very frustrating.

~~~
austincheney
Complexity is objective, so a numeric measure isn't good or bad. It's the
rules applied to that number that are subjective.

I am curious about why developers find this frustrating; it is very commonly
frustrating. Developers almost always claim to want simplicity until the
terms are defined, which makes me wonder what they actually want.

~~~
taeric
I contest that it is truly objective. Some complexity measures are objective.
Most merely appear so.

Consider, who wins a sports game is objective and quite simple to define. It
is whoever has the winning score. Now, the facets that go into all of the
plays and other aspects of the game? Those leave the realm of simplicity quite
quickly.

------
smacktoward
Any HN reader who has not read Gall's book _Systemantics_ (whose latest
printing is under the title _The Systems Bible_:
[https://www.amazon.com/Systems-Bible-Beginners-Guide-Large/dp/0961825170](https://www.amazon.com/Systems-Bible-Beginners-Guide-Large/dp/0961825170))
really ought to rectify that. It's not perfect, but it will give you lots of
things to think about, and Gall's witty writing style makes it a fun, easy
read.

(I pulled my copy out just a few weeks ago so I could quote a different
Gallism that was relevant to an HN discussion:
[https://news.ycombinator.com/item?id=18859680](https://news.ycombinator.com/item?id=18859680))

------
yingw787
I am heavily reminded of OpenStack, which was designed by committee to help
large companies' IT departments compete against AWS (and flopped), and AWS,
which grew out of a flat file storage service (and very much succeeded).

It still amazes me how warty and ugly production code can be sometimes -- yet
it works and puts food on the table. Meanwhile "beautiful" systems like Plan 9
languish on an old wiki somewhere.

------
DanielBMarkham
This is good and rings true. I've taken it a step further with my definition
of "Good Enough Programming" -- implementing technology has two primary
goals: it delivers value and you can walk away from it. To fail at either of
these two goals isn't good enough. [1]

The reason simple systems sometimes evolve into successful complex systems is
because development is completely decoupled from business needs. You code it,
you watch to see how useful it is. Complex systems, on the other hand, are
tightly coupled. You code it, it needs changing, you spend a lot of time
fretting over impedance mismatch, upgrades, patches, and so forth, instead of
looking at and evaluating true value. In fact, the more complex a system is,
the less you're able to evaluate both its current and potential value.

1. [http://tiny-giant-books.com/Entry1.html?EntryId=recj67HoP8cKW5Eso](http://tiny-giant-books.com/Entry1.html?EntryId=recj67HoP8cKW5Eso)

------
doe88
_Meta_: wondering if there exists a _law_ about all the various catalogued
_laws_.

(Nothing contrarian on my part, but lately I've observed that I'm internally
more critical and usually take all these laws with a grain of salt. I'm
feeling less and less inclined to _generalize_ everything. Of course, maybe
I'm wrong, but it seems I've caught a kind of _law-fatigue_, I hope it's
nothing serious :) ).

~~~
Jun8
One candidate would be: "90% of everything is crap"
[https://en.m.wikipedia.org/wiki/Sturgeon%27s_law](https://en.m.wikipedia.org/wiki/Sturgeon%27s_law),
extended to reported laws.

~~~
hhs
Very interesting; this made me think of the Pareto Principle, and then as I
read down, Wikipedia cites it. Also, I found it neat that Daniel Dennett uses
this as a tool for critical thinking.

------
jondubois
>> A complex system that works is invariably found to have evolved from a
simple system that worked...

Unless the system had an intelligent designer behind it from the beginning. ;p

There is a general truth to the statement but I'm not sure that it's
consistent enough to be a law.

Sometimes things can get overly complex and dysfunctional in spite of having
evolved to that point. Just look at the state of front-end development today:
TypeScript, WebAssembly, app bundling, source mapping, GraphQL, Protocol
Buffers, gRPC... Most of these projects try to make things simpler by adding
complexity; that's not possible. It's just madness, and all that complexity
does rear its ugly head from time to time in pretty much every project, but
developers just blame themselves instead of realizing that the root problem
is the tooling.

------
abetusk
I've collected a curated, abridged list of empirical laws:

[https://mechaelephant.com/dev/Empirical-Laws/](https://mechaelephant.com/dev/Empirical-Laws/)

~~~
rocqua
I'd suggest Hanlon's razor:

Don't assume malice when stupidity is a sufficient explanation.

~~~
justtopost
Too often I am bitten by that one. Some people are just nasty creatures. I
think I assume good faith too much.

------
tautology12
Isn't Gall's Law a tautology? Simplicity and complexity is a catch-up game.
Thesis, antithesis, and synthesis: Hegel's transcendence concept. First, let
us state the following axiom, or mechanism, for explanation: to explain
something you have to reduce it to simpler terms. This mechanism stems from
how we use our language: something is complex when one is able to compare it
with a simpler system that is functionally related. So the use of the word
complexity requires a hierarchy, a ladder that connects simplicity with
complexity.

An artificial mind could conceive and use a different concept of complexity
for constructing an artificial language. For instance, multiplying two 64-bit
integers is a simple instruction for a computer. In the analogy of the mind
as a neural network, we know that some problems cannot be solved with
perceptrons. Who draws the dividing line between a simple and a complex
system? Our minds compose the graph of concepts and relations looking for a
decomposition from simple to complex. But the definition of something simple
is completely relative to the context in which one poses the problem.
Building a computer, like the one I am typing on, was an infinitely complex
endeavour a century ago; today that endeavour is a solved problem.

Finally, in maths simplicity is rescaled: once you solve a problem and find a
method to solve related problems, the initial problem becomes trivial and the
pursuit of complexity is restarted. The ontology graph in our minds is always
evolving with new methods, concepts, and intuitions, and the
simplicity-complexity scale, measured by the distance in that graph, is
dynamically adjusted.

So I believe Gall's Law is a consequence of how we use the word complexity.

Edited: Grammar, spelling, and excuse me for my poor use of English.

------
rmm
I design lots of infrastructure for mining companies (physical mineral
resource mining) and this holds incredibly true.

It's one of the biggest things I push back on when our clients have scoping
meetings that just turn into endless strings of wants/needs.

------
externalreality
There is only one system that doesn't abide by this law: the cosmos,
including the quantum cosmos. All at once, all the complexity in the universe
came into being. Basically it's safe to say that "everything" disobeys this
law.

~~~
matt_kantor
Physics is still an unsolved problem. The Standard Model is complex (in that
it has a lot of free parameters), but it's not the final answer, and hopefully
whatever is underneath it is simpler.

I suppose you're talking about cosmology and not physics, but the story there
definitely gets simpler the further back you look in time. The entire universe
was an almost-uniform plasma for 300 thousand years or so, and once it cooled
down it was almost entirely hydrogen atoms for another 100 million years. The
only observable features would have been tiny changes in density from place to
place, and that's where all of the complex stuff we see today (stars,
galaxies, etc) came from.

------
Cardinal_
> First the simple then the composite, such is the methodology of the human
> mind

Voltaire

------
davidkuhta
Anyone know of any interesting counter-points or examples of exceptions?

------
quickthrower2
This is how I learn new tech. If I’m using docker for the first time you can
bet I’m doing an echo “hello world” first then incrementally building up from
there.
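That first iteration can literally be a two-line Dockerfile (base image chosen arbitrarily for illustration):

```dockerfile
# The entire first iteration: prove the toolchain works before adding anything.
FROM alpine:3.19
CMD ["echo", "hello world"]
```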

------
m3kw9
It's hard; sometimes "simple" is relative to each person and can be misinterpreted.

------
klingonopera
I disagree. Build a system too simple and you'll end up with technical debt
and an inability to scale; build it too complex and it probably won't ever
take off.

The first gives you a product that runs the risk of becoming obsolete once
it's deployed; the second never reaches that stage. Survivorship bias is what
makes Gall's Law even a thing.

I believe there's a sweet spot of simplicity/complexity for a successful
system. But on the downside, finding it can lead to whatever scientific name
this [1] phenomenon has...

[1] [https://xkcd.com/1908/](https://xkcd.com/1908/)

------
MrBuddyCasino
"Monolith first" is a specialization of this.

------
galaxyLogic
Makes sense; that is how evolution works.

~~~
klingonopera
But evolution is a result; there is no inherent design to it. I doubt you
could apply Gall's Law to evolution, except maybe the first sentence?

~~~
ilovetux
I would argue that there is a design to evolution, at least in the case of
sexual reproduction. True, it's not one big upfront design, but rather an
iterative design built up from the collective decisions of the partners.

------
bitwize
Something some prominent Linux programmers -- not naming names but one of them
rhymes with Cuisinart Buttering -- would do well to keep in mind.

~~~
acdha
You mean the people who looked at many existing systems and built a
replacement which solved many long-running nuisances while also being much
easier to use?

I think about these reflexive hate affirmations every time I replace many
hundreds of lines of buggy, hard to extend or troubleshoot SysV init script
with 10 lines of systemd unit which provides significant operational and
reliability improvements. There’s a good reason why most distributions
switched, and why most experienced admins appreciate not having to deal with
certain hassles ever again.
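For comparison, a hypothetical unit of roughly that size (daemon name and path invented):

```ini
# Hypothetical unit file for an imaginary "exampled" daemon -- roughly the
# ten lines that replace a few hundred lines of SysV init script.
[Unit]
Description=Example daemon
After=network.target

[Service]
ExecStart=/usr/local/bin/exampled --foreground
Restart=on-failure

[Install]
WantedBy=multi-user.target
```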

------
jhpriestley
counterexamples: git, paxos, TeX

~~~
timerol
I don't think git counts as a counterexample. The initial commit had 1244
lines, including documentation. It now has over 200,000 lines of C. The
initial commit didn't include branches, remotes, submodules, or many other git
features I use daily. It just had trees, blobs, and changesets. A simple core
of what git eventually became.
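That simple core can be sketched as a toy content-addressed store. The blob header below follows git's actual object format; everything else is simplified for illustration:

```python
import hashlib

# A toy content-addressed blob store in the style of git's initial core: an
# object is stored under the SHA-1 of a small header plus its content.
# (Git's actual blob format is b"blob <size>\x00<content>".)
objects = {}

def hash_blob(content: bytes) -> str:
    header = b"blob %d\x00" % len(content)
    return hashlib.sha1(header + content).hexdigest()

def store(content: bytes) -> str:
    oid = hash_blob(content)
    objects[oid] = content
    return oid

def load(oid: str) -> bytes:
    content = objects[oid]
    if hash_blob(content) != oid:  # content hashing rejects corrupt data
        raise ValueError("corrupt object")
    return content

oid = store(b"hello world\n")
print(oid)  # → 3b18e512dba79e4c8300dd08aeb37f8e728b8dad
```

Branches, remotes, and the rest were layered on top of exactly this kind of core.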

~~~
jhpriestley
If distributed version control is not "complex" then what is? Git has mostly
added features on top of a core model that was designed from scratch in one
go.

~~~
stefan_
Nothing in the git core cares about "distributed". It is distributed purely
in the sense that it is copied many times.

A distributed system that is not cognizant of its distributedness does not
incur the complexity of that.

~~~
jhpriestley
Lucky break, then, that the git core uses Merkle trees and content hashing,
which require no coordination and make it easy to reject corrupt data.

------
jgalt212
Did someone post this in hopes of starting a dynamic >> static language flame
war?

------
peepX
Anyone find some counterarguments?

------
amelius
The trick is to use abstraction, so complicated systems become simple systems.

~~~
mpweiher
“Hello leaky abstraction, my old friend,

I've come to talk with you again

Because a vision softly creeping

Left its seeds while I was sleeping…”

