Gall's Law (wikipedia.org)
329 points by mpweiher on March 20, 2019 | 101 comments



> A complex system that works is invariably found to have evolved from a simple system that worked. A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over with a working simple system. – John Gall

While this sounds more like a strong tendency than a law, it sure rings true for my programming career. I've built complex systems from scratch, but they all started from the universe of discourse of an existing business model. When writing such systems, I've learned to always start from a dead-simple version (I called one such system "Moe" to purposely keep it stupid simple) and then increment it.

If the creationists are correct, and God did indeed cook up Adam from scratch, that would be more evidence of the exceptional power of God. What we do know about actual evolution seems to support Gall's Law.


> What we do know about actual evolution seems to support Gall's Law.

But human design is not evolution. That Gall's Law, a form of incrementalism, holds for systems without a designer seems almost true by definition, but it is far from obvious for human design.

We have many examples of human design that went from "0 to 1": the von Neumann architecture, the first spaceships, breakthrough theories in mathematics or physics. While you can debate whether those systems were built literally from scratch, they sure made qualitative jumps not comparable to evolution.

And I would also point out that there is a strong survivorship bias in Gall's Law. All sustainable complex systems built up incrementally that are still around are by definition examples of success. But we don't see the resource cost of incrementalist dead ends, or their limitations ("you don't get to the moon by climbing trees"), whereas every failure in ambitious design is actively debated, or even derided.


> And I would also point out that there is a strong survivorship bias in Gall's Law. All sustainable complex systems built up incrementally that are still around are by definition examples of success. But we don't see the resource cost of incrementalist dead ends, or their limitations.

I don't see this as a problem. As expressed, Gall's Law describes necessary conditions for success, not sufficient conditions. Not all evolutions of simple systems will work, but according to the law not starting with a simple system will always result in failure.

There's still undoubtedly plenty to criticize, and I'm sure there's a law somewhere saying that all simple laws are wrong, but this law does express a coherent logical proposition.


> The von Neumann architecture

How do you get that? There were decades of prior experience, both in computer science theory and in practice, especially from the code breakers in WW2.


A decade of experience at best.

An example from 1945, also from von Neumann, is mergesort: the first O(n log n) sorting algorithm, and still in use to this day. Von Neumann was known for inventing things from absolute scratch.
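
For reference, here's a minimal sketch of classic top-down mergesort in Python (illustrative, not von Neumann's original formulation):

    def merge_sort(xs):
        # Recursively split, then merge the sorted halves: O(n log n) overall.
        if len(xs) <= 1:
            return xs
        mid = len(xs) // 2
        left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
        merged, i, j = [], 0, 0
        while i < len(left) and j < len(right):  # the merge step is linear
            if left[i] <= right[j]:
                merged.append(left[i])
                i += 1
            else:
                merged.append(right[j])
                j += 1
        return merged + left[i:] + right[j:]     # append the leftovers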


In no way did spaceships go from 0 to 1. Rocketry and flight, combustion, the list goes on and on.


The fascinating and sometimes exasperating thing about software development is that there are no absolutes. You can find exceptions to any “rule” you care to name.

That said, I’ve found that in the vast majority of cases you’re better off starting with a simple prototype and iteratively adding new features and refactoring than you are trying to build a complex system from scratch.


Von Neumann seems to me brilliant, and his contributions vastly underappreciated.

But isn't any architecture, by definition, also the creation of a complex system from simple systems?


This made me think of your point, too: "what we do know about actual evolution seems to support Gall's Law."

But I do wonder if this world is one big 'system'? Or is it something else? I am curious whether we are limited by our languages into thinking only of 'systems'. Do we need to figure out something else (e.g., via science tools and stronger languages) to get a 'deep' sense of reality? By this, I draw on 'linguistic relativity'.


Some people have been working on that: https://en.wikipedia.org/wiki/System_of_systems#Research


Given that systems can already contain other systems, I don't see what this concept adds that people who work with complex systems (for example, climate modelling) wouldn't be dealing with anyway.


It's just an abstraction. Climate scientists know how to deal with their climate system, but they may not be equipped to then relate their data back to a military defense system or water treatment plant. SoS allows a different team to take a set of not-quite-related systems and analyze their interactions, similar to how a climate scientist might analyze climate models.


This is interesting, especially with the points on "Establishment of an effective frame of reference" and "Crafting of a unifying lexicon", thank you. I still wonder if we're limited by our languages and need to consider constructs other than 'system'?


Damn, mind blown. Thanks for this.


Yup we do. It's called quantum field theory - https://www.youtube.com/watch?v=zNVQfWC_evg


I'll check this out, thanks. This gets at the granularity, though I still wonder if our languages, to date, are primitive and are blocking us from thinking in categories other than 'system'?


What is a system?

"sys·tem noun 1.a set of things working together as parts of a mechanism or an interconnecting network.

"the state railroad system"

2. a set of principles or procedures according to which something is done; an organized scheme or method."

This is the mildy condensed version from wikipedia regarding the definition of system. I know that we classify a lot of things in the world as systems, for example: ecosystems.

It seems to me you are considering that maybe the world doesn't operate, from a higher point of view, as a "system", or that "system" is not the correct terminology? I think our definition of "system" might cover the world as we know it very well. All the little parts of the world have a purpose. The ants, when they come out of their eggs, seem to have an internal rule set. They know what it is they "do". They go right to it, and that is all they do for the rest of their existence: building their nests, protecting their nests, the queen and her role. I suppose the same can be said about most of nature. It's a lot to think about, as the diversity of it is amazing, but it is also fun to observe and ponder.


Exactly, I think you hit at it when you write: "it seems to me you are considering that maybe the world doesn't operate, from a higher point of view, as a 'system', or that 'system' is not the correct terminology?"

Plus, you give a useful example with the ants; for them, their humble interactions are relative to their existence. In contrast, for us, maybe 'system' is the term we're bounded by relative to our existence. It may be the best frame of reference we can grasp. It could be our edge, and we can't go further.

If that's the case, I wonder if evolution needs to advance further to get at some sharper-level species. Compared to a human, might a sharper-level species perceive the world operating at a higher point of view other than a 'system'? And I wonder how their languages might operate?


Most definitely primitive. Which is why we need math, musical notation, sign language, programming languages etc etc. They all get created to overcome limitations of spoken language. And they all have their own advantages and disadvantages. And they are all constantly evolving.

Here's psychologist Wendell Johnson's view explaining why people misunderstand things easily: “Our language is an imperfect instrument created by ancient and ignorant men. It is an animistic language that invites us to talk about stability and constants, about similarities and normal and kinds, about magical transformations, quick cures, simple problems, and final solutions. Yet the world we try to symbolize with this language is a world of process, change, differences, dimensions, functions, relationships, growths, interactions, developing, learning, coping, complexity. And the mismatch of our ever-changing world and our relatively static language forms is part of our problem.”


I agree, the interaction of languages helps us appreciate the world. And that's a useful quote from Wendell Johnson. It reminds me of Wittgenstein's words, "The limits of my language mean the limits of my world".


> What we do know about actual evolution seems to support Gall's Law.

That's because evolution takes place through very small changes in the genome. It is an undirected process. Sexual reproduction makes evolution better on certain fitness landscapes, since you can basically jump somewhere between two points on the landscape when two organisms sexually reproduce, but it is still undirected. This doesn't necessarily show that conscious long-term planning is ineffective.


Who is John Gall?

(I'm sorry, I had to!)


Gall's Law maps to building "The simplest thing that could possibly work."

However, one can work at an even smaller granularity. "The simplest thing that could possibly work," can be made out of stupid things that won't work. To build the networking for an MMO, I cookbooked a websockets chat demo, then used it to pass updates from the server to the client. No login mechanism. No security. No dead reckoning. No fancy synchronization. All those things can be added later, however. It will be easier to add them to a running system. (Provided one also refactors to clean up code.)

If you're stuck at how to proceed, feel free to stub out features in a way which can't possibly work in production, but which will let you compile, run, and test your system. It's always easier for me to modify a running system than it is to big-bang an entire system from scratch.
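
For instance, a day-one sketch for the MMO case above might be nothing more than this (Python; all names here are hypothetical stand-ins):

    # Stubs that can't possibly survive production, but keep the loop running.
    def authenticate(username, password):
        # TODO real login; for now, everyone is who they say they are.
        return {"user": username, "authenticated": True}

    def sync_position(player, state):
        # TODO dead reckoning / fancy synchronization; for now, send raw state.
        player.send(state)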


Gall says nothing about the simplest thing, only that complex systems grow out of simple small ones. I mention this not to be a stickler, but to highlight the interesting possibility that "the simplest thing that could possibly work" might not actually (or always) be something that a complex later system can evolve out of. We don't know that. I'm reminded of a comment by Alan Kay about how you don't want the lowest stratum of a system to be too simple.


> Gall says nothing about the simplest thing, only that complex systems grow out of simple small ones.

My point is just that the way Gall's Law maps to something like Extreme Programming is through, "the simplest thing that could possibly work." When one is starting out a new project, that is precisely how one applies Gall's Law.

"the simplest thing that could possibly work" might not actually (or always) be something that a complex later system can evolve out of.

Software shouldn't be that limited. In particular, if you are using principles/tactics like DRY and the Law of Demeter, you should be able to grow your project into a more complex one. (Or, in preparation for growth, apply those and then refactor.)

> I'm reminded of a comment by Alan Kay about how you don't want the lowest stratum of a system to be too simple.

This is probably because the lowest stratum can be the hardest to change. In software, it's even possible to take away the foundations and exchange them! I've worked with people who have done it. (Migrate an app from an Object Database to a relational one, for example.)


> Gall says nothing about the simplest thing, only that _SUCCESSFUL_ complex systems grow out of simple small ones

Fixed. Missing a key word. The whole point of the law is that complex systems can be built from scratch but will fail. The successful ones grew organically.


Weapon system designers sometimes observe that an older system on its nth iteration can be better in the field than the revolutionary new ground-up design. (Until the latter gets to its nth iteration.)


"to highlight the interesting possibility that "the simplest thing that could possibly work" might not actually (or always) be something that a complex later system can evolve out of."

One of my big focuses when I'm designing a system is not to get everything right up front, but to always be looking two or three steps ahead and making sure there is a reasonable path to get to where we need to be. It's one of the ways in which I definitely do not 100% subscribe to the idea of just doing what you need to do right now and consciously avoiding any other planning; if you're not poking your head up and keeping an eye on where the project is most likely to head, it's too easy to grind something into the very foundation of the architecture that will make it essentially impossible to get to where you are actually going.

For example, if today, your system doesn't require any networking, then it may be advantageous to do everything as a fully local process with no networking involved, because networking is hard and expensive for your architecture. However, if you peer six months into the future and it's pretty obvious that you're going to have to add networking of some kind to your system, you can often cost-effectively hedge your bets without necessarily going to full-on networking usage by making sure that as you proceed in the initial steps, you tend away from any solutions that will make it impossible to add a networking layer in the future. Often you've got several options that are effectively the same price right now and have effectively the same benefits (never identical, but close enough), but have very different characteristics if you consider where the project is likely to go. You can often take out cheap insurance that pays off big later, and on those occasions where it may not pay off, it wasn't that expensive in the first place, so it's not that big a deal.

For a much more micro example, this is why every non-trivial file you write and every network protocol you specify should have a version number in it. It's virtually free today to put one in, even if you don't need it right now. (And for version 1, almost by definition you don't.) But the odds that you'll need it in the future, and that it'll be much harder to put in, are very good. Not putting a version number in today because you don't need it today is a great way to constrain your future moves.
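
Concretely, the insurance can be a single extra field (a Python sketch; the wrapper format here is just an assumption for illustration):

    import json

    FORMAT_VERSION = 1  # nearly free today, hard to retrofit later

    def save(path, payload):
        with open(path, "w") as f:
            json.dump({"version": FORMAT_VERSION, "data": payload}, f)

    def load(path):
        with open(path) as f:
            doc = json.load(f)
        if doc.get("version", 0) > FORMAT_VERSION:
            raise ValueError("written by a newer version of this program")
        return doc["data"]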

(I actually still sort of endorse the radical YAGNI approach for young designers. Young designers who don't have a lot of experience tend to cargo cult the more complicated designs they see in other systems without deeply understanding what those are there for. I think there's a lot of advantage to taking a radically simplifying approach to your early designs, and you're likely to produce better work in almost every sense as a result if you do. But as you gain experience with that approach, bear in mind that it is something you will eventually outgrow. Armed with the direct experience of what problems arise and what solutions there are, and, alas, yes, the times you backed yourself into a corner, you'll grow the wisdom needed to peer ahead a bit and get the future course right often enough to make even better decisions. I'm not perfect at it, but I'm better at it than the hard-core XP propaganda seems to believe is possible.)


> One of my big focuses when I'm designing a system is not to get everything right up front, but to always be looking two or three steps ahead and making sure there is a reasonable path to get to where we need to be.

This is very key! This goes back to the "driving the car vs. aiming a cannon" analogy from Extreme Programming. Instead of aiming a cannon and hitting the target in one shot, a driver steers moment to moment while thinking about the road ahead and the next turn. (As opposed to driving with no thinking about the next turn and the road ahead, which is just bad driving.)

> Young designers who don't have a lot of experience tend to cargo cult the more complicated designs they see in other systems without deeply understanding what those are there for.

Also very key. One shouldn't just imitate the form, but understand the forces that determined the form!


> It's always easier for me to modify a running system than it is to big-bang an entire system from scratch.

Reminds me of the tracer bullets concept from "The Pragmatic Programmer". I've found it really helpful when I'm not sure what to do next.


Tracer code is "The Pragmatic Programmer" approach to applying Gall's Law:

Tracer code is not disposable: you write it for keeps. It contains all the error checking, structuring, documentation, and self-checking that any piece of production code has. It simply is not fully functional. However, once you have achieved an end-to-end connection among the components of your system, you can check how close to the target you are, adjusting if necessary.

What I'm describing is also how one can incrementally get to the Tracer Code. First, write something stupid. Perhaps it's fragile and hardcoded and only sends one particular query and gets the answer back. Then correct the hardcoding and start filling in the missing pieces.
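
A sketch of that progression (Python; query_server and the schema are hypothetical):

    def query_server(sql, params=()):
        # Stand-in for the real network round trip.
        raise NotImplementedError

    # Step 1: fragile, hardcoded, but exercises the whole path end to end.
    def fetch_score_v1():
        return query_server("SELECT score FROM players WHERE id = 42")

    # Step 2: same path, hardcoding removed; keep filling in pieces from here.
    def fetch_score_v2(player_id):
        return query_server("SELECT score FROM players WHERE id = ?",
                            (player_id,))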


Makes me think of something I say occasionally... "The simplest thing that can possibly work is simpler than the simplest thing you can possibly imagine."


Related: Second System Effect

https://en.wikipedia.org/wiki/Second-system_effect

As an aside, I'd love for the Twitter engineering teams that fixed the fail whale to one day write about how this related to fixing Twitter's stability and performance. It seems they simultaneously made major platform shifts and managed to fix things. Maybe it was done piece by piece, and perhaps an SOA/microservices architecture enabled it to feel like a rewrite without being one all at once. I'd love to hear the story. My understanding is that in addition to platform shifts, the actual engineering teams cycled out almost completely at least once, too. Really hard changes to manage through.


I remember seeing the fail whale constantly until around 2010, and honestly, as an almost daily user of Twitter, I don't think I've seen it since; the site is one of the most stable I visit (via mobile.twitter.com).

Reddit could definitely use a lesson from them


I can understand the idea of starting from a simple spot, and iterating/evolving a system to more complexity.

But where does - or can - security fit into this?

That is, over my years of developing I've consistently found that when making a "secure system" - i.e., implementing some kind of permissions-based system on top of a login/password or other authorization scheme, for access controls and such...

...doing so after the fact (i.e., band-aiding) - when the system has already become complex enough to "suddenly" need it - usually ends up as a very poor implementation with tons of holes. It also tends to be very grueling to implement, and lots of refactoring tends to be needed to accommodate the changes.

A similar kind of issue can also be found in the hardware world, specifically automated/robotic systems where you need to implement and think about certain safety measures, cutoffs, big-red-button-full-stop measures. Doing so after the fact can lead to missed edge cases or other issues that can make the system less robust and/or less safe - but only in retrospect after something fails to work from a safety perspective (and it is compounded by the fact that such a system will have both physical hardware and electronics coupled with software).

From that standpoint and personal experience, I've always tried to stress for new projects that such parts of the system, if needed or thought possibly necessary in the future (which is pretty much guaranteed for all but the simplest examples), should be designed and implemented "up front", as the first part of the project. Get them in place and solidly built, to the best of your knowledge, then build the rest of the project around those security/safety systems.

However, such systems can tend to be or become extremely complex in their own right.

Am I wrong in this assessment? Should I not be advocating for such things for new designs? How do you balance this "law" with such needs? Stub all the things?


Gall's Law applies to the security part of a project too. A complex security subsystem needs to evolve from a simpler security subsystem. Waiting until the project as a whole is super complex and needs a complex security subsystem built from scratch is asking for failure.
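
A sketch of what the simpler security subsystem might look like on day one (Python; hypothetical names): a single choke point every request goes through, so richer rules can grow behind it without rewiring callers.

    # Day one: one deliberately dumb authorization choke point.
    def authorize(user, action, resource):
        # Today's whole policy: admins do anything, everyone else reads.
        if user.get("is_admin"):
            return True
        return action == "read"

    # Later: roles, ownership, audit logging, etc. can evolve here,
    # behind the same call site, instead of being bolted on everywhere.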


The word "simple" depends on the context. What's simple for a human may be very complex for a machine and vice-versa. What's simple may be very different for different people and machines.

Most of the time, the word "simple" when applied to technology means "well-understood by the people who will use and maintain it". A simple thing can seem very complex for outsiders. For example, I used to think the C programming language was horribly complex, but now that I've spent serious time with it, I think it's actually rather simple.

Therefore, I think it's fair to replace most uses of the word "simple" in technology with "well-understood", like this:

A complex system that works is invariably found to have evolved from a well-understood system that worked. A poorly understood system designed from scratch never works and cannot be patched up to make it work. You have to start over with a working, well-understood system.

I agree that permissions-based systems have necessary upfront complexity, but I also think they can be well-documented and well-understood.


This may have been one of the strongest reasons for Google Wave’s failure[0], which was alluded to many times on HN. It was too complex from the start (1M LoC), did not evolve very fast at all, started with a huge number of engineers, and was prematurely optimized (but still plagued with performance issues). As cool as Wave was from the start, Gall’s Law certainly applies to it.

[0]: https://news.ycombinator.com/item?id=3101201


Interestingly, the best ideas of Wave made their way into Google Docs, where they are frequently used today. Collaborative web-based editing, inserting images, tracking changes - we take these for granted now.

Docs (and its predecessor Writely) was the working simple system. Wave was a great complex system, but it failed and then the carcass was mined for ideas that could be applied to the working simple system.


Like Unix looted the corpse of MULTICS, I suppose.


And how both Windows and Mac looted the corpse of the Alto, and MacOS X + iOS looted the corpse of NextSTEP, and Linux looted the corpse of Gnu, and Android and the iPhone looted the corpse of General Magic, and the WWW looted the corpse of Xanadu, and most modern programming languages are continuing to loot the corpse of Lisp.

Actually, it seems like a lot of very successful companies get started by taking a failed project that was too complex, throwing out everything that was difficult, re-doing the core of it in a short period of time, and then slowly bolting back on the features that were discarded in hacky, inelegant ways that nonetheless work. Makes me reconsider my own possibly-too-complex-for-its-own-good startup idea to see if there's a kernel of usefulness that can be more easily achieved.


I agree with most of this... except the Linux/GNU thing. Early on, GNU was an operating system in need of a kernel and Linux was a kernel in need of an operating system. Today we have a number of distributions which could rightfully be called GNU/Linux systems, as it's a symbiotic pairing of technologies.


That's a good observation. This also makes me wonder if one key reason why Ted Nelson's Project Xanadu didn't take off was because it was too complex from the start?


It was definitely the case with the Semantic Web: a much greater up-front investment with only hypothesized pay-offs down the road. Making simple things hard is rarely going to be a popular choice.


IPv6 to a tee.


Could systemd be described by Gall's Law as well?


I would argue yes; it's a good success story: it took a welter of init scripts, supervisors, logging, event-triggered notification systems, configuration conventions, etc. and provided a single standard mechanism which is easier to work with than any one of those was on its own, much less the number of combinations most systems have.


Oddly enough, he was my pediatrician. I would never have guessed about his involvement in systems--though he did help spark my early interest in astronomy. He was a generous man.


I am a long-time fan, and have owned all three editions of his book (which started out titled "Systemantics"). Back in the day you ordered it from his pediatrics office and he sent it in the mail.


Just to be clear: simplicity is the progression towards singularity, and complexity is the progression away from singularity. "Complex" quite literally means "put together". The terms complex and simple are completely unrelated to easy or challenging.

That said, a task accomplished by a single nasty 5000-line function is three times simpler than one using 3 small 10-line functions. Inheritance means to extend and is thus inherently complex.

The simplicity of a system is the result of the competing requirements the system addresses. A system that does one thing only is simple. A system that does a few things is complex. A system that does many things is more complex.

This can be measured objectively. There is a GitHub plugin called Code Climate that awards a grade to code based upon things it considers cognitively challenging. Large functions and large files are punished by that grading metric. I prefer to organize my code with lexical scope, because the structure of the code then directly reflects the availability of capabilities. That is simpler, but it also tends to result in really large functions that are trees of child functions. In this sense the Code Climate grading metric punishes that form of simplicity and instead suggests breaking things apart into many pieces to be put together. That is a punishment of simplicity in a very direct way, even though the result is clean code that is faster to read.
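
To make the trade-off concrete, a small Python sketch: the nesting itself documents which helpers are reachable from where, but a length-based metric sees only one big function.

    def process_orders(orders):
        # Child functions exist only inside this scope; the lexical
        # structure itself says what can call what.
        def valid(order):
            return order.get("qty", 0) > 0

        def total(order):
            return order["qty"] * order["price"]

        return sum(total(o) for o in orders if valid(o))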


I think it is doing a disservice to push this as an objective measure. Not to mention, you reduce it to a count of things, and some things are more complicated than others, regardless of the numbers.

Still, I think, from your last, that we agree. Many intended objective measures actually don't lead to what they are pushing for. I see this in designs from peers all the time. They index on terms they understand, and miss the holistic system for it. It is very frustrating.


Complexity is objective, so a numeric measure isn’t good or bad. It’s the rules upon that number that are subjective.

I am curious why developers find this frustrating; it very commonly is. Developers almost always claim to want simplicity until the terms are defined, which makes me wonder what they actually want.


I contest that it is truly objective. Some complexity measures are objective. Most merely appear so.

Consider, who wins a sports game is objective and quite simple to define. It is whoever has the winning score. Now, the facets that go into all of the plays and other aspects of the game? Those leave the realm of simplicity quite quickly.


It pains me to think of the amount of damage this class of false positive is silently doing.


What false positive?


> Large functions and large files are punished by that grading metric. I prefer to organize my code with lexical scope, because the structure of the code then directly reflects the availability of capabilities. That is simpler, but it also tends to result in really large functions that are trees of child functions. In this sense the Code Climate grading metric punishes that form of simplicity and instead suggests breaking things apart into many pieces to be put together. That is a punishment of simplicity in a very direct way, even though the result is clean code that is faster to read.


Any HN reader who has not read Gall's book Systemantics (whose latest printing is under the title The Systems Bible: https://www.amazon.com/Systems-Bible-Beginners-Guide-Large/d...) really ought to rectify that. It's not perfect, but it will give you lots of things to think about, and Gall's witty writing style makes it a fun, easy read.

(I pulled my copy out just a few weeks ago so I could quote a different Gallism that was relevant to an HN discussion: https://news.ycombinator.com/item?id=18859680)


I am heavily reminded of OpenStack, which was designed by committee to help large companies' IT departments compete against AWS (and flopped), and AWS, which grew out of a flat file storage service (and very much succeeded).

It still amazes me how warty and ugly production code can be sometimes -- yet it works and puts food on the table. Meanwhile "beautiful" systems like Plan 9 languish on an old wiki somewhere.


This is good and rings true. I've taken it a step further with my definition of "Good Enough Programming" -- Implementing technology has two primary goals: it delivers value and you can walk away from it. To fail at either of these two goals isn’t good enough. [1]

The reason simple systems sometimes evolve into successful complex systems is that development is completely decoupled from business needs. You code it, you watch to see how useful it is. Complex systems, on the other hand, are tightly coupled. You code it, it needs changing, you spend a lot of time fretting over impedance mismatches, upgrades, patches, and so forth, instead of looking at and evaluating true value. In fact, the more complex a system is, the less you're able to evaluate both its current and potential value.

1. http://tiny-giant-books.com/Entry1.html?EntryId=recj67HoP8cK...


Meta: wondering if there exists a law about all the various catalogued laws.

(Nothing denialist on my part, but lately I've noticed that I'm internally more critical and usually take all these laws with a grain of salt. I feel less and less inclined to generalize everything. Of course, maybe I'm wrong, but it seems I've caught a kind of law fatigue; I hope it's nothing serious :) ).


One candidate would be Sturgeon's law, "90% of everything is crap" (https://en.m.wikipedia.org/wiki/Sturgeon%27s_law), extended to reported laws.


Very interesting. This made me think of the Pareto principle, and then as I read down, Wikipedia cites it. Also, I found it neat that Daniel Dennett uses this as a tool for critical thinking.


>> A complex system that works is invariably found to have evolved from a simple system that worked...

Unless the system had an intelligent designer behind it from the beginning. ;p

There is a general truth to the statement but I'm not sure that it's consistent enough to be a law.

Sometimes things can get overly complex and dysfunctional in spite of having evolved to that point. Just look at the state of front-end development today: TypeScript, WebAssembly, app bundling, source mapping, GraphQL, Protocol Buffers, gRPC... Most of these projects try to make things simpler by adding complexity; that's not possible. It's just madness, and all that complexity rears its ugly head from time to time in pretty much every project, but developers just blame themselves instead of realizing that the root problem is the tooling.


I've collected a curated, abridged list of empirical laws:

https://mechaelephant.com/dev/Empirical-Laws/


I'd suggest Hanlon's razor:

Don't assume malice when stupidity is a sufficient explanation.


Too often I am bitten by that one. Some people are just nasty creatures. I think I assume good faith too much.


Isn't Gall's Law a tautology? Simplicity and complexity play a catch-up game. Thesis, antithesis and synthesis: Hegel's transcendence concept. First, let's state the following axiom, or mechanism, for explanation: to explain something, you have to reduce it to simpler terms. This mechanism stems from how we use our language: something is complex when one is able to compare it with a simpler system that is functionally related. So the use of the word complexity requires a hierarchy, a ladder that connects simplicity with complexity.

An artificial mind could conceive and use a different concept of complexity for constructing an artificial language. For instance, multiplying two 64-bit integers is a simple instruction for a computer. In the analogy of the mind as a neural network, we know that some problems cannot be solved with perceptrons. Who draws the dividing line between a simple and a complex system? Our minds compose the graph of concepts and relations looking for a decomposition from simple to complex. But the definition of something simple is completely relative to the context in which one poses the problem. Building a computer, like the one I am typing on, was an infinitely complex endeavour a century ago; today that endeavour is a solved problem.

Finally, in maths simplicity is rescaled: once you solve a problem and find a method to solve related problems, the initial problem becomes trivial and the pursuit of complexity is restarted. The ontology graph in our minds is always evolving with new methods, concepts and intuitions, and the simplicity-complexity scale, measured by distance in that graph, is dynamically adjusted.

So I believe Gall's Law is a consequence of how we use the word complexity.

Edited: Grammar, spelling, and excuse me for my poor use of English.


I design lots of infrastructure for mining companies (physical mineral-resource mining) and this holds incredibly true.

It's one of the biggest things I push back on when our clients have scoping meetings that just turn into endless strings of wants/needs.


There is only one system that doesn't abide by this law: the cosmos, including the quantum cosmos. All at once, all the complexity in the universe came into being. Basically, it's safe to say that "everything" disobeys this law.


Physics is still an unsolved problem. The Standard Model is complex (in that it has a lot of free parameters), but it's not the final answer, and hopefully whatever is underneath it is simpler.

I suppose you're talking about cosmology and not physics, but the story there definitely gets simpler the further back you look in time. The entire universe was an almost-uniform plasma for 300 thousand years or so, and once it cooled down it was almost entirely hydrogen atoms for another 100 million years. The only observable features would have been tiny changes in density from place to place, and that's where all of the complex stuff we see today (stars, galaxies, etc) came from.


> First the simple then the composite, such is the methodology of the human mind

Voltaire


Anyone know of any interesting counter-points or examples of exceptions?


This is how I learn new tech. If I’m using docker for the first time you can bet I’m doing an echo “hello world” first then incrementally building up from there.


It's hard; sometimes "simple" is relative to each person and can be misinterpreted.


I disagree. Build a system too simple and you'll end up with technical debt and something that can't scale; build it too complex and it probably won't ever take off.

The first gives you a product that runs the risk of becoming obsolete once it's deployed; the second never reaches that stage. Survivorship bias is what makes Gall's Law even a thing.

I believe there's a sweet spot of simplicity/complexity for a successful system. But on the downside, finding it can lead to whatever scientific name this [1] phenomenon has...

[1] https://xkcd.com/1908/


"Monolith first" is a specialization of this.


Makes sense; that is how evolution works.


But evolution is a result; there is no inherent design to it. I doubt you could apply Gall's Law to evolution; maybe just the first sentence?


I would argue that there is a design to evolution, at least in the case of sexual reproduction. True, it's not one big, upfront design, but rather an iterative design built up from the collective decisions of the partners.


I'm not saying that somebody designed evolution, but that Gall's Law makes sense because it resembles how evolution works, and we know that evolution works. We are the proof of that.


Something some prominent Linux programmers -- not naming names but one of them rhymes with Cuisinart Buttering -- would do well to keep in mind.


You mean the people who looked at many existing systems and built a replacement which solved many long-running nuisances while also being much easier to use?

I think about these reflexive hate affirmations every time I replace many hundreds of lines of buggy, hard to extend or troubleshoot SysV init script with 10 lines of systemd unit which provides significant operational and reliability improvements. There’s a good reason why most distributions switched, and why most experienced admins appreciate not having to deal with certain hassles ever again.


Counterexamples: git, Paxos, TeX


I don't think git counts as a counterexample. The initial commit had 1244 lines, including documentation. It now has over 200,000 lines of C. The initial commit didn't include branches, remotes, submodules, or many other git features I use daily. It just had trees, blobs, and changesets. A simple core of what git eventually became.
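
That simple core is still visible in how git names a blob: the object id is just a hash over a tiny header plus the raw content (a Python sketch of the scheme; early git used SHA-1 as shown):

    import hashlib

    def blob_id(content: bytes) -> str:
        # Git hashes b"blob <size>\0" followed by the file's raw bytes.
        header = b"blob %d\0" % len(content)
        return hashlib.sha1(header + content).hexdigest()

    # Matches what `git hash-object` computes for the same bytes.
    print(blob_id(b"hello world\n"))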


If distributed version control is not "complex" then what is? Git has mostly added features on top of a core model that was designed from scratch in one go.


Nothing in the git core cares about "distributed". It is distributed purely in the sense that it is copied many times.

A distributed system that is not cognizant of its distributedness does not incur the complexity of that.


Lucky break, then, that the git core uses Merkle trees and content hashing, which require no coordination and make it easy to reject corrupt data.


Git evolved out of passing around patch files by e-mail, a workflow that was later replaced by BitKeeper. By the time Linus wrote git, he had a very good idea of the workflow he needed, the data to be stored, and the OS tools available to him, since he'd been doing it manually for 10 years and had written much of the OS.

There's actually a counter to your counterexample - Darcs was a competing DVCS written in Haskell and based on a comprehensive theory of patch algebra. It didn't work - it was too slow to fit into people's workflow, despite being provably correct. Git - as basically the automatic version of emailing patch files around - was much more successful.


No, TeX was a second try. From The TeXbook:

"The \TeX\ language described in this book is similar to the author's first attempt at a document formatting language, but the new system differs from the old~one in literally thousands of details. Both languages have been called \TeX; but henceforth the old language should be called \TeX78, and its use should rapidly fade away. Let's keep the name \TeX\ for the language described here, since it is so much better, and since it is not going to change any more."


TeX78 was also highly sophisticated though. See https://www.saildart.org/TEXDR.AFT%5B1,DEK%5D1 from 1977, where core features like macros, horizontal and vertical boxes, boxes and glue, badness scores are all discussed in quite some detail. TeX underwent a lot of evolution but it was recognizably TeX, and had the sophisticated and coherent core features of TeX, from the start.


TeX evolved out of manual typesetting. All of the ideas of horizontal & vertical boxes, glue, badness scores are computer formalizations of the work that typesetters have been doing for centuries. Knuth's first order of business when he wrote TeX was to learn how printers did this stuff; he drew on 500 years of "simple solutions" going back to Gutenberg to write it.


One would expect that Knuth's first cut would be a good one. My point is that the thing we know as TeX is not the first implementation, and that TeX is not a counterexample.

Even in the tech note you reference, at the end, Knuth is referring to improving the initial system by following related work in math parsing (by Kernighan among others). And AFAICT the final Knuth-Plass line-breaking algorithm is an extension of the original approach described in that tech note. And hyphenation also changed.

According to http://walden-family.com/ieee/dtp-tex-part-1.pdf, the two implementations (TeX78, TeX82 = TeX) are in different languages (admittedly, both Algol family):

"...Over the period 1977–1984, Knuth himself wrote two completely different implementations of [...] TEX (called TEX78, written in SAIL, and TEX82, written in WEB)"

Here is how Knuth described the rewrite from SAIL to WEB:

"I needed to work out a better 'endgame strategy', and it soon became clear what ought to be done: the original versions of TEX and METAFONT should be scrapped, once they had served their purpose of accumulating enough user experience to indicate what such languages ought to be. New versions of TEX and METAFONT should be written, designed to last a long time and to be highly portable between computers and typesetting devices of all kinds."


Here's the kicker: These two implementations are separate from the older proof-of-concept implementation that Knuth asked his grad students to do while he went away, back in 1977:

"Knuth then left town for the better part of two months, leaving his students Frank Liang and Michael Plass to do a prototype implementation of TEX. Plass remembers that they managed to implement enough of TEX to process “a subset of TEX input all the way to XGP [Xerox graphics printer] output. . . . Knuth . . . seemed pleased with our efforts, and proceeded to re-code TEX himself.” Knuth later noted that Plass and Liang had prototyped only a fraction of the full TEX and that it was important that he code the complete program because he learned so much from doing the coding."

So, even the prototype TeX77 was implemented twice.


TEXDR.AFT is the document Knuth wrote in May 1977 for TEX[1], before a single line of code was written. This was the first thing he typed up into a computer, after about two months of thinking about the problem.

About a month later it was refined to TEX.ONE (both since republished in Digital Typography, along with some of Knuth's diary entries from the time).

This is the specification that his two students, Plass and Liang, tried to implement from, while he was in China. Then he came back and decided to implement it himself, which he did over the next year; the SAIL program now known as TeX78. His Pascal (WEB) rewrite was started in 1980 and finished in 1982.

So the person you're replying to is correct that TeX had its sophisticated features (as in those present in TEXDR.AFT) in its design, right from the start. Of course the current TeX82 is indeed a rewrite with thousands of changes, but much of the core features of TeX were present in the initial design.

Of course one could explain this "counterexample" with the observation that Knuth is not exactly a typical programmer, as he prefers to code on paper (https://news.ycombinator.com/item?id=10172924) and spent two months writing a detailed spec, and at least one month getting feedback, before any code was written. Perhaps his initial (hand-written) design was indeed simpler.

[1]: Aside: TEX was his original name for it, before the name conflict with Honeywell's TEX https://en.wikipedia.org/wiki/Text_Executive_Programming_Lan... forced/inspired him to come up with the Greek letters idea and rename to Tau-Epsilon-Chi properly typeset with the E going below the baseline, in turn represented in ASCII as TeX.)


I appreciate your sources and depth of knowledge. Thanks for the reply.

Perhaps it would also be worthwhile to go in the opposite direction: because of the design limitations of TeX, LaTeX has almost completely replaced TeX as the input language. In that sense, even TeX82 proved to be just a step on the way to a usable-by-people document system.


git is remarkably simple in its core premise. I know nothing about Paxos or TeX, though.


git is incredibly simple and does not count as a counterexample at all. [1]

[1] https://wyag.thb.lt/


Did someone post this in hopes of starting a dynamic >> static language flame war?


Anyone find some counterarguments?


The trick is to use abstraction, so complicated systems become simple systems.


“Hello leaky abstraction, my old friend,

I've come to talk with you again

Because a vision softly creeping

Left its seeds while I was sleeping…”


If the abstractions evolve over time, you're still in compliance with the law.

But if you try to think-up all the abstractions and their interfaces before you have a working system -- you are likely to find yourself in violation of Gall's Law.

Many expensive complex systems have been built this way, very few of them have worked :)



