
The Law of Conservation of Complexity - tiago_simoes
https://medium.com/outsystems-engineering/the-law-of-conservation-of-complexity-1-simple-rule-6578a2bbfdbf#.nmnrerlgp
======
couchand
Well this article is certainly trying to demonstrate itself through a meta-
circular hiding of complexity. But it just comes across as overly simplistic.

------
PaulHoule
I never feel more productive as a programmer than when I have a day when I
delete a lot of code.

Fred Brooks made a distinction between accidental complexity and essential
complexity. The problem you are solving has a certain amount of complexity, and
based on Ashby's law of requisite variety, your system will need to be complex
enough to manage it.

If you are fighting with your build system, feeling stupid because you can't
understand monads, struggling with a big ball of mud or have your head
spinning from too many microservices, that is accidental complexity and it can
be reduced.

Many people won't do it, because it means you have to stop and think.

------
jondubois
I agree with this article, but I think that unnecessary complexity is always
under threat of being replaced by simpler solutions. I do appreciate the
message that everything is always more complex than it seems once you start
digging into it, though.

------
nickpsecurity
This post seems wrong in so many ways. For one, there are different goals for
complexity. Making everyone be an expert machinist to brew coffee vs pressing a
button is about simplification. Then, we look to see what's in the box to make
that happen. I'd be surprised if my Mr Coffee[maker] I got for $30 with a few
buttons, circuits, and components is that much more complex than the first
mechanical coffeemaker. It's quite simple in design. Some things got simpler
over time while a lot increased in complexity.

The next statement is about legacy IT. Not legacy IT and mobile, but legacy IT.
It's systems whose complexity increased for social and economic reasons,
beneficial to who knows whom by now, and that massively increased complexity for
developers and often users too. Was it inherent in solving the problem? No.
Not at all. There's a small segment of people that regularly post stuff here
that's simpler, higher-quality, etc. than what's common. When correctness &
reasonable complexity are design goals, one gets software that's way
simpler than it otherwise would be. The extremes of that are probably Niklaus
Wirth's Oberon and Chuck Moore's Forth. Wirth's take on it is below, but
manageable complexity is something many small players practice. The market
doesn't reward it very much, but it's doable.

[https://cr.yp.to/bib/1995/wirth.pdf](https://cr.yp.to/bib/1995/wirth.pdf)

[https://news.ycombinator.com/item?id=9733520](https://news.ycombinator.com/item?id=9733520)

Another one on Wirth that illustrates the Web counterpoint is the Juice
project. JavaScript and Java applets were the mainstream approaches to
increasing the power of web browsers. Complex, hard to define, relatively slow
to interpret/compile, and slow at runtime. Juice applets were based on Wirth's
Oberon. It's memory-safe, compiles lightning-fast, and runs pretty fast. To
deal with dial-up while maintaining type-checks, they compiled the applets to
compressed ASTs that preserved type information, which the browser finished
compiling into machine code after checking. It went nowhere because of inertia
and the market, but it was very simple, safer, and faster.
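
To make the idea concrete, here's a toy sketch of that pipeline (Python, with
made-up names and format, nothing like Juice's actual encoding): the publisher
serializes a small typed AST and compresses it; the browser decompresses it,
re-checks the type annotations, and only then generates code.

    # Toy version of the idea: ship a compressed, typed AST instead of
    # source text. Format and names are made up for illustration.
    import json, zlib

    # A tiny "typed AST": every node carries the type the compiler inferred.
    ast = {
        "op": "add", "type": "int",
        "left":  {"op": "const", "type": "int", "value": 2},
        "right": {"op": "const", "type": "int", "value": 40},
    }

    # Publisher side: serialize and compress before sending over dial-up.
    wire = zlib.compress(json.dumps(ast).encode())

    # Browser side: decompress, re-verify the types, then generate code.
    def check(node):
        if node["op"] == "const":
            assert node["type"] == "int" and isinstance(node["value"], int)
        elif node["op"] == "add":
            assert node["type"] == "int"
            check(node["left"]); check(node["right"])
        else:
            raise ValueError("unknown op")

    def codegen(node):                   # stand-in for native code generation
        if node["op"] == "const":
            return lambda: node["value"]
        left, right = codegen(node["left"]), codegen(node["right"])
        return lambda: left() + right()

    received = json.loads(zlib.decompress(wire))
    check(received)                      # reject ill-typed applets up front
    print(codegen(received)())           # -> 42

The point is that the wire format is already structured and typed, so the
receiving side's job is verification plus fast code generation rather than
parsing and full type inference.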

"As you can see, complexity is constant."

No, it's probably not. It can be reduced in many cases by changing how you
express your ideas, not to mention simplifying the ideas themselves. Then,
just because it moved doesn't mean overall complexity isn't reduced. Putting
complexity in one space to benefit countless users is much less complexity
than trying to teach countless users to all do the same complex thing, which
must integrate with your backend, which must anticipate all the ways it will be
abused. So, the complexity of each aspect of the problem and reuse of
solutions must be considered. It's why high-assurance markets for both
security and reliability have focused on putting tons of effort into reusable
components they can adapt to new use-cases. Same level of complexity in the
component, followed by an extension or integration cost. Much less labor and
money than clean-slating the whole thing.

"For example, think about the jump from Assembly to C; complexity moved to the
compiler. Consider how we went from unmanaged languages to garbage collection:
complexity moved to the runtime."

A simplistic look. Instead, the nature of the problem changed with the
solutions proposed. Some were pretty simple. The garbage collection one is a
real laugh, as getting one garbage collector right is way easier than getting
memory safety right in all programs ever written on that platform. The latter
is seemingly boundless complexity. Likewise, even the CompCert compiler (design
& ML code, not the proof) is probably simpler than most applications if you
wrote them in assembly language. I can only imagine trying to do Excel that
way. Funny thing is I used to use it as an example but later found out they had
their own compiler. They found it easier to maintain a whole compiler for
themselves than to keep up with the features or breaks in other compilers. It
was work but actually reduced the complexity of their problem. Funny stuff.
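
To show how bounded that one component is, here's a toy mark-and-sweep
collector (a sketch in Python, not a production design): the entire trick fits
in a few dozen lines, and every program on the platform reuses it instead of
re-solving memory safety by hand.

    # Toy mark-and-sweep collector. The point: this complexity lives in one
    # reusable component, not in every application written on the platform.
    class Obj:
        def __init__(self):
            self.refs = []          # outgoing references to other objects
            self.marked = False

    class Heap:
        def __init__(self):
            self.objects = []       # everything ever allocated
            self.roots = []         # objects the program can reach directly

        def alloc(self):
            o = Obj()
            self.objects.append(o)
            return o

        def collect(self):
            # Mark: everything reachable from the roots survives.
            stack = list(self.roots)
            while stack:
                o = stack.pop()
                if not o.marked:
                    o.marked = True
                    stack.extend(o.refs)
            # Sweep: everything unmarked is garbage.
            self.objects = [o for o in self.objects if o.marked]
            for o in self.objects:
                o.marked = False

    heap = Heap()
    a, b, c = heap.alloc(), heap.alloc(), heap.alloc()
    heap.roots.append(a)
    a.refs.append(b)                # c is unreachable garbage
    heap.collect()
    print(len(heap.objects))        # -> 2

That's the reuse argument in miniature: the complexity sits in one audited
place instead of being duplicated, imperfectly, in every application.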

"Then there’s the evolution from waterfall to agile, distributing complexity
over time."

There was an evolution from ad-hoc practices to iterative stuff and something
like Waterfall, depending on the shop. Then there were things like Cleanroom,
RAD, and Spiral that were more iterative. Then Agile showed up with extra stuff
on top of it, with a mix of simplification and extra complexity. Cleanroom and
Spiral kept things simple, with Cleanroom forcing basic primitives used in
step-wise refinement with human verification of code and usage-based testing.
Very low defect rates on the first try for most teams while most other methods
were failing. Cleanroom was actually _simpler_ than most methodologies or
tooling of the time but more complex than throwing code together.

"Every time we took these steps, we were skeptical and afraid because we felt
we were losing control."

Many were. Most of it had nothing to do with complexity. Generally, it was
managers forcing something on developers for social and economic reasons.
Developers kept pushing stuff they thought would make things better, with a mix
of evidence and ideology. The main drivers for almost everything.

"As promised, here’s the simple rule for deciding how to deal with complexity.
Focus on where you can innovate. As for everything else? Just let it go."

Prior articles on HN suggested looking at each component's complexity instead.
You consider using it only if it works, is well-documented, and straightforward
enough to maintain yourself if they abandon it or make changes you don't like.
Otherwise, you ignore it. It's often that simple. The hard stuff you take time
to understand, thoroughly test, document, and use within the bounds of what
you've come to understand works. Also, if it's a 3rd party, I recommend having
multiple options with portable code so you're ready to leave at any time with
minimal transition cost. I've seen too many burned by trusting 3rd parties too
much.

~~~
AstralStorm
I like the rebuttal but would disagree that addressing memory safety without
GC is harder than writing a correct garbage collector that addresses your
specific performance needs. The former is solved by formal proof, and the
latter as well - though there is less to prove, it requires much more involved
proofs for a nontrivial and especially concurrent application. Stories about
badly written GCs exist in quite a number on the internet. Plus, action at a
distance by the GC makes debugging harder in general. Rust's way with static
checking is better; writing a good correctness proof is better still.

~~~
nickpsecurity
Appreciate the reply. Yeah, I need to make extra clear that with my GC claim I
was implying the usual divide between things done in unsafe languages like C
and languages with a GC. Getting all the code on a platform memory-safe
consistently in an unsafe language is much harder than putting in solid effort
on a GC. The counterpoint you use works both ways on that: one team published
a paper on using Rust to build a GC with its safety boost. Full correctness is
much harder, but that's true for programs in general. There are at least some
GCs that were verified already, and separation logic alone could probably
handle at least pointer safety. Microsoft's VCC tool makes that much easier
than in the past, although still much harder than just Rust for the average
developer.

So, GCs do reduce difficulty. Getting a GC right in a language like Rust is
easier than getting arbitrary apps through the borrow checker or getting
arbitrary apps in an unsafe language done safely. There's a reduction there in
most cases that aren't toy problems, CRUD apps, etc. on the app side. The
complexity isn't equal.

------
threepipeproblm
Specious. We are told that alternative designs always have the same
complexity. When one device seems simpler than another, it's really "hiding"
that complexity somewhere.

Probably not the best article for an audience of people who largely understand
computational complexity analysis.

------
canjobear
Doesn't increasing entropy imply increasing complexity?

~~~
pka
To my very limited understanding, no.

Basic example: pouring cream into a cup of coffee. At first, low entropy, low
complexity (cream at top, coffee at bottom). Then higher entropy, high
complexity (cream and coffee mix according to the laws of physics, different
concentrations at different locations in the cup, streams and such). At last,
highest entropy but again low complexity (everything is homogeneous).
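
A crude way to see it numerically (a sketch, not a physics result; the
"apparent complexity" proxy - gzip size of a coarse-grained snapshot - and all
the parameters are made up): simulate cream diffusing into coffee in 1D and
watch the entropy climb monotonically while the complexity proxy rises and
then falls back down.

    # Crude sketch: 1D "cream into coffee" diffusion. Entropy climbs the whole
    # time; the gzip-size proxy for apparent complexity rises, then falls.
    # Parameters and the proxy are made up; this is illustrative, not physics.
    import math, zlib

    N, D, STEPS = 64, 0.4, 6000
    c = [1.0] * (N // 2) + [0.0] * (N // 2)    # cream on top, coffee below

    def mixing_entropy(field):
        # Per-cell -(p ln p + (1-p) ln(1-p)); zero for pure cream or coffee.
        s = 0.0
        for p in field:
            for q in (p, 1.0 - p):
                if q > 0.0:
                    s -= q * math.log(q)
        return s

    def apparent_complexity(field):
        # Coarse-grain to 16 levels and use compressed size as a proxy.
        coarse = bytes(int(p * 15.999) for p in field)
        return len(zlib.compress(coarse))

    for t in range(STEPS + 1):
        if t % 1000 == 0:
            print(t, round(mixing_entropy(c), 1), apparent_complexity(c))
        nxt = c[:]
        for i in range(1, N - 1):              # explicit diffusion step
            nxt[i] = c[i] + D * (c[i - 1] - 2 * c[i] + c[i + 1])
        nxt[0] = c[0] + D * (c[1] - c[0])      # closed cup: no flux at ends
        nxt[-1] = c[-1] + D * (c[-2] - c[-1])
        c = nxt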

