
Paradoxes of Software Architecture (2012) - jmgtan
http://www.informit.com/articles/article.aspx?p=1963779
======
AnimalMuppet
Extreme Programming had this idea of doing "the simplest thing that could
possibly work". Don't get hung up on lots of architecture or design; make
something simple that works. But of course, that doesn't work for long - the
requirements get more sophisticated, and the simplest thing no longer works.
Then you have to refactor/redesign/rearchitect. And that's OK, _because now
you know which direction to do it in_. But don't do more than you need for
now, because you don't know which direction to do "more" in.

Design up front for reuse is, in essence, premature optimization.

~~~
pan69
I guess that's sort of true but often you know what direction you will be
heading, just not now. I see it like a game of chess, you want to be able to
at least predict the next few moves and cater your architecture towards that.
However, this is an acquired skill that takes practice.

I also think there is a difference between architecture and features where
keeping it simple as possible is more about the features and less about the
architecture, i.e. keep the features as simple as possible but have your
architecture cater for the future.

------
Joeri
Interesting to think that complexity can be a result of trying to do proper
software architecture. It occurs to me that the way software is typically
built probably approaches the problem of complexity in software architecture
from the wrong end. We try to build software systems that minimize complexity,
have predictable behavior and operate under controlled conditions. Most
software has extremely low tolerances for aberrant behavior.

Looking at biological systems though, that is the opposite of how reliable
systems are constructed. If we design from the assumption of failure and
runaway complexity, with watchdog and recovery processes, and using emergent
behavior from small discrete components to obtain large-scale results, then we
can build software components that are much better able to recover from
unexpected interaction and produce more reliable systems. It seems to me that
this is what the theory should be. I just have no idea how to do that in the
real world when building a web app, aside from recycling server processes
every now and then.
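The watchdog-and-recovery idea could be sketched in a few lines. This is an in-process sketch of the control flow only (a real watchdog would supervise a separate process); `flaky_worker` and the restart count are made up for illustration:

```python
def run_with_watchdog(task, max_restarts=3):
    """Run `task`, restarting it on failure instead of letting the crash
    propagate. Designing from the assumption of failure means the supervisor,
    not the worker, owns recovery."""
    for attempt in range(max_restarts + 1):
        try:
            return attempt  if task(attempt) is None else task(attempt)
        except Exception:
            continue  # a real system would log the failure before restarting
    raise RuntimeError(f"gave up after {max_restarts} restarts")

def flaky_worker(attempt):
    # Hypothetical worker that crashes on its first run and succeeds after that.
    if attempt == 0:
        raise RuntimeError("simulated crash")

print(run_with_watchdog(flaky_worker))  # recovers on attempt 1
```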

~~~
vog
_> If we design from the assumption of failure and runaway complexity_

This is a good pattern (one of many), but it is no silver bullet, either. If
you overdo this, you end up with overly defensive programming, where all code
is littered with catching theoretical error conditions that will never occur.
That hides the actual functionality and makes code really hard to read and to
change.

The solution pattern for this is to move the checks to the boundary of the
module, so that the code inside can make safe assumptions about the data it
receives and concentrate on the real functionality.

As the article says, whatever you do, you can overdo. Software architecture is
always about balancing and judging conflicting approaches and goals.
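That boundary-check pattern might look like this minimal Python sketch (the `parse_order` / `total_price` names and the order schema are hypothetical):

```python
def parse_order(raw: dict) -> dict:
    """Boundary of the module: validate untrusted input once, here,
    so nothing inside needs defensive checks."""
    if not isinstance(raw.get("sku"), str) or not raw["sku"]:
        raise ValueError("order needs a non-empty 'sku'")
    qty = raw.get("qty")
    if not isinstance(qty, int) or qty <= 0:
        raise ValueError("order needs a positive integer 'qty'")
    return {"sku": raw["sku"], "qty": qty}

def total_price(order: dict, unit_price: float) -> float:
    # Inside the boundary: safely assumes a valid order, no clutter.
    return order["qty"] * unit_price

order = parse_order({"sku": "A-1", "qty": 3})
print(total_price(order, 2.5))  # 7.5
```

The internal function stays readable precisely because the theoretical error conditions were caught once at the edge.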

~~~
ionrock
One thing that is challenging in managing the boundaries is that often
our languages don't prescribe enough layering. We usually have modules,
classes / functions and methods to define our boundaries. The real world
dictates we deal with networks, protocols and content types, yet our
languages, environments and communities prefer to recommend simplicity at the
language level.

I think functional languages have a healthier appreciation for layering, by
nature of requiring functions to act on data rather than having mutable data
types. I believe if we want to reduce complexity we _HAVE_ to create more
layered systems to scope the bounds of our complexity. The problem is that
many people see this as over-architecting things. With time, a design can
evolve such that 10 or 20 layers, as compared to the 3 we often see in MVC,
could be manageable with help from our languages and environments (IDE / text
editor / etc.).
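The "functions act on data" point can be shown even in Python with frozen dataclasses; the `Account` example below is made up, but it illustrates why immutability keeps layer boundaries honest:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Account:
    owner: str
    balance: int  # in cents

# Each operation is a pure function over immutable data: it returns a new
# Account instead of mutating one, so one layer can never reach down and
# corrupt state another layer is holding.
def deposit(acct: Account, amount: int) -> Account:
    return replace(acct, balance=acct.balance + amount)

def withdraw(acct: Account, amount: int) -> Account:
    if amount > acct.balance:
        raise ValueError("insufficient funds")
    return replace(acct, balance=acct.balance - amount)

a = Account("ada", 1000)
b = withdraw(deposit(a, 500), 200)
print(a.balance, b.balance)  # 1000 1300 -- the original is untouched
```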

------
ewest
The article references a paper from 1980 called "On Understanding Laws,
Evolution, and Conservation in the Large-Program Life Cycle". I just skimmed
it - incredible insights that continue to be relevant.

~~~
kabdib
Elsevier wants $36 for this article. Ha.

[https://cs.uwaterloo.ca/~a78khan/cs446/additional-material/s...](https://cs.uwaterloo.ca/~a78khan/cs446/additional-material/scribe/27-refactoring/Lehman-LawsOfSoftwareEvolution.pdf)

------
ExpiredLink
The current trend in software architecture is called "Sacrificial
Architecture" by Fowler:
[http://martinfowler.com/bliki/SacrificialArchitecture.html](http://martinfowler.com/bliki/SacrificialArchitecture.html)
.

You don't even try to "evolve" software but periodically rewrite it which
"solves" the "Paradoxes of Software Architecture" in one fell swoop.
Disposable instead of durable software.

~~~
innguest
Alan Kay has been saying that at least since the 1970s, but the likes of Joel
Spolsky have always argued against it. Now comes Martin Fowler to tell us that
Alan Kay was right, without crediting him, of course.

~~~
jalfresi
That's interesting, any links where I can find out more about Alan Kay's ideas
on this subject?

~~~
innguest
Alan Kay famously went through several iterations of rewriting the first
Smalltalks from scratch, and he is a big believer in that approach. I don't
know if it's easy to find him talking about that explicitly, but here's one
example:

[http://queue.acm.org/detail.cfm?id=1039523](http://queue.acm.org/detail.cfm?id=1039523)

> AK I had the world’s greatest group [in the Smalltalk era], and I should
> have made the world’s two greatest groups. I didn’t realize there are
> benefits to having real implementers and real users, and there are benefits
> to starting from scratch every few months.

I'd recommend his talks at OOPSLA or anything you can find on YouTube.
Especially 'The Computer Revolution Hasn't Happened Yet' is a gospel for
contrarians.

------
collyw
I have been having similar thoughts recently. Someone we work with was
demonstrating some module he had built. My boss asked if he could make it do
something. He explained he could make it do that, but didn't want to as it
would break the loose coupling he had developed. Loose coupling is nice to
have, but surely functionality should come first if it is needed.

