
Software Complexity Is Killing Us - Tenhundfeld
https://www.simplethread.com/software-complexity-killing-us/
======
whack
To me, this was the money quote:

> _You might say, “But event sourcing is so elegant! Having a SPA on top of
> microservices is so clean!” Sure, it can be, but not when you’re the person
> writing all ten microservices. It is that kind of additional complexity that
> is often so unnecessary._

It's very easy to be wasteful when:

1. It's not your time that's being wasted

2. It's not your money that's being wasted

One of the best learning experiences I had was building and launching an
entire application API, purely by myself, in my free time while juggling a
full-time job. It really helps you accurately evaluate the cost-benefit
tradeoffs of various flavor-of-the-month technologies/methodologies when
you're the one who has to shoulder every one of those costs.

I've had far too many team-leads whose assessments of these cost-benefit
tradeoffs are intensely skewed by the fact that it's other people who have to
work on those things, and it's other people's money that is being burnt while
work on these things, and it's other people's money that is being burnt in the
process. We think these are technical debates, but really,
it's an organizational problem. Under the right organizational structure and
incentives, people are exceedingly bright at optimizing for their personal
success. The principal-agent problem is a crippling handicap that hobbles
every medium/large business today. The day we find better ways to solve this
problem is going to be a transformational day in the history of our
civilization.

[https://en.wikipedia.org/wiki/Principal–agent_problem](https://en.wikipedia.org/wiki/Principal–agent_problem)

~~~
joaofs
Couldn't agree more. The bandwagon effect is something also worth mentioning.
Tech stacks and tools these days are branded almost as consumer products and
the outcome is poorly designed systems.

~~~
lotyrin
The net present value to a developer of dooming their current employer's
current project by choosing an unsuitable technology which they could put on
their resume to negotiate future salary (at a position they'll be moving to
in, on average, months not years) is going to swamp betting on something
boring but stable, completing the project, staying put and crossing their
fingers for raises that outpace inflation.

~~~
majormajor
The root problem here is our hiring and interviewing system.

Imagine if interviewing rewarded "look, I know all this stuff about writing
simple functions in Stack _X_" less than "yes, the project shipped, and beat
its deadline, because we decided to use library _Y_ that the devs already knew,
even with the acknowledged pain points, instead of bringing a new technology
in just for this project. We have some less mission-critical areas where we
decided to experiment with new tech, but needed something predictable here."

Evaluating a senior dev based on junior-dev-esque implementation tasks instead
of "does this person know how to avoid over-complicating our job?" is
ludicrous. As an industry we interview to find "clever" people rather than
"wise" people.

~~~
user5994461
Outside of web development, you can get a job fine without using this year's
framework. However, you still have to leave after a year to get more than
inflation.

~~~
squeeeeeeeeeee
> you still have to leave after a year to get more than inflation

This hits home. I've been in software development for quite some time (10
years), I've been with 4 employers so far (full time) and pretty much all my
significant raises have come as a result of me looking for a better paying
position and then leaving. I may have been unlucky (I also haven't worked for a
really large company yet), but I feel like the "career development"
opportunities within software engineering companies are way below the level
they should be.

~~~
reassembled
I think this has been the way of the tech industry, and particularly SV, for a
long time now. My uncle, now passed away, started at Atari in the early 80s
and hopped companies every three years until his final job with Nvidia in a
fairly senior position. He stayed with Nvidia until his death a little over a
year ago.

Each time he switched it up he grew his salary exponentially.

~~~
tincholio
Exponential growth, every three years, since the 80s... sounds a bit
hyperbolic, doesn't it? Just how much did he make at Nvidia??

~~~
TeMPOraL
Exponential growth means just constant growth rate. Doesn't mean it's a big
growth rate. 2% a year will double the starting salary in... 35 years.

;).

~~~
brango
No it doesn't. It means an exponent is present.

[https://en.wikipedia.org/wiki/Exponential_growth](https://en.wikipedia.org/wiki/Exponential_growth)

~~~
jamessb
Constant _growth rate_ gives rise to a quantity that is an exponential
function of time [not to be confused with a constant _rate of increase_,
which gives rise to a linear function of time].

As that Wikipedia article says, the continuous-time equation for exponential
growth, x(t) = x(0) e^(kt), arises as the solution to the ODE x'(t) = kx,
where k is the constant growth rate.

Similarly, in discrete-time, exponential growth follows the equation x_t = x_0
(1+r)^t, where r is the constant growth rate.

~~~
brango
I stand corrected.

------
blunte
Blah _bullshit_ blah.

The reason software is so complex now is that our expectations as users are
orders of magnitude higher than they were "in the good old days". I'm old
enough to remember the good old days, so I can speak with a little authority here.

The Pareto principle may not be exactly accurate, but it describes many
situations quite well. And in software, it fits very well. 80% of the
requirements take about 20% of the code (and complexity). It's the edge and
special cases that add the bulk of the work.

When a program just did one useful thing, we humans would do our human thing
and integrate several different programs with our manual effort. If an edge
case appeared, we didn't even identify it as an edge case - we're built for
handling edge cases! We just made the minor _human_ judgement and fixed the
data and pushed it into the next program.

Software is complex because we're trying to make it replace more human
activity. And beyond the core functionality, human activity is all about
applying human judgement. Anyone familiar with people training people on
a particular task, particularly the quaint concept of apprenticeship, knows it
involves teaching and showing and mentoring until a student has enough
knowledge AND wisdom to get the job done.

So to boil it down, modern software is primarily complex because it attempts
to replicate human judgement and wisdom.

That software complexity is not killing us. It may be a pain in the ass, but
if anyone can remember what it was like before we had these tools, I would
dare to say that it's still a whole lot better than before computers.

~~~
ravenstine
I disagree. Users, in reality, don't care about all the crap that stakeholders
and some developers care about. They don't care about Material design, fancy
animations, beautiful buttons, "elegant" code, a ton of niche features, mobile
apps (no, I'm not kidding), your annoying push notifications, your fancy menu
bars that you also fix to the top of the screen for whatever reason, your
autoplay videos, your little "delightful" asides that interrupt the content,
etc. They want something that _works_ without much effort or worry that the
software will fail, and that doesn't necessarily have to do with how fancy or
feature-rich an app is.

Craigslist and the Quartz app are just a few examples of what I'm talking
about. Nobody except maybe some stakeholders ever asked for those things to be
more fancy than they are. They are simple and they work. The end.

Unfortunately, we are in an era where everyone wants to be like Google, or at
least thinks they need to. I do not believe that the rest of the world would
be worse off if they stopped spending time making "production ready" software
that visually competes with the Big 5.

Software becomes complex because we allow it to become complex. Just because
you can add another feature doesn't mean you get more value for a finite
amount of developer pay and time. Companies often seem blind to the ongoing
support costs that go up each time the solution is to add a new feature or
software package for a small segment of the customer base or the company
itself.

Whoever is in charge should acquiesce less often. But everyone wants to be a
wizard.

~~~
TheCoelacanth
> They don't care about Material design, fancy animations, beautiful buttons

If they don't care about those things, they aren't doing a very good job of
voting with their wallets. It's much easier to sell a product that is pretty.

> "elegant" code

That isn't for users' benefit. The people paying to maintain the software do
care how much it costs to maintain the software. Elegant code is an attempt to
make it cheaper to maintain.

> a ton of niche features

People buying software do care about that, and they are often a distinct group
from users.

> mobile apps, your annoying push notifications, your fancy menu bars that you
> also fix to the top of the screen for whatever reason, your autoplay videos,
> your little "delightful" asides that interrupt the content

None of those are for users' benefit. Those are for the benefit of customers
(advertisers).

So perhaps saying that _users_ expect more isn't telling the whole story.
However _stakeholders_ certainly expect more.

~~~
TeMPOraL
> _they aren't doing a very good job of voting with their wallets_

I'm tired of repeating this, but here it is:

_Voting with your wallet. Is. Bullshit._ Consumers don't pick from the space
of all possible products; they choose from what's available on the market.
Especially in software, it's producers who collectively determine trends. If
they all fall into a fad, customers have no way of saying no.

~~~
geodel
Exactly right. Just like producers have decided that all upcoming editors,
chat apps, etc. have to use Electron. Then they all talk as if "overwhelmingly,
users have spoken that they want bloated, slow text editors with a million
plugins for next-gen development."

IMO it's exactly like the military-industrial complex, where increasingly
complicated software will need ever newer and more complicated hardware
because "customers are demanding it".

------
nulagrithom
The company I'm at now has been building the "most simple", "low-code"
solutions they can find for 15 years.

We're constantly talking about the Pareto principle: "80% of software can be
written in 20% of the time." And we optimize for building that 20% of the code
in the simplest, fastest way possible.

The problem is the other 80% of the code, the part that accounts for 20% of
the features, ends up being necessary before full adoption of the software
takes place. 80% working simply isn't good enough. But since we've optimized
for that easy 20% using low-code solutions, the other 80% of our code ends up
being twice as painful. We end up deciding it's too expensive to continue and
attempt to force use of the 80%-finished software as-is. Historically, this
hasn't turned out well for us, and users end up compensating via spreadsheets
or even _paper_.

It takes a couple years for this whole process to play out. The project is
often declared a success since 80% of it got done and deployed so quickly. But
there's always those last nagging bits that, upon further inspection, have led
to the app/feature/whatever being abandoned later on.

> It is that kind of thought process and default overhead that is leading
> companies to conclude that software development is just too expensive.

It _is_ too expensive for them. If you're building a house and get sticker
shock at the cost of building the foundation, maybe you're in too deep.

Please don't skimp on the foundation.

~~~
marktangotango
Hundreds of comments before someone addresses this part of the article,
shameful. The other aspect to 'low code' solutions is that, as software
developers, we _have to_ chase the next hot thing, for employability and career
development. No one wants to be stuck knowing only ColdFusion when that goes
away. I'd make the argument that web frameworks like Rails and Django are the
4GLs of today, and 4GLs are roughly analogous to 'low code', I believe.

------
whistlerbrk
I don't get this article, the author identifies the issue immediately:

> in the process, we often lose sight of the business problems being solved

And then goes on tangents as to why that is happening, finally arriving at what
amounts to "developers don't like to use visual UI tools".

The original cause identified is right, people lose sight of business goals,
but why? Because they have poor project management and they are in the
trenches all day long. They want to do right by the code base, so they
abstract things which don't need to be abstracted. The failure here is one of
trade off analysis and again losing sight of what is important, shipping a
usable product which meets business goals.

~~~
emodendroket
I mean overall I feel like this article is going in an unconvincing circle
because it is trying to solve a problem that doesn't have a solution. "We want
software to be faster to market than ever, have higher availability than ever,
have more complex feature sets than ever, but also have higher quality and
accommodate constantly-changing requirements. Also, it needs to be secured to
an extent that simply wasn't necessary pre-Internet. Oh, and we'd like to keep
wages down as much as possible as well."

------
forg0t_username
The irony is palpable here... The website loads slowly, elements move around
the page during the first load, then the fonts blink around and go from system
font to some custom font. It loads data from 15 different domains, and four of
those are only for tracking.

~~~
michaelmcmillan
True, but hypocrisy does not have much bearing on the validity of the
arguments.

2 + 2 is 4 regardless of whether Hitler or Feynman claims it.

~~~
alien_at_work
That's not entirely true and your analogy is not comparable. Person A makes a
claim that X is good and everyone should be doing it. Person A does the
opposite of X. That doesn't prove one way or the other the truth of the
arguments but it does call into question what the goal is with the arguments.
There could be many reasons for such a discrepancy but a _common_ one is
manipulation.

As an example, if a cryptocurrency executive says their coin is fantastic and
everyone should invest in it, but sells their entire holding, does one not
question the goals of such a statement? Note: this example is also not
directly applicable to this article (at least I hope it isn't), but it's closer
than your example was.

~~~
michaelmcmillan
I will concede that it is not completely analogous. But you are talking about
something else, namely the value of the intent behind a claim.

As I argued, intent has no bearing on the truth value of the claim, but it
_can_ – as you point out – be correlated with the truth value. I stand by the
claim that dismissing an argument based on intent is fallacious. You have to
honestly engage with an argument – regardless of the messenger – to assess its
truth value.

~~~
alien_at_work
I don't disagree but I also don't see where that was suggested. The OP just
pointed out the radical difference between what was preached and what was
practiced. This could be seen as a dismissal but it could just as easily be
seen as a call to action or just pointing out something amusing.

~~~
michaelmcmillan
Sorry, I did not mean to straw man you. Re-reading your comment I realize that
you didn't disagree, but were rather pointing out something else.

------
sloxy
I also think it should be noted that compromise between designers, developers,
and business domain experts (seemingly now called 'product managers') could
vastly simplify all this.

The more empowered a developer is to over-ride a designer/biz person's
preference[1], the better it is for all.

Way too often, I see developers just accept what the product manager/designer
specifies with zero pushback when it strays outside what could be considered
the 'norm' for whatever tech stack is in play.

[1] I use the word preference because that is all it is - their preference.
They don't have a magic ball.

Sidenote: I'm also seeing a rise in UX suddenly becoming owned by the UI &
product management team, which is bizarre. You have designers (whether from
print backgrounds or whatever) making poor (& I mean piss-poor) UX decisions
because they don't have the exposure to, or understanding of, web platforms.
Case in point: if your UX person uses a calendar widget to enter a date of
birth, they should no longer be allowed to do UX.

/rant

~~~
noxecanexx
Honest question: what's wrong with using a calendar widget for DoB? (Engineer
here.)

~~~
verletx64
You weren't born on a range of days.

You're unlikely to change your mind about when you were born.

You really only ever need to provide three well defined numbers.

So, why display a calendar?

~~~
PeterisP
Well, one issue is that different date formats are used worldwide, and
there's a risk that the website will accidentally swap the month and day if I
simply enter three numbers without thinking about them. mm-dd-yy and dd-mm-yy
are both in common use.
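
The ambiguity is easy to reproduce; a minimal Python sketch (the sample date
string is made up):

```python
from datetime import datetime

raw = "04-05-90"  # April 5th, or May 4th?

us = datetime.strptime(raw, "%m-%d-%y")  # read as mm-dd-yy
eu = datetime.strptime(raw, "%d-%m-%y")  # read as dd-mm-yy

# Both parses succeed silently, a month apart:
print(us.date())  # 1990-04-05
print(eu.date())  # 1990-05-04
```

Nothing in the input tells the parser which reading the user intended, which
is exactly why free-form date fields without an explicit format label are
risky.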

~~~
Blaiz0r
There are also different calendar formats.

[https://msdn.microsoft.com/en-us/library/cc194816.aspx](https://msdn.microsoft.com/en-us/library/cc194816.aspx)

------
zitterbewegung
I totally agree with the article. I would like to add that it is also the fact
that as a culture we stress features over bugs.

Everyone says that security holes, logic bugs, and invalid data are a problem.
Yet I haven't worked for anyone who prioritizes security and bug fixing over
features. As we spend more and more time on a product, with deadlines that
don't make sense, we basically end up with a whole pile of code that can't be
tested in any amount of time we're given.

~~~
onion2k
_we stress features over bugs_

I don't think we do. We stress the needs of the user over _everything_ (and
rightly so in my opinion). Often that means a new feature is more important
than a lot of minor bug fixes, but equally a critical results-in-data-loss bug
or an important security patch will take priority over any new features. It's
just that those things are actually quite rare in production code, so it feels
like features always take precedence.

~~~
digi_owl
> We stress the needs of the user over everything

Imagined needs of an imagined user perhaps. And more and more that imagined
user is a drooling idiot best left in a padded cell it seems...

~~~
onion2k
There's no need to imagine what users need. There are plenty of great products
to capture data about what they do in your app, and you can always talk to
them as well.

As for them being "drooling idiots", I find that if a user is getting
something seriously wrong it's almost always because a developer or a designer
built something that's failing at being easy to use rather than the user being
at fault.

------
vinceguidry
I stopped being concerned about this specifically a couple of years ago.
Complex software does not need to be refactored; it needs to be rewritten. The
important thing the legacy software does is firm up the domain. With the
concepts and language set, rewriting to encapsulate new semantics is
relatively easy. It's when you have no idea what the domain is going to look
like that it's hard, and that makes for complex, quickly-becoming-legacy
code.

~~~
BjoernKW
On the contrary, software should rarely ever be rewritten from scratch.

Rewriting existing software both is incredibly risky from a business
perspective and potentially jeopardises the hard-won knowledge from years of
working on and working with the software (also see Joel Spolsky's take on
this: [https://www.joelonsoftware.com/2000/04/06/things-you-should-never-do-part-i/](https://www.joelonsoftware.com/2000/04/06/things-you-should-never-do-part-i/)).

As tedious as it may be sometimes, refactoring software and gradually
replacing parts after having written the appropriate tests that make sure the
software still works as expected almost always is the better long-term
approach to reducing complexity and improving software quality.

~~~
vinceguidry
I'd like to see that happen just once in my career.

~~~
user5994461
Seen it happen more than once. The new guys throw away the old thing because
it's a mess and it's too hard to understand. Two years later, the new new guys
throw away the new thing because it's a mess and it's too hard to understand.

The company's website is ever new.

~~~
vinceguidry
No, I meant the scenario where a company listens to the engineers and lets
them put off business goals to spend weeks to months every so often
refactoring, ending up with a modern, easily-maintained system instead of an
arcane mess.

My solution to the problem, when it was my problem to deal with, was LITFA
(leave it the fuck alone) until business priorities allowed replacing it with a
solution managed by other people. Obviously, the reason it turned legacy
in the first place was that insufficient resources were available for proper
maintenance. Well, if maintenance was too hard, rewriting with your maintenance
staff certainly won't drive a better outcome.

The only way I see either refactoring or rewriting working is if LITFA
introduces an existential risk to the company, and the company really, truly,
actually does have enough resources on hand to do a proper job of it. But I
don't believe anything short of a bet-the-company situation is going to really
light a fire under people's asses.

~~~
watwut
I have seen refactorings, but they didn't involve putting off business goals.
They involved one part of the team refactoring a part of the codebase while the
other part of the team produced new features in another part of the code, so
that at least some business goals were met.

You also do it piece by piece, because a half-done refactoring is worse than
none.

One potential disadvantage of LITFA is that you miss out on learning. Bad
legacy code doesn't happen just because of no maintenance, but more often
because the original architecture is not suitable for all the requirements and
so requires hacks. When you (or the team) write the same thing a second time,
you write it differently and better than the first time.

There is also the question of who should do the refactoring: it should be
someone who knows the requirements and has good judgement (e.g. not a new
junior you barely know).

------
GVIrish
I think there are a couple of different problems driving overly complex
software.

1. Cargo-culting. The article talks about this, but one thing I've seen
consistently over the years is that something new comes along, and people
latch onto it like it's the solution for every problem, everywhere and you're
stupid or outdated for not thinking the same. It's the whole, "RDBMS's are
dead!" thing.

2. Shiny-new-thingism. This is slightly different than cargo-culting in that
people don't necessarily think the new tech is the ONLY way to do things, it's
just that people want to use something new because it's hot. Then people get
deeper into it and realize that there are some unknown unknowns that bite them
in the ass. This becomes more likely the more abstracted things are, and/or
the more moving parts there are. It's not that old technology doesn't have
that problem too, but there are more charred carcasses of other people's
failures to learn from.

The other thing is, as a developer there is a strong urge to try to use new
tech because 'hot tech' = 'higher pay'.

3. The business requirements were not well understood when the project was
started (and may still not be well understood). Sometimes you're told to start
building X, but you find out over time you really should have been building Y,
while the customer asks for Z (no wait, W!) and you never get the time to pay
off that technical debt. This is probably the hardest problem to address and
is a large reason agile caught on.

4. Bad software architecture. Maybe you inherit software that has a crapload
of technical debt because of point #3, or maybe it's because the previous team
made some really bad design decisions from the get-go due to
incompetence/inexperience. So then your choice becomes: do you tear it all
down and start over, or do you limp along and add to the skyscraper made of
popsicle sticks and baling wire? Oh, and there are no requirements!

So you're left performing software archaeology like Indiana Jones, trying to
figure out what it was that a previous civilization was trying to do. Then you
have the horror-movie moment where you realize, "This... never worked, it was
wrong all along..."

~~~
Tenhundfeld
> The other thing is, as a developer there is a strong urge to try to use new
> tech because 'hot tech' = 'higher pay'.

That's a great point. We developers do often engage in "resume driven
development" – often not intentionally, just subconsciously observing that
some tech is hot and there's a lot of demand for it. So we look for excuses to
use it, which is of course backwards.

------
titzer
The only way out of this box, IMHO, is proper layering of software and proper
interfaces. Human brains have a limited capacity to manage complexity. There
is only so much you can fit into your head at one time. (I dunno, maybe it's
just me, but I've been around long enough to have hit this limit repeatedly,
and to have started forgetting code I wrote in the past.)

To combat this, humans require abstraction. A proper abstraction fully
encapsulates the implementation details and provides an interface that does
not require knowledge of the details behind the abstraction.

We have too many leaky abstractions: bugs, performance issues, brittle
software broken by updates, language features that break in complex ways when
users make mistakes (hello, C/C++!). A leaky abstraction lets its details spill
out through the back door. These leaks cause cognitive load and make it
impossible to layer software properly.

The only way to build a big system is proper abstractions.

Until Spectre and Meltdown, ISAs were good abstraction layers.

Today, still, my personal opinion is that the kernel system call layer is one
of the strongest abstraction boundaries that we managed to enforce. Above this
layer, it's all a mess. Below this layer, it's all a mess. (To a first
approximation).

We aren't getting anywhere until we get the layers right.

------
Meai
I think the culprit is the programming languages; they offer all these
features that allow complexity. If you allow it, people use it. It sounds like
an oversimplification of the problem, but I honestly think that's the entire
problem summed up. The second layer is all the legacy layers that we build on
top of. Nothing can be done quickly about that, but as soon as you make a
better language, people seem to almost instinctively jump at fixing the legacy
layers too. People write an entire OS in Rust, and Rust isn't even simple. We
just need a simple, fast, and safe replacement for C, and I believe all the
messed-up parts of software would get fixed very fast.

~~~
GlennS
I'd like to offer an empirical counter-point: Java.

Java, while a good quality language overall, is deliberately missing some
important features like function passing (possible these days...). This is an
effort to make it easy to learn and to encourage people to build simple
software.

Its library ecosystem (with a few superb exceptions) is the epitome of over
complex mediocrity. Java defines "Enterprise Bullshit".

Why has this happened? I think it's the missing features. You get a sort of
Jenga tower where the foundations aren't quite right, so you build some other
abstraction on top of them. But then that's not quite right either, so you
build the next layer back in a different direction.

More powerful languages mean you need fewer layers. And it's easier to
rebuild those layers if they turn out to be wrong.

~~~
watwut
Java has a pretty good and very rich ecosystem, including a huge open source
community. And it is not that difficult. Even the current "Enterprise" part
of that ecosystem is not that difficult. As for the complexity of some
frameworks, that was:

a.) A learning step with new tools available.

b.) The result of people attempting much more ambitious projects than before,
with zero experience.

People have always loved to hate Java. Nevertheless, it performs in
environments where attempting anything else invariably fails.
~~~
lmm
> As for the complexity of some frameworks, that was:

> a.) A learning step with new tools available.

> b.) The result of people attempting much more ambitious projects than
> before, with zero experience.

Disagree. A lot of Java frameworks are genuinely complex because the language
is too simplistic to implement them, so each framework ends up having to build
what's effectively its own extension to the language. Even well-designed
frameworks are like this. Jackson's module registry and annotation-mixin
interfaces aren't there because Jackson's developers are inexperienced or
overambitious, they're there because to implement a JSON serializer you need
that kind of functionality and there's no sane way to implement it in the
language proper. Likewise Spring's magical autowiring, or its bytecode-
manipulation-generated magical proxies that are used to implement things like
@Transactional. Likewise Jersey's resolution from annotation classes to
.service files with magical filenames to find out how to serialize a given
entity. And so on. The frameworks are complex because the problems they're
solving are complex; simplifying the language beyond the point where it can
express the complex problem doesn't make the problem any simpler.

> People always loved to hate Java. Nevertheless, it performs in environments
> where attempting to something else invariably fails.

I've had a great career replacing Java with Scala, where the language is
powerful enough that you don't need any magical frameworks, only plain old
libraries with plain old datatypes and functions. It's much nicer to work
with.

~~~
peoplewindow
Spring's auto-wiring isn't like that because of language limitations. You
don't need DI to write Java. It's like that because some people love writing
reflective frameworks. It's easier than modifying compilers but harder than
just doing it the "obvious" way, so it satisfies a lot of programmers'
I-want-to-stretch-myself itch.

------
iamleppert
Lack of architecture is to blame, and poor design. I'm sorry to say after
having worked in software professionally for over 10 years, that most people
just suck at any kind of reasonable design or architecture. You can attribute
it to lack of experience, incompetence, or lack of care, whatever, but most
people are just bad.

The only metric an engineer should be judged by is: do they make everyone
around them better? How quickly can a new developer come up to speed in their
codebase and make meaningful contributions? It's okay to start out slow, but
in my experience the #1 sign of a bad engineer is someone who gets slower over
time. If the costs to implement a feature are increasing, not decreasing, you
need to take a hard look at your team and what kind of hot mess people you are
really working with. If you're in a position of power, that might involve
letting people go, and worse, if you're just an engineer, that might mean
finding a new job yourself.

~~~
carlmr
Similar experience. Although I think the biggest issue isn't the initial
design, but all the requirements that slowly add cruft. You need a team that
religiously follows the [Boy Scout rule](https://medium.com/@biratkirat/step-8-the-boy-scout-rule-robert-c-martin-uncle-bob-9ac839778385),
and you'd have almost no issues even without huge refactorings or redoing
everything (where you often make new mistakes).

I've been in teams like that, and in teams that don't care. The Boy Scouts
performed consistently better. And before anyone says anything, Girl Scouts
can do the same ;).

------
Yhippa
Shout out to my homies in RVA.

There are so many complications to this issue. I imagine a long time ago
things were less complex and more manageable. As you build layer upon layer of
new tooling on top of the past, things were naturally going to get complex and
unmanageable. It's almost as if, to have avoided this, we would have needed
some programming paradigm that stands the test of time and is flexible enough
to meet today's needs. Which is probably impossible.

Unless we are in the Cambrian explosion of tooling and frameworks and at some
point we will achieve a steady state. That would be really cool to see in my
lifetime. Every time I think "ah yes, this should be good enough" I am
blindsided by the new shiny.

That's also a separate topic. There's plenty of incentive to use the new shiny
if it's not the right tool for the job. Tech gets hyped by the usual engines,
CTO's buy into it, and then that's our tech stack. You need to have these new
frameworks on your resume to become employable so when you start a new project
you end up using one of them.

I don't know what the solution to this is other than the hope that there's
some general building blocks or concepts of software development that will be
easy to make work with other building blocks.

------
jondubois
Software companies these days are ridiculously inefficient.

Unfortunately, this is because the logic that goes on in peoples' heads when
it comes to selecting tech stacks often boils down to this:

1\. x newer than y.

2\. The company which created x has more money than the company which created
y.

Therefore x is better than y.

Q.E.D. Now, let's immediately refactor all our company's code to use x.

Complexity and relevance to the task are not factors at all. That's because
big software companies don't need to be efficient - They usually have a
monopoly so the money will keep coming no matter what... Complexity is a tool
that middle-managers use to create more work so that they can hire more people
and thus get more responsibilities and bigger bonuses.

If all tech decisions were made by non-technical people using the 2-point
criteria mentioned above, companies would be using exactly the same tech
stacks as they are using today.

Developers today are geniuses when it comes to using stacks in ways that they
were never intended but they are absolutely terrible when it comes to
selecting the right stacks for the job to begin with.

------
noxecanexx
One thing I have yet to see is a critical analysis of over-engineering. I
constantly see articles like this that talk about how those young developers
are looking for the newest and shiniest technology, and I think that's just a
lazy answer to the causes of over-engineering. I, for instance, have tried my
best to run away specifically from the new shiny things like Kafka, webpack,
and Mongo. But this is obviously a flawed strategy, as those technologies may
help solve my problems. One might say careful cost/benefit analysis is the
answer, but most times it's easy for me to point out how much more beneficial
the tech I am introducing is. And it could be, but it would still be
over-engineering. Plus, it's not really solving the real problem, which is
adding more incidental complexity to the solution than needed. It would be
interesting to see an article on the causes of over-engineering drawn from
real examples, so one could extract potential solutions.

~~~
peoplewindow
A large part of it is because for most programmers, it is the only way to
demonstrate growth in their own skills.

Relatively few developers get to lead projects, design truly interesting
systems and so on. Most developers, especially in bespoke business software,
end up writing large amounts of fairly un-challenging code day in day out.

But then if you just churn out the same sort of business logic for years on
end, with little access to problems that are inherently challenging, how do
you feel any advancement in your craft? For many developers it is through
over-engineering or pointless re-engineering.

Common signs: tasks that are actually trivial suddenly start using home-grown
frameworks. Obsession with vague, rarely-defined terms like "best practices",
"clean code", "modularity", etc. An insistence that practices once considered
an advance were actually a regression, like the recent trend of claiming that
exceptions are bad because they make control flow "hard to understand", which
justifies rewriting everything to use return codes; or that XML is bad because
everyone says it is, but JSON with schemas is totally not the same thing and
definitely how architectures are done Right™.

To get a perfectly simple design and implementation that is nonetheless
powerful you really need a perfect match between the time/capabilities of the
engineer and the inherent difficulty of the problem. That way the developer's
mental energies will all be spent designing something that will work, and
relatively little is left over for inventing new config file formats.

~~~
noxecanexx
The funny thing is I actually faced this problem a lot last year. Since I read
Paul Graham's essay on hackers, I decided to channel that drive for hard
problems into my side projects (although this has shifted my focus from
popular frameworks to specialized libraries).

------
space_fountain
Hopefully I'm not in the minority when I say there's another side to this too
though.

Working now at my first job, I can't tell you how annoying it is to try to
update ancient PHP with essentially no code reuse. Sure, it's simple, but it's
basically unmaintainable.

Also from the same job: I continue to work hard to convince others that using
prepared statements is worth the slight extra effort. If I hadn't made it
significantly easier, it would never get used at all and we'd see way more
string appending.
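For what it's worth, the difference is easy to demo. A minimal sketch in
Python with sqlite3 (standing in for the PHP codebase being discussed; the
table and data are made up for illustration):

```python
import sqlite3

# A throwaway in-memory database with a hypothetical users table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

# Unsafe: string appending lets attacker-controlled input rewrite the query.
name = "' OR '1'='1"
unsafe_query = "SELECT role FROM users WHERE name = '" + name + "'"
print(conn.execute(unsafe_query).fetchall())  # returns the admin row anyway

# Safe: a prepared/parameterized statement treats the input strictly as data.
safe_rows = conn.execute(
    "SELECT role FROM users WHERE name = ?", (name,)
).fetchall()
print(safe_rows)  # [] - no user is literally named "' OR '1'='1"
```

The parameterized version is also barely more typing than the concatenation,
which is the "slight extra effort" being argued for.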

~~~
onion2k
Please don't complain about old code not being written to today's standards.
There are reasons why code can be bad, and you don't know what they are.
Accept the code for what it is, improve it where you can, and learn something
about the _crap_ a developer before you had to go through when they wrote it.
Even the worst hacks often aren't the work of a bad developer.

You'll have your share of terrible projects that force you to write horrible
code without decent requirements to unreasonable deadlines for a client that
won't listen. You'll hate what you write. You'll always want to go back and
rewrite it but you'll never have the budget. You might quit your job over it.
It certainly won't be a reflection of your skill or passion for writing great
code. The "ancient PHP code" you're working with now probably isn't a
reflection of the developer who wrote it either.

~~~
space_fountain
You're definitely right. My point got a little sidetracked by my own partially
unfounded frustration. I'm just trying to say some level of abstraction is
worth it. Not always, and not in every case, but at least in some. Yes, the
code wasn't written with abstraction largely because it's old, but it would
have been better if it had been.

------
trhway
>Well, a few things. First is that experienced developers often hate these
tools. Most Serious Developers™ like to write Real Software™ with Real Code™.

"Experienced" is the key here. With most of these tools you develop 90% of the
functionality very fast, and there is no sane way to develop the remaining
10%. The hacky/ugly/unmaintainable lengths you go to to add that 10% to the
beautiful tower the tool built for the 90%... You become a very "Serious
Developer™" after doing it several times and noticing the "pattern". And it
helps that with time you are also becoming much better with the Real Code™.

~~~
commandlinefan
The common observation that experienced developers avoid (insert supposedly
productivity-enhancing tool here) is usually dismissed as some form of
machismo/old dog being unable to learn new tricks, but it ought to impress on
somebody how universal this is... and ask, maybe, why it's so universal? I
don't dislike visual basic, or GWT, or ruby on rails, or angular CLI because
I'm an old codger who doesn't need them fancy new fangled young person
fashionable toys, but because I tried them out and found that, although they
do make a pretty GUI appear on the screen a lot quicker than I can with vi,
they also fail in unexpected ways that are nearly impossible to diagnose. So
now, rather than spending a lot of time coding and a little time debugging
(and knowing exactly what I'm debugging and knowing that when I find what's
wrong, I can change it), I spend a little time coding and a lot of time trying
to guess what the hidden, internal model of the application is hung up on. All
with a skeptical boss asking me why it isn't done yet. I always give the hip
new thing a chance, and I always find that it trades off some up-front
complexity with a lot of impenetrable mystery - along with the expectation
that the "time saving" tool will actually save time rather than shuffle it
from the beginning to the end.

------
tomc1985
> A lot of developers right now seem to be so obsessed with the technical
> wizardry of it all that they can’t step back and ask themselves if any of
> this is really needed.

"Can we? _Should we?_" <--- something NOBODY in tech seems to ask anymore

Developers need to learn to _go with the flow_. Very often a platform or
toolset makes a specific technique or manner of working easier than the
others, but that option might be discarded by the implementer because it
doesn't strike their fancy. Fuck your fancy, go with the god damn flow!

~~~
megous
Which one?

------
jiaweihli
This can really cut both ways, and moderation is key.

I've seen software that's extremely overengineered. Something like ~15
microservices with data (for the same model) split across both Mongo and
Redis, spun up via Docker configs hosted on Google Kubernetes Engine. Run by 1
person for a product with low (<1k DAU) traffic.

On the other hand, I've also seen software that's extremely _underengineered_.
And that might not be a disjoint set. The same stack I mentioned above
implemented an in-house search, task/message queue, deploy system, URL
shortener, and mobile notification service - all with a more restricted and
buggier API compared to more popular commercial / open-source offerings.

It's highly important to choose your areas of competence. What systems need to
be better in X vs. off-the-shelf alternatives? And can you afford to dedicate
time and money (through development and maintenance) to support it?

===

I'm going to state a likely unpopular opinion now, and advocate that most
typical SAAS / CRUD projects start with Rails, and build from there. I say
that because there are many, many well-maintained and battle-tested libraries
that have already been vetted in that ecosystem, and the community's mantra is
to optimize for engineering productivity (GSD and communication with other
engineers) and happiness. There are libraries to set up an ingest pipeline
[1], or a pooled background task system [2], or whatever other complex systems
you need to build. But when you're just getting started, you want to set up an
API with some validations against business logic, and ActiveRecord and
Rails(-api) gets you pretty far very quickly for just that.

Another related unpopular opinion I hold is to avoid Node, unless you have a
specific reason to use it. Yes, it's new and flashy, but the cost of that is
that the ecosystem hasn't matured and tooling you'd expect doesn't yet exist.
That's not to say that every new technology should be avoided. I think of it
as investing in a sense - you need to place your bets correctly based on
research like community, maturity, functionality, and possibly most
importantly: how it compares to existing tools (and their trajectories).

[1] [http://shrinerb.com/rdoc/files/README_md.html#label-
Direct+u...](http://shrinerb.com/rdoc/files/README_md.html#label-
Direct+uploads)

[2] [https://github.com/mperham/sidekiq](https://github.com/mperham/sidekiq)

~~~
meesterdude
> I'm going to state a likely unpopular opinion now, and advocate that most
> typical SAAS / CRUD projects start with Rails, and build from there.

<3

Rails can get you very, very far. While its sexy appeal has faded, its real
beauty has shone. I can hop into most Rails codebases and be productive in a
few hours. I can come back to my own projects and _still_ understand what's
going on. I think that's huge. Sure, there are newer things out there, but
nothing beats Rails for productivity, maintainability, and overall developer
happiness - tenets I hold in high regard.

Exploring new tech is great - doing it in production is asking for trouble of
all sorts. If you want to build something that lasts for a long time, build it
as boring as possible.

------
jeffdavis
This article is about the complexity of software _development_.

The complexity of software itself (and hardware, as we have seen recently) is
a much bigger problem.

More and more, machines are playing a larger role in our lives and decisions,
but we don't come close to understanding them. In about a decade, things will
just "happen" with no clear cause (the actual cause being some emergent
behavior of our complex systems), and we will revert to pre-Enlightenment
mysticism and superstition.

------
ynniv
Step 1: Solve the problem with the most mature technology that is commonly
used for that problem

Step 2: Fight the urge to use something newer

~~~
root_axis
Better to just evaluate the tools in the context of the problem; that way
you're always open to the best tool for the job regardless of how old it is.

~~~
ynniv
Nope. New tools have a lower expected lifespan (due to survivor bias) and less
proven value. I love new tech, but a LAMP website will last forever.

~~~
root_axis
> _New tools have a lower expected lifespan_

That doesn't have anything to do with whether or not that tool is the best one
for the job. Further, this logic is clearly fallacious because I could just as
easily say PHP is too new and you should use C and CGI. Clearly, there are
qualities that make one tool better than another for a certain type of
problem, so making an evaluation of the available tools per problem is the
best way to go.

> _I love new tech, but a LAMP website will last forever._

But LAMP is not always the best solution to the problem. For example, if one
of your application requirements demanded streaming live data to the browser
via websockets, PHP would be a bad choice compared to something like Go or
node. Another example might be a webservice that takes some input and has to
do computation-heavy work like image processing or rendering; it'd be a big
mistake to implement something like that in PHP rather than, say, a JVM-based
language.

~~~
meesterdude
> if one of your application requirements demanded streaming live data to the
> browser via websockets, PHP would be a bad choice compared to something like
> Go or node. Another example might be a webservice that takes some input and
> has to do computation-heavy work like image processing or rendering, it'd be
> a big mistake to implement something like that in PHP rather than than, say,
> a JVM based language.

A nontrivial problem with this line of thinking is that it expects you'll be
proficient/efficient in all of these languages and toolings, when in reality
you may just be a really good PHP dev.

It can be beneficial, for sure, to have an idea of where something would do
well; doing calculations in SQL vs in server code, for example. But jumping
right to it can be foolish if there is a cheaper / "faster" way to do it
first.

Part of tool evaluation needs to evaluate your own proficiencies, and the
ability to delay optimizations. Maybe PHP is crap for websockets, but maybe
you can spin up lots of machines to handle the load and that's good enough; or
maybe (in some ways more likely) the project goes nowhere, and that
optimization that could have been, was never needed.

~~~
root_axis
> _in reality you may just be a really good PHP dev._

If your only tool is a hammer...

> _Maybe PHP is crap for websockets, but maybe you can spin up lots of
> machines to handle the load and that's good enough_

Well sure, you _could_ treat every problem like a nail by going to whatever
lengths necessary to solve the problem with a hammer, but that is a waste of
time and effort since a more appropriate tool could magnify your productivity
instead of frustrate it.

~~~
meesterdude
you can do a lot with a hammer.

It is my view that it's better to bang out what you can with the tools you've
got, and later reach for better tooling when the need or pain point truly
arises; instead of trying to optimize from the start with tooling you don't
know for perceived potential future productivity. And I consider this not to
be a waste of time and energy, but an optimization of available resources.

Often, building with what you know is more important than hacking together
with something you don't. Especially if you're trying to build a product on
it.

Of course, building something like a web app in excel would be entirely the
wrong approach; your tools have to at least be in the ballpark of
applicability.

------
weberc2
> Much of this reduction has been accomplished by making programming languages
> more expressive. Languages such as Python, Ruby, or JavaScript can take as
> little as one third as much code as C in order to implement similar
> functionality. C gave us similar advantages over writing in assembler.
> Looking forward to the future, it is unlikely that language design will give
> us the same kinds of improvements we have seen over the last few decades.

I have lots of hope for language improvements (specifically type system
improvements) to help manage software complexity. Generics marked a
significant improvement over C or Java 1.0 styles of development. First-class
sum types means I don't have to think about how to implement them as a pattern
and weigh the tradeoffs or make sure all of my conditionals cover every
scenario. Rust's memory model uses types to give strong correctness
guarantees, and is currently probably the best way to write software when a GC
simply won't do. TypeScript and MyPy employ gradual typing to help developers
get the best of both dynamic and static typing. I suspect that there are a lot
of ideas coming out of type theory that just need a good path from academia
and into real-world practice.

------
jaydenseric
The main problem is the environments we have to code in are garbage; the
simplest solutions to complexity are often standardized at the
language/environment level but implementations are years late.

There is more money in building products and coming up with workarounds than
there is in improving the environments we all have to work with. There are
also too few smart people in the world available to keep up with demand for
both activities.

Instead of 10 people keeping IE up to date with modern JS language features,
we had millions of people trying to come up with and use genius solutions
(Babel, anyone?) to transpile and polyfill their code backwards. Instead of 4
people adding support for ES Modules to Node.js years ago, we have many
thousands of people trying to work out WTF to do on their own.

The solution is not to freeze technology in its currently deficient state and
be content; that attitude on the part of environment maintainers is how we
ended up here in the first place!

JS will be "mature" and infinitely more productive once basics like modules
are implemented in all servers and clients. It will get better eventually.

~~~
gboudrias
> JS will be "mature" and infinitely more productive once basics like modules
> are implemented in all servers and clients. It will get better eventually.

JS is in a really weird and unique place as the _only_ programming language
available for browsers (assuming you want it to work on all browsers). Think
about that, when's the last time you've been constrained to a single language
outside of the web? Since everyone has to use it, changes in the language
itself have to be extremely slow, otherwise it breaks existing stuff.

Any discussion of JS is inherently phenomenological because there's nothing to
compare it to. I personally would love it if we simply started from scratch
and slowly phased JS out. I'm sure attempts have been made :/

------
rmrfrmrf
There's so much bias in these articles. In Javaland, JSP, Spring, OSGi, and
servlet containers all introduce massive amounts of complexity, yet are
ignored by "real engineers" in favor of (inexplicably) bashing Kafka and
Redux.

Stop labeling something "complex" just because you're unfamiliar or
uncomfortable with it.

------
Chiba-City
Great piece. 1) The title might be "Incidental Complexity is Killing
Software." 2) Typing speed of scripting languages is truly a HIGHLY deceptive
metric. Server resources are not free, and unit testing is wasted on type
checking which compilers do cheaper. 3) For different sad reasons, software
organizations CANNOT manage to do cradle-to-grave resource planning and
schedule assessments all the way through sales, training, help-desk and MRO.
4) Why software ENGINEERS fetishize tool and equipment purchases like
consumers I'll never understand. The application wins are cool. The tools are
tools. 5) The developer stovepipe desire to "have their part work" is another
contributing pathology. Software delivery is a team marathon styled sport.

------
darioush
The amount of boilerplate and number of different technologies you need to tie
together to have a polished, presentable project with possibility for end-user
traction is insane.

You're going to want two different mobile apps (or a cross-platform solution).
You're going to need a whole bunch of widget and formatting libraries.

You're going to want a bunch of tie ins with authentication providers.

You're going to want some form of social media integration.

You're going to want a back-end served on a cloud.

If you're pushing it, you'll still want a website. Which of course comes with
its own myriad of CSS and JS frameworks.

Compare this to the simpler days when all you needed was a mostly static HTML
page with some PHP; you could plop it on a server via FTP and voilà, you were
in business.

------
cmcginty
Agreeing with other posts that this is mostly bullshit. Software is complex
because every project is constrained by limited resources. There is no perfect
language to develop in. There is no perfect framework or platform to create
your app on. Yes, a lot of projects might go too far down the architecture
rabbit hole, but just as many bend the other way not using proven tools trying
to "keep it simple". For every project that is stuck using a 2-ton framework,
there is another that wrote a custom subset of said framework, with 5x as many
bugs and no one but themselves to support it. Usually there is no correct
choice, only compromises. The sum of these compromises is a complex software
project.

------
TYPE_FASTER
After having worked with a few really really good developers, who had both
amazing productivity and almost zero defects, simplicity is now my goal.

~~~
meesterdude
> simplicity is now my goal.

Simplicity is a great tenet; I find myself often shooting for clarity as
well.

------
krasicki
Software _IS_ "the business". Software has always been complex. To make things
easy in the application experience, the complexity is pushed down. More to the
point, it is not so much that software is any more complex, but that the
volume of software knowledge and awareness is overwhelming, even to those
coding the stuff. Platitudes about simplicity are a dime a dozen. Rather than
despair, take comfort in knowing you are not alone in feeling swept along by a
tide of perpetual software churn.

------
Annatar
“C gave us similar advantages over writing in assembler.”

This is a profound misunderstanding of both C and assembler: C gave us
portability, not reusability; writing shared object libraries with reusable
code in assembler has been the bread and butter of programmers for decades,
and it takes no longer to do in assembler than it does in C. A stereotypical
example of this is the Commodore Amiga, where most of the shared libraries are
written in assembler.

------
emodendroket
> So if we have to develop the interfaces, workflow, and logic that make up
> our applications, then it sounds like we are stuck, right? To a certain
> extent, yes, but we have a few options.

> To most developers, software equals code, but that isn’t reality. There are
> many ways to build software, and one of those ways is through using visual
> tools. Before the web, visual development and RAD tools had a much bigger
> place in the market. Tools like PowerBuilder, Visual Foxpro, Delphi, VB, and
> Access all had visual design capabilities that allowed developers to create
> interfaces without typing out any code.

[...]

> And many companies are jumping all over these platforms. Vendors like
> Salesforce (App Cloud), Outsystems, Mendix, or Kony are promising the
> ability to create applications many times faster than “traditional”
> application development. While many of their claims are probably hyperbole,
> there likely is a bit of truth to them as well. For all of the downsides of
> depending on platforms like these, they probably do result in certain types
> of applications being built faster than traditional enterprise projects
> using .NET or Java.

Huh? I challenge this guy to start up a WinForms project in Visual Studio and
tell me what's substantially different from the Access forms designer. But
this comes with its own limitations (it gets messy fast if you want to do
something "non-native," collaboration is a challenge, it encourages sloppy
mistakes like misaligned controls, poorly named controls, and form logic
interleaved with business logic) and, more importantly, doesn't actually do
the hard part. Creating buttons and boxes can be tedious if you don't have
some kind of tool to do some of the work for you, but even in an environment
where it's totally done by hand it's not where the bulk of effort in a project
goes.

------
bfung
A recurring common theme every so often.

The blog post isn't really focused though - the solution proposed is to "keep
it simple".

Click on the "Simple Thread" menu option - ah, all makes sense now.

A "Software is too complex" blog post by a Custom Software Development shop.

------
tremendulo
I wonder whether the most stable yet complex and versatile form of software
will turn out to be AGI. Could a mind be the only complex 'tool' that doesn't
require a small army of other minds to maintain it?

~~~
Joeri
I wonder if like a real mind it will suffer from psychological afflictions
that require therapy by an AGI shrink to fix.

~~~
sigi45
Perhaps the software engineere is the AI who is here to write the code and
everyone around us is there to provide the necessary everything else for them?

;D

------
luord
I know it's a different issue, but this reminded me of the developers who
mention five design patterns for every requirement listed. It's often
pointless and, more often than not, it's easier to just describe what it is
they mean when they mention them.

I've picked a stack. It's simple, straightforward enough, easy to pick up and
understand, and it fits most applications, including every one I've tried to
build on my own. I still experiment with new tools, libraries, etc., but they
often end up as just that: experiments.

------
Silhouette
_Languages such as Python, Ruby, or JavaScript can take as little as one third
as much code as C in order to implement similar functionality. C gave us
similar advantages over writing in assembler. Looking forward to the future,
it is unlikely that language design will give us the same kinds of
improvements we have seen over the last few decades._

That seems a bold assertion. I can think of many, many ways that today's
mainstream languages could become incrementally more expressive, safer, or
otherwise better. Quite a few of those ways have been implemented, just in
less mainstream languages. The language trap is that to become successful you
need not just the language but the surrounding ecosystem. It's not that there
aren't still very significant gains available over the current popular
choices.

 _Our obsession with flexibility, composability, and cleverness is causing us
a lot of pain and pushing companies away from the platforms and tools that we
love._

I don't think real flexibility and composability are the problem. In fact,
throw in longevity and you've got a decent foundation there for building good
software.

I think the problem is that a lot of what's going on in certain parts of the
software industry today, particularly almost anything connected with web or
mobile apps, is only offering an illusion of those benefits. A big, monolithic
framework isn't really more composable than a carefully selected set of
libraries, it's _less_ composable. A tool that requires hundreds of lines of
configuration to build a relatively simple application isn't really more
flexible than a few lines of shell script, it's _less_ flexible. A million
tiny dependencies because you couldn't be bothered to write a few five-line
functions yourself aren't really giving you more longevity, they're _less_
likely to make things reliable in the future.
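To make the "five-line functions" point concrete, here is a left-pad-style
helper sketched in Python (where the standard library's `str.rjust` already
does the same job, which rather underlines the point about tiny
dependencies):

```python
def left_pad(s: str, width: int, fill: str = " ") -> str:
    """Pad s on the left with fill characters up to width.

    This is the entire 'dependency' - the kind of function that
    famously shipped as a standalone package in the JS ecosystem.
    """
    if len(s) >= width:
        return s
    return fill * (width - len(s)) + s

print(left_pad("42", 5, "0"))  # 00042
print("42".rjust(5, "0"))      # identical result from the standard library
```

Writing this inline costs a few minutes once; adopting it as a dependency
costs an entry in your lockfile forever.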

I would argue that cleverness, in this context, is mostly about having good
judgement. Are the costs of starting from a big, monolithic framework or
relying on an extra tool going to outweigh or be outweighed by the costs of
any lock-in and long-term dependency? Are the benefits of composing very many
small components going to outweigh or be outweighed by the costs of keeping
track of all the relationships?

------
skadamat
Reminds me a lot of the article from the Atlantic ("The Coming Software
Apocalypse"):
[https://www.theatlantic.com/technology/archive/2017/09/savin...](https://www.theatlantic.com/technology/archive/2017/09/saving-
the-world-from-code/540393/)

------
raspasov
I agree. If not “killing” at least “severely” slowing us down. This Rich
Hickey talk deserves a link here and it’s right on point:
[https://www.infoq.com/presentations/Simple-Made-
Easy](https://www.infoq.com/presentations/Simple-Made-Easy)

------
Nomentatus
Simplicity is measurable along one dimension, I believe: by how easy the
software is to maintain. This can be measured starting early in a project, if
one cares to spend a little money testing for it from time to time (by hiring
or assigning someone outside the project), so maybe we should be doing more of
this.

------
swayvil
Srsly. I've started just copying all my shit into the relevant package instead
of all the subclassing. Because you know I'm gonna want to change that shit in
the future. And whether I change the class or the subclass shit is gonna
break. And then I gotta fix that shit. So I'm a caveman now.

~~~
aphextron
There’s a legitimate argument for choosing reasonable code duplication over
inheritance. It comes up a lot in iOS development when you just need a new
UIView with a few tweaked string values. DRY should not be an unassailable
mantra. Keeping a flat class hierarchy keeps programmers sane.
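A toy sketch of the trade-off, in Python rather than iOS (all names are
invented for the example): two flat classes with a little duplicated rendering
code, instead of a base class plus subclasses that exist only to tweak a
string.

```python
# Flat and slightly duplicated: each class is self-contained, so
# changing one banner can't break the other.
class WelcomeBanner:
    title = "Welcome back!"

    def render(self) -> str:
        return f"[{self.title}]"

class GoodbyeBanner:
    title = "See you soon!"

    def render(self) -> str:
        return f"[{self.title}]"

# The DRY alternative would hoist render() into a shared Banner base
# class; one tweak there silently changes every subclass at once,
# which is exactly the breakage the parent comment describes.
print(WelcomeBanner().render())  # [Welcome back!]
```

The duplication here is two lines per class; the coupling it avoids is
open-ended.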

~~~
binarycrusader
Or as Rob Pike puts it, "A little copying is better than a little
dependency."

[https://www.youtube.com/watch?v=PAAkCSZUG1c&t=9m28s](https://www.youtube.com/watch?v=PAAkCSZUG1c&t=9m28s)

------
meuk
Related: [https://www.joelonsoftware.com/2002/11/11/the-law-of-
leaky-a...](https://www.joelonsoftware.com/2002/11/11/the-law-of-leaky-
abstractions/)

------
dawhizkid
How often is it the case that incidental complexity is actually accidental vs
a conscious/subconscious act to increase one's own job security?

~~~
meesterdude
In my experience, consultants put more emphasis on job security than salaried
employees when making decisions like this, while actually being more likely to
do resume-driven development. When things go south, they jump ship.

~~~
dawhizkid
Isn't "resume-driven development" exactly building overly complex systems so
you can say you worked on complex systems in your next interview?

------
gfiorav
Way too late to the party but the issue here is that you're valuing developer
convenience over product maintenance.

Maintainability >>> Convenience

------
zpatel
Agile dev has also contributed to complex software.

~~~
ern
In what way has agile development contributed to complexity? In my experience
it's been quite the opposite: Agile has led to better focused features that
solve real problems faced by the customers, while minimizing waste, finally
making a dent in Essential complexity.

Unfortunately, there has been a parallel explosion in Accidental complexity
(over-complicated architectures, cargo-cults, crazy-complex auth
frameworks...).

I'm beginning to wonder if developers (especially consultants, as others have
mentioned here) are trading one form of complexity for another in order to
keep their billable hours up, or improve their job security. Cloud vendors seem
to be contributing to the problem as well.

~~~
zpatel
OK, don't get me wrong: agile dev has its benefits, but usually it's not used
properly, especially where I have worked in the past. Project managers just care
about completing features in days, when in fact the analysis and design of such
features would itself require days or even weeks.

------
shamas
There are more technologies between C and a platform where you drag-and-drop
things... Who the hell develops with a drag-and-drop interface?

------
halfnibble
I agree with 99.87% of what the author wrote. But unless all of your
developers are using the exact same OS and libraries locally that are
installed on your production server, then you really ought to be using
"containers." Because who wants to debug library differences on production?

By the way, if anyone is interested in trying out an incredibly productive web
framework, I recommend checking out Django. Need an API? The Django Rest
Framework.

------
meri_dian
Software Complexity is _Employing_ Us

------
wellpast
These posts are a dime a dozen. Many developers get to that place where they
realize the wide gap between what they suspect, or intuitively know, can be
done easily/quickly in their software system but for some... reason... just...
_can't_ be done quickly, easily. What they know should only take a day (it
only took a day when I started building the thing!) now takes two, three, a
week...

For the reflective types we start to see, correctly, that it's all rooted in
"complexity". I have to _deal_, _contend_, etc. with all these _needless_
_things_ that I didn't have to deal with originally. This "algebra" of symbols
was once easy to do math in, but now I can barely add two and two.

Some laugh uneasily at this point and say, "whelp, that's software for you"
and go home and smile at themselves in the mirror. Others mull on the idea but
don't apply enough rigor of thought to get past the fuzzy thinking where many
a soul that's pondered this problem has found themselves stuck.

And so we get posts like this that are more lament and helplessness than they
are useful. You can toss a rock and hit these posts; they're all over the place.

The only way out of complexity hell is rigor of thought at every level--at the
level of our jobs ultimately, but in this case at the level of analyzing what
_is_ complexity. That is, by giving it an objective description and showing
what is and isn't complexity.

Many of these dime-a-dozen posts proceed to talk about complexity in relation
to abstractions or lack thereof, etc. But they always miss the metrics of
complexity.

Complexity is the lack of Simplicity, that's it. And Simplicity is
_objective_. I'm not trying to be exhaustive here, but you should get the gist:

1\. Clear interfaces

2\. Clear and minimal dependencies

3\. Precise semantics/contracts

4\. Minimal coupling

In any case, these are objective measures. And interestingly have very little
to do with the lines of code that implement them.

Give me a system and tell me what it does. Now, I don't think there is a proof
(yet) that a given system is optimal with respect to these simplicity
dimensions, but I can tell you that if you give me another, differently
organized/architected system that does the same thing, one will be
_objectively better_ on each of these dimensions, and it will be very easy to
point out the relative winner.

So once a team realizes this, you can optimize the simplicity of your system
by having your members give their design pitch and the objective winner will
emerge. (Yes, getting your business axioms pinned down, priorities, etc, is a
hard problem here, but the fact that I've seen very few people track this
whole realization means we haven't gotten _there_ yet. We're still trying to
reach Milestone 1.)

Only when Milestone 1 is met, have you gotten your own personal team's
optimally simplest solution. But you're still as strong as your team's weakest
link (ie, your team's weakest designer/simplicity-optimizer.)

Now here's the thing. To get _good_ at building systems that optimize for
simplicity you have to get _good_ at it, and to do that you have to _practice_
it for a long f-ing time.

I've been in industry for a long time and have worked with very smart people,
top brass from the big FANG blah blah blah, and what many of these guys get
good at is, unfortunately, not simplicity optimization. Not because they're
not smart (in fact Alan Kay's "IQ is a lead weight" is spot on in these
cases), but because that's not what they have been optimizing for in their
careers. They simply have not been focused on simplicity optimization.

To be fair, you actually hurt yourself by trying to optimize for this skill
set, because it takes so long to master. While you take the time to master it,
you are not spewing out code and getting raises, while everyone else scratches
their heads a year later wondering why everything is so complex.

But this question of complexity is all very simple, if not easy: Simplicity is
objective. It takes a long time to master it as a skill set.

Why is every single post/discussion about complexity trying to think about or
solve it some other way? I suspect it's because no one wants to realize how
short they are on the skill set--which, sadly, is _most_ of our industry from
my empirical point of view: "It must be something _out there_ that is the
problem, because _I_ am smart, _I_ have been doing this for a while; no, it
can't be because I am _short_ a skill..."

What I can say for sure though is that the skill set can be acquired with
practice. But where is the training simulation, where does one go to practice?
In industry, you really have to be quite vigilant and aggressive and
calculated about it...

~~~
wellpast
Correction: You're as weak as your team's _strongest_ link.

------
dondenoncourt
Hear! Hear! Personally, I'm getting tired of refactoring NoSQL back to
standard SQL and merging multiple "microservices" apps into one manageable
app. And of figuring out the JS single-page framework du jour when standard
Ajax-based pages would have cost far less to create and maintain yet provided
the same functionality.

~~~
MrBuddyCasino
I think over-engineering is just plain old incompetence in a different
costume. Good guys successfully manage complexity, bad ones don't. The bad
ones just have more options now.

~~~
cookiecaper
The question is how do we successfully communicate this to everyone else.
People see "We need to move this to super-cool new technology like THE
GOOGLES!!!" as a positive, affirmative, and cutting-edge thing that will make
them more like the coolest, smartest company there ever was, whereas they see
"Why should we change that?" as backward and frightened.

We need some way to protect the industry from marketing-oriented impulses that
mischaracterize flat, reliable, comprehensible architectures as primitive.

