
Ask HN: Is Accidental Complexity Growing Faster Than Essential Complexity? - ern
I've recently been involved with a few line-of-business projects that have been fairly successful. They all used well-honed agile principles, they met the requirements their Product Owners had of them, and by all measures they were successful.

However, at the same time, I'm seeing a great deal of technical complexity being introduced to systems: microservices and SPAs (with all the attendant web stack complexity) being the most common. A lot of it seems to be driven by a need for developers to keep their CVs shiny, or because smart developers want cool things to play with, or because of "Google envy". There seems to be very little engineering justification for these approaches in most cases, and technically simpler approaches would work better.

I've been thinking about Fred Brooks's Mythical Man-Month, in which he described two forms of complexity: accidental complexity, which is technical in nature, and essential complexity, which relates to the problem domain and is more intractable.

Although essential complexity remains stubborn in business systems, I believe we are slowly getting to the point where it is being tamed through more responsive (agile) dev practices, leading to happier customers who get software that does what they expect. On the other hand, I feel that accidental complexity is exploding, and for the first time in my career it is growing at a faster rate than essential complexity, masking a lot of the gains we are getting from improved processes. Is this a fair observation, or am I just jaded?
======
alistproducer2
I work for a large company in the automotive space. I can say that most of the
accidental complexity I see comes from shoehorning solutions into
technologies that we happen to have licenses for, as opposed to what is the
simplest and most robust solution.

The kind of over-engineering done for CV polishing almost always comes from our
college hires. It's not really their fault, as I understand the attraction to
the shiniest object when you're a newb. When their monstrosities are allowed to
make it into production, however, it's because you have a dev manager that doesn't
really understand software engineering.

For my part, I used to scream from the mountaintops that we were over-
engineering everything, but now I just let them fuck shit up cause I'm tired
of trying to save people from themselves.

------
_Codemonkeyism
Many companies have too many engineers (and some not enough). This is mainly
due to #engineers = f1(revenue || vc money) while it should be #engineers =
f2(problem). Mostly because our industry has no clue about f2 and so falls
back to f1.

But this also works with business thinking, e.g. CEO thinking

techbudget = 0.1 * revenue

With the law of the used budget (people always use up their whole budget, for
fear of not getting the budget next year when they might need it), CTOs use the
whole budget

#engineers = techbudget/salary

so f1 is often

#engineers = f1(revenue) = (revenue * 0.X - hosting - laptops - licenses)/
salary

and with salary >> hosting, salary >> laptops, licenses -> 0 due to open
source usage,

f1(revenue) = revenue * 0.X / engineer salary

(the 0.X is determined by VC experience/push, negotiation between the CEO and CTO,
and how much of a tech company the CEO considers his company to be, most probably
based on previous experience with a wholly different tech business model).

with no need to understand the tech challenge at hand.

For startups and high-margin tech businesses, there is often a large tech
budget, so many engineers are hired (there are also other forces like capability
building, the war for talent, etc. - some startup CEOs tell me their VC said they
need to hire 100 engineers by the end of the year - this without knowing
anything about the tech problem at hand).

Coming to your point:

If there are too many engineers for the essential complexity, they create
accidental complexity.
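The budget-driven f1 above can be sketched in a few lines (Python; every figure here - tech fraction, salary, hosting, laptop costs - is invented for illustration):

```python
# A sketch of the parent's budget-driven headcount function f1.
# All numbers are made up; the point is that the problem never appears
# as an input.

def f1_headcount(revenue, tech_fraction=0.1, salary=100_000,
                 hosting=50_000, laptops=20_000, licenses=0):
    """#engineers derived purely from budget, with no look at the problem."""
    techbudget = tech_fraction * revenue
    return int((techbudget - hosting - laptops - licenses) / salary)

# With salary >> hosting, salary >> laptops, and licenses -> 0, this
# approximates revenue * 0.X / salary:
print(f1_headcount(revenue=50_000_000))  # -> 49
```

An f2, by contrast, would need the tech problem itself as an input - which, as noted above, the industry has no clue how to write down.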

~~~
amadvance
A near optimal one is f2(x) = 5

:)

~~~
_Codemonkeyism
Most efficient is f2(x)=2 or f1(x)=1 I think, not sure yet :-)

~~~
passiveincomelg
I'd say 2 because code reviews are one of the rare "methodologies" that
improve quality in my experience.

------
Denzel
You'll enjoy these two videos from Alan Kay and Uncle Bob Martin,
respectively:

"Is it really 'Complex'? Or did we just make it 'Complicated'?" -
[https://www.youtube.com/watch?v=ubaX1Smg6pY](https://www.youtube.com/watch?v=ubaX1Smg6pY)

"The Future of Programming" -
[https://www.youtube.com/watch?v=ecIWPzGEbFc&list=PLcr1-V2ySv...](https://www.youtube.com/watch?v=ecIWPzGEbFc&list=PLcr1-V2ySv4Tf_xSLj2MbQZr78fUVQAua)

------
mikekchar
I honestly don't think it's any worse than it has been before. Back in my
youth it was "components". If you can encapsulate functionality, then you
should be able to compose it like tinker toys. So let's build these "reusable"
components which act like black boxes. We'll nail up the API to the wall, and
use some kind of versioning scheme to ensure that it's always backwards
compatible. We can even build frameworks that standardise the communication
between these black boxes and provide language agnostic object
representations. Hell, we can even build database interfaces around that
concept and pass business domain objects back and forth directly. And we can
hide the complexity of the object-relational mapping in another framework.

Yeah, that will simplify _everything_ :-). It only looks shiny because the
people doing it weren't around in the 90's to get sick of it the first time
around. Ah... who am I kidding? 4GLs have been around since the 70's... :-P

~~~
TimJYoung
Yes, but "components" as a concept _actually works_. Why do you think we keep
seeing attempt after attempt to bring the concept into the browser UI?

Software development with components as building blocks maps well to our
mental model of how you build things and is extremely productive. Despite the
amount of grief that drag-and-drop GUI builders receive for being too simple,
not being explicit MVC, etc., you won't find a better way to allow new
developers to learn how to use a tool and be immediately productive with
smaller projects. The Delphi and Visual Studio IDEs had it 100% right in this
regard. That's why you see so many LOB applications from the late 90's and early
2000's still kicking around that were built with Delphi's VCL or Visual Studio's
WinForms. They got out of the way and allowed people with in-depth business
knowledge to create _actual solutions_. In the long run, isn't that the most
important measure?

I will agree with you on ORMs, though. There's never been a clearer case of
"putting a square peg in a round hole". Sometimes, the best you can do with
disparate systems is to just talk the language of the system that you're
interfacing with and stop trying to make it be something that it isn't. Put
another way: if your "avoidance" code ends up being 50% more complicated than
your "direct" code, and prevents the developer from understanding _how_ the
whole interaction occurs, then you've committed a grievous error in
architecture. There are ways to make such interfacing easier without making it
completely different. For example, with SQL one of the most difficult
things to deal with is result set binding/retrieval, and this was solved
rather early on with classes/components that eliminated the complexity of
doing so with APIs like ODBC. We should have stopped there.
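As a rough illustration of that kind of solved binding (a sketch using Python's stdlib sqlite3 module; the table and columns are invented):

```python
# The cursor abstraction hides result-set binding/retrieval: no manual
# column binding as with raw ODBC, yet you are still "talking SQL"
# directly rather than going through an ORM.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE invoice (id INTEGER, total REAL)")
db.execute("INSERT INTO invoice VALUES (1, 99.5)")

# The cursor yields typed rows directly; iteration is the retrieval API.
for invoice_id, total in db.execute("SELECT id, total FROM invoice"):
    print(invoice_id, total)  # -> 1 99.5
```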

~~~
mikekchar
Yeah, I'm not really arguing that it doesn't work. There are definitely use
cases for it. The main thing is that the simplicity is a facade. Generally
it's achieved by limiting scope. But as the scope creeps upwards, the
complexity of the machinery starts to overtake the complexity of the problem.
The biggest issue is that there is a considerable amount of technical lock-in.
The solution is so specific that it becomes impossible to change your mind
later in the project. Techniques like refactoring are impossible because
you've already baked in the subsystem decomposition early, early in the
process. The whole thing becomes a brittle mess. But for certain problems (or
putting up a quick MVP that you might throw away later), it can be very
useful. And as you said, it can be an excellent way to build projects of
limited scope with people who are heavy on domain knowledge, but light on
programming expertise.

~~~
TimJYoung
There isn't anything intrinsic to the Visual Studio/WinForms or Delphi/VCL
environments that prevents you from refactoring an application into a more
structured form as it grows. They simply don't shoehorn you into an
enterprise development model for applications that don't require it (or don't
require it initially). Thousands upon thousands of large business applications
have been written in both, so obviously they were perfectly manageable for
that purpose (2-tier, N-tier, web back-end). The point is to create reliable
software that satisfies your business needs. One company that I'm familiar
with built a beautiful application server/front-end UI in Delphi that uses SQL
Server as the database server and serves 160+ concurrent users every day. The
front-end UI is a ~16MB .exe, and the application server a ~8MB .exe,
both with no dependencies other than requiring Windows Vista or higher as the
target OS. It is a very impressive application (I wish I had written it), and
has served the ~$45mil company very well.

------
cjhanks
It sounds to me like you are in a software company with a lot of inexperience.

I have seen people spin up massive map-reduce clusters to perform something
AWK could do in half the time. And I have seen people spend enormous amounts of
time training neural nets for something a simple SVM could do. In both cases
(and most similar cases I have seen) a lot of needless time and energy was
expended.

But why? This is my hypothesis anyways...

There seem to be two primary pools of people who feed into software
engineering: those from engineering disciplines and those from research
disciplines. Engineering schools can be fairly cutthroat, limiting the number
of candidates who can move on at various stages, or designing courses to
reduce the number of enrolled students. Those from research disciplines are
required to create novel ideas worth publishing just to graduate from their
programs. And those who actually _like_ academia believe (and rightfully so)
that creating new ideas is the way to renown and success.

That means a lot of the craftsmen who would prefer making tiny refinements on
well understood paradigms are simply weeded out from the discipline before
they can even enter it.

I have known very smart people who quit engineering in college because they
could simply not stand being in a room of people where everybody wanted to be
the "smartest in the room".

Of course... the incessant CV polishing in Silicon Valley probably doesn't
help either.

------
d--b
Well, one theory is that Agile is responsible for both. While its short
development cycles help deal with essential complexity (you understand the
customer's problem better because you have more interaction with them), it
also implies incremental coding, which often increases accidental complexity.

As an example:

After a short development cycle, you created a database schema where each
Client class has a single address, and because it was done quickly, the
address fields are in the same Client table.

In a second development cycle, your customer tells you that their clients may
have one or more addresses, one of which is the primary. The quickest path to
delivery is to keep the address in the Client table as the primary address
and add a secondary address table.
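That incremental path might look like this in practice (a sketch in Python with SQLite; table and column names are invented for illustration):

```python
# Iteration-by-iteration schema growth, as described above.
import sqlite3

db = sqlite3.connect(":memory:")

# Iteration 1: quickest schema -- the single address lives in Client.
db.execute("""CREATE TABLE client (
    id INTEGER PRIMARY KEY, name TEXT,
    street TEXT, city TEXT)""")

# Iteration 2: "one or more addresses" arrives. The quick fix keeps the
# primary address where it is and bolts on a secondary-address table.
db.execute("""CREATE TABLE secondary_address (
    client_id INTEGER REFERENCES client(id),
    street TEXT, city TEXT)""")

db.execute("INSERT INTO client VALUES (1, 'Acme', '1 Main St', 'Springfield')")
db.execute("INSERT INTO secondary_address VALUES (1, '2 Side St', 'Springfield')")

# The accidental complexity: "all addresses of a client" is now a UNION
# across two differently shaped tables.
rows = db.execute("""
    SELECT street, city FROM client WHERE id = ?
    UNION ALL
    SELECT street, city FROM secondary_address WHERE client_id = ?""",
    (1, 1)).fetchall()
print(len(rows))  # -> 2
```

A refactor to a single address table would remove the UNION, but that is exactly the step the quickest-path-to-delivery pressure skips.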

While short development cycles help in not screwing up large requirements,
they also impair our ability to see the larger picture, and cause us to make
decisions that we need to undo afterwards.

~~~
beaconstudios
that sounds more like a symptom of forgetting to include refactoring in your
agile loop, than an inherent issue with small iterations.

~~~
d--b
Yes, in this tiny example, it could be avoided by adding a simple refactor in
your second iteration.

In general though, the customer may ask for some iteration that they don't
realize implies a complete rewrite of the code. Like:

"now I want the same thing, but I want it available 24/7 with no downtime".

Or "Now we have a great text editor, we want to add live collaboration".

"You know what would be great: that we could see what changed between 2
versions of the spreadsheet".

~~~
beaconstudios
true, and it's hard to work around unreasonable customer requirements. In the
ideal circumstance you would have that conversation and explain that having
100% uptime guaranteed is basically impossible and aiming for e.g. 6-sigma
uptime requires high ongoing costs - but I'm sure we've all experienced that
"I can have it both ways" attitude.

------
sureshn
I have noticed several developers deliberately introduce accidental complexity
for the sake of job security. Just to make it really, really hard for a new
person to do anything with it, they create a hostage situation where you
have to fully gut their work and redo it. This is more evident in
situations where the teams are geographically distributed and the main team
(typically in the USA) feels insecure that their jobs are going to India or
some other country where software maintenance is cheaper. The problem of
accidental complexity is very serious for large enterprise software companies
with aging products, typically on a Microsoft stack, where engineers who have
spent a decade coding and maintaining them tend to be happy with a crooked
way of doing things (hacks and workarounds - undocumented, of course) that
gives them an unfair advantage. To conclude: accidental complexity is a
political tool, introduced by developers deliberately, for reasons best known
only to them.

------
molticrystal
It is easy to look at this from a functional and practical standpoint: take
any program in your stack, and look at what it does vs what you need it to do.

There is a parallel to this where people writing C++ programs in Visual Studio
realize they can often do so without needing the Microsoft Visual Studio
runtime and its routines: their program drops from 68k to 2k, plus a dash of
code to bring back what is missing, if it is even needed.
[https://hero.handmade.network/forums/code-
discussion/t/94](https://hero.handmade.network/forums/code-discussion/t/94)

I love looking at things and thinking about this. People in this field often
find themselves over-implementing (or using software that is a source of over-
implementation), setting up a tremendous amount of foundation to get a little
thing done, getting carried away, and finding out it got too involved and
over-engineered when there is a tight and simple solution that matches exactly
what is needed.

Of course there are situations where microservices are still needed:
[https://semaphoreci.com/blog/2017/03/21/cracking-monolith-
fo...](https://semaphoreci.com/blog/2017/03/21/cracking-monolith-forces-that-
call-for-microservices.html) Everything should be thought out, and joining the
latest craze isn't necessary. If you really need it, you will find yourself
looking for or implementing the solution, regardless of what it is currently
called; microservices have been around in one form or another since before the
hype, and were used as needed when needed, instead of being used to wax a resume.

Your accidental vs essential comparison has real-world monetary consequences.
There are many examples where, although it might take 2-3 months to stop,
think, and analyze what is going on instead of adding to the pile, you can
save $100k-300k a month even for smaller-scale deployments.

------
good_vibes
I agree 100%. I'm beginning to feel a lot of 'developers' live in a circle
jerk where they share how awesome they are and other developers tell
them how awesome they are - unless it makes the other 'geniuses' (which is
what many above-average-intelligence people in my generation are told they
are from a young age) feel too inferior.

Hubris is a real observable phenomenon in the history of economic bubbles,
paradigm shifts in scientific/technologic progress, and in recent events in
media, politics, and business.

We need to start questioning business-as-usual as much as possible. Everything
can be simpler, less egocentric, and more beneficial to all parties.

------
nl
I've been working professionally in software for 20 years. Anecdotally, I'd
say that line-of-business software has never been more reliable, more
extensible or had better user interfaces than it does now.

------
Nomentatus
The _consequences_ of accidental complexity are now much greater because
everything's more intertwined, and the systems as a whole are larger. This puts
us in "Genghis John" (John Boyd) territory, close to the point where we don't
have enough attention to fix all the problems that arise, including those
arising from the fixes themselves. Probably you youngsters are actually better
about not introducing complexity needlessly, since the consequences of doing
so are ever more obvious.

------
donatj
I'm just getting to the point in my career where I've seen this happen a
couple times. I believe it's cyclical. OO came and complicated everything and
that died down. WYSIWYGs came and complicated everything, those have died
down. Huge frameworks came and complicated everything, and largely died down.
SPAs and Microservices came and will soon die down. I think every few years of
new blood introduces a critical mass of complexity.

~~~
dgellow
Edit: I just checked the meaning of 'died down' and it means 'to reduce in
strength' not 'to become literally dead'. So, my comment isn't really a
correct response to the parent but I keep it here.

Do you really think all those concepts died? IMO, they aren't dead, they have
been assimilated. We learned from OO and it's now a tool you can use almost
everywhere when you need it. But we also learned to not try to model
everything in an OO paradigm.

You can say the same thing for your other examples. You have a phase where the
concept/tech is seen as the silver bullet that will save us all. That
brings a lot of adoption. People make mistakes. We (hopefully) learn from them.
The hype fades away, and the concepts are consolidated and assimilated into
stable and mainstream tools.

But that takes time. And I'm not sure how things could work differently. Don't
we need the hyper-hype phase to experiment and learn about the concept/tech?

------
holri
"... perfection is finally attained not when there is no longer anything to
add, but when there is no longer anything to take away ...", Antoine de Saint
Exupéry

This is true for requirements, but also for the technology used.

------
arikrak
There have been various improvements in programming that reduce accidental
complexity but they don't apply in every area. For example, Ruby on Rails
makes it easier to create an MVP, but scaling an application to millions or
billions of users will still require working on many layers of "accidental"
complexity (since scaling isn't an inherent part of solving the problem).
Running applications in the browser also adds "accidental" complexity since
certain details of the product now need to be defined in both the browser and
the server. Testing can also get more complicated as there are more layers and
services interacting with each other. It seems as hardware and software
improve, the demands on them increase as well. So we can't just relax and
enjoy a world of Essential complexity.

Meanwhile as machine learning gets more advanced, it's able to tackle certain
"essential complexity" problems that were supposed to always require hand-
coding by a human programmer. So there are trends working in many different
directions.

------
jnbiche
I really can't address the issue of overuse of microservices in a line-of-
business app type project, but I want to push back a little about "SPAs"
being "accidental complexity" here. I imagine by "SPA" you're referring to the
use of a JS framework like AngularJS, Backbone, EmberJS, React (and some type
of models), etc.

While I agree that sometimes there is no need for these in a simple CRUD app,
I'd also caution that depending on your basic needs (frontend validation,
autosave, keyboard shortcuts, etc) and frequently, on more complex client
requests (like for fast JS-driven form navigation that requires some sort of
frontend routing, or to save to some industry-specific file format, and such),
you can easily end up with a big spaghetti ball of jQuery if you're not
careful.

I've seen these balls of tangled JS all too often in line of business apps,
and they're a nightmare to maintain or modify. The point of using a framework
is to provide some sort of organization for frontend JS, and to provide common
features.

If your team has the maturity to organize their frontend JS into neat MVC/MVP
apps without the use of a JS framework, great. But in my experience, few
"full-stack" developers are comfortable enough and/or care enough about
frontend JS to do this. Thus the use of frameworks.

If you need a very minimal framework, BackboneJS is both tiny and simple to
use. Even Backbone can save simple apps like this a lot of grief.

The alternative to something like this is to stick to a full-stack framework
like Rails or Django and stay strictly in that environment/best practices. But
even there you're bringing a lot of complexity into the picture, or sometimes
you still end up needing some sort of organizing principle or framework for
the frontend JS.

------
Alex3917
> A lot of it seems to be driven by a need for developers to keep their CVs
> shiny

This is immaturity, not a real need. The people who make the most money in
tech aren't the ones with the most buzzwords on their resumes.

~~~
nstart
I wonder about that. Right now, I can pick up pretty much whatever tech is
thrown at me. But any job demands that I have experience in this field or that
field which is pretty frustrating tbh. "You don't have experience in this
entire react stack? Bye!" oki... I'll look for backend dev? Oh right. Must be
fullstack and must know all the JS frameworks.

Maybe devops/sysadmin/whatchamacallit? "Must know this particular NoSQL DB,
and be able to administer Kubernetes clusters"

Fine I can kind of do that, but I have my doubts because I've never used said
DB. And it goes on.

It's a deeper argument than what I just made, but my point at the other
extreme was just to highlight that the job market looks scary when you browse
job sites and realise you've only worked with pure JS and jQuery and no one
wants that anymore. And then you see that they want people who've worked on
real-world apps using all this new tech.

There is either a real need, or at least the illusion of a real need, and I
wouldn't discount that entirely. The answer probably lies somewhere in the
middle.

~~~
DenisM
People who post job ads where you can browse them usually don't understand the
job they are hiring for, hence the checklists and keyword screens. So you need
to find a way to get in touch with those on the inside who do know, and can
make decisions. Hacker News ads are one way; maybe look at Stack Overflow,
certainly look through your network for internal referrals, perhaps some
meetups.

Also I think one should not use the keyword stuffed resume when going through
backchannels. Personally I discount them, and focus on resumes with a record
of getting things done in a related field.

------
api
It's always been a problem. Premature optimization may be the root of all
evil, yeah, yeah, but that quote is sort of a waste of breath as premature
optimization is not that big of a problem. Over engineering is the great
disease of the software profession.

I suppose premature optimization is one source for it, but more common is
premature generalization and excessive levels of abstraction. Other major
sources include backward compatibility needs, the creation of virtual layers
to escape calcified bad design, and of course the need to show off and look
smart.

~~~
rhizome
I'd say overengineering and premature optimization are nearly synonyms.

~~~
jjoonathan
They're both bad, of course, but beyond that I'd say they're nearly antonyms.
Premature optimization trades structure for performance, overengineering
trades performance for structure.

You could argue for equivalence under a broader definition of optimization and
engineering, but in the context of software engineering the more specific
definitions used above seem to be fairly well established.

~~~
rhizome
_Premature optimization trades structure for performance, overengineering
trades performance for structure._

This makes them the converse of each other, but not antonyms. At any rate, I'm
not sure I agree those terms are exclusive to this relation. There can be fast
overengineering and slow PO.

------
_Codemonkeyism
Yes. This is one of the most prevalent problems in engineering I see when
consulting for startups.

------
rtpg
Are microservices more complex or less complex?

there's more operational complexity, because now you're running two things
instead of one. But there's also less complexity in other areas, because parts
of your code are firewalled off from each other, so you have a smaller problem
space when making changes/fixing bugs.

I think that there's a tendency to view things that are harder to set up
initially as more complex, when the long-term complexity might be lower. It
mainly depends on what you think is important.

I think a lot of the difference in viewing complexity is due to difference in
foundational knowledge. People doing Haskell see monads for DSLs as nice and
easy abstraction, but people outside see it as a crutch, needed because of a
lack of native effects in the language.

Some people see Javascript toolchains as needlessly complex, others see people
using Javascript instead of <some compile to JS language with better
abstractions> as needlessly complex.

Differentiating between what kind of complexity is brought is important.
Because a team of people bad at operations shouldn't be rolling out a new
service every week. But a team good at operations but bad at separation of
concerns might gain a lot from being "forced" to chunk things out.

~~~
deathanatos
> _Are microservices more complex or less complex?_

I've long argued that microservices are a great way to reduce complexity;
however, I had one project that I was involved in where microservices
definitely _added_ complexity. You get it right that,

> _less complexity in other areas, because parts of your code are firewalled
> off from each other, so you have a smaller problem space when making changes
> /fixing bugs._

This is exactly right, and is exactly how microservices _should_ make things
less complex: by limiting the surface area that needs to be inspected when
issues arise.

Where the microservice I was thinking of fell short (IMO) is that it received
JSON structures on the wire, and essentially, those Mappings and Lists made it
all the way down to the lowest layers of the code. Deep in the heart of the
service, Mappings and Lists are constructed, bubbled all the way back out
towards the HTTP handlers (being mutated all the way), and then finally
serialized and put on the wire as JSON.

Obviously (or, at least, I hope) this is not good. At any point, all I know is
that I have a Mapping or a List; I have no real idea what the semantic
business _type_ of the thing is. (Unless it has a good variable name.) No
validation (aside from it being JSON) is done, so if a structure is not well
formed, that error isn't detected until such a time as it matters to the code
(b/c something expects a String, and it isn't, for example), and everything
grinds to a halt.

Couple a few of these together, and a badly formed JSON substructure can wind
its way through _multiple_ services. This can make debugging hard as you can't
attach to a single process and follow execution from inception to failure, and
the problem now spans potentially multiple machines.

So, this makes me wonder if this is where all the vitriol for microservices
comes from (I've never really heard a good argument against them) or if it's
just a strawman. The above, of course, is a failure not of microservices, but
mostly of input validation and an unwillingness to strongly type your
parameters. (Mappings of Lists of Mappings of etc. count as "stringly typed"
in my book.)
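A minimal sketch of the alternative being argued for - validate and type the JSON at the service boundary so raw Mappings/Lists never travel deep into the code (Python; `Order` and its fields are invented for illustration):

```python
import json
from dataclasses import dataclass

@dataclass(frozen=True)
class Order:
    order_id: str
    quantity: int

def parse_order(raw: str) -> Order:
    """Validate at the edge: fail loudly here, not three services later."""
    data = json.loads(raw)
    if not isinstance(data.get("order_id"), str):
        raise ValueError("order_id must be a string")
    if not isinstance(data.get("quantity"), int):
        raise ValueError("quantity must be an integer")
    # From here on, code handles a typed Order, not an anonymous Mapping.
    return Order(order_id=data["order_id"], quantity=data["quantity"])

order = parse_order('{"order_id": "A-17", "quantity": 3}')
```

A badly formed substructure now dies at the HTTP handler of the first service it reaches, instead of winding its way through several services before something expects a String and grinds to a halt.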

------
tabeth
In regards to the SPAs, what exactly is the alternative?

In my mind you have two possibilities:

1\. The most ideal is to "pick the right solution for the problem." Meaning,
you will analyze the problem and do the right thing, e.g. content mainly?
server side rendering. web application? SPA.

2\. Do what you know, consistently. In many cases, this means using an SPA
basically all of the time, whether or not it's necessary. This also might mean
server side rendering all of the time, adding javascript only when absolutely
necessary.

In regards to SPAs, I feel there must be a solution. Ember tries to do this
with FastBoot, allowing you to essentially have the flexibility of an SPA with
the SEO and other advantages of server-side rendering, but I'll ignore that
for now.

Is there a paradigm/language that allows you to write _the same_ templates for
both the server side and SPA view? I guess this would mean the view is the
same, and the model and controller would be your choice.

I think it's natural to just do what you know, which is why you're probably
seeing what you are.

~~~
aurelianito
> Is there a paradigm/language that allows you to write the same templates for
> both the server side and SPA view?

That's the main motivation behind my web framework Sandro. It is just an
experiment, but I do just that!

See
[https://bitbucket.org/aurelito/sandro](https://bitbucket.org/aurelito/sandro)
and
[https://bitbucket.org/aurelito/sandro-lib](https://bitbucket.org/aurelito/sandro-lib)

It uses rhino and domino to render server side using the DOM API. I am busy
with work that pays my bills, but I would love if I were able to work full-
time in it ;).

~~~
daliwali
Server side rendering can be done with any JS framework, though often one must
compromise performance by using a more heavyweight implementation like jsdom,
or some custom implementation if the framework provides it.

Also I have done something similar using domino:
[https://simulacra.js.org/#server-side-rendering](https://simulacra.js.org/#server-side-rendering)

------
rbosinger
I believe it's a little bit of "column A" and a little bit of "column B".

On one hand it can feel maddening that a generation (or two) of young IT
professionals have seemingly gone full-circle and tried all kinds of
techniques only to end up leaning on tried-and-true technologies of the past
(think: NoSQL vs ACID compliant relational DB's, Ruby/Python vs Elixir/Erlang,
Flash/Flex - XML based design vs React/Javascript/HTML5/Canvas, Dynamic vs
Strict typing, etc).

There has been a lot of re-inventing.

However, I think this had to happen. Many great ideas of the past were amazing
but simply before their time, not quite implemented well enough, or too
heavy/complex for the hardware of the time, and so on.

Software has been moving too fast for the collective mind to keep up.

Maybe, in an ideal world, we would have methodically built up on the best of
software from the 60's without wasting time and effort experimenting with new
ideas. Honestly, that might have been awesome for productivity and the
advancement of humanity as a whole. That's not how we seem to function as
people, though. I can't help but think of "space travel" here (it sure didn't
carry on exponentially from the moon landing ~50 years ago).

In my mind I believe this always happens with technology of any kind. It may
be why we're discovering that ancient civilizations probably had much more
interesting technology than we initially imagined. They simply had certain
cycles where they "really hit a sweet spot" and other periods where it all
became contrived.

On that note, if anyone has any suggestions for books about this subject
please let me know. I know it's not an original thought and I'd bet there's
been some interesting research.

------
kevan
The software field is still very young, and there's lots of things we haven't
explored yet. A great example of this exploration is the ongoing evolution of
javascript from simple scripts to SPAs and a universal compile target. It
seems like a lot of effort is being wasted (because it is), but it's similar
to startups in the economy. Most efforts fail, but enough succeed that we end
up in a better position overall.

The field may hit a point where there's an obvious standard way to build
things (e.g. most houses in the US are wood-framed with drywall because it's
cheap and durable enough for most use cases), but this won't happen for a long
time. We've had thousands of years to figure out how to build houses and
roads, but software really only started in the second half of the last
century.

~~~
ern
I'm all for learning and experimenting, but is it ethical to do it at the
expense of paying customers (including employers) on business critical
systems, without their explicit and informed consent?

------
jnbiche
Can you please cite some examples of this accidental complexity? I'm very
curious to hear, if for nothing else than to avoid it if I agree that it's
indeed accidental.

I do agree that because of the "GitHub resume" phenomenon, lots of devs are
engaging in the type of engineering you describe.

~~~
ams6110
OP mentions line of business apps. I may be presumptuous but that to me says
(broadly) data entry forms and reporting.

Anything beyond HTML forms, a SQL database, and maybe a bit of Javascript and
CSS is likely to be more complexity than you need for something like this.
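
As a deliberately boring sketch of that stack (the `orders` table and its
fields are invented for illustration), the whole forms-over-data loop fits in
a few lines of standard-library Python: one INSERT for the data-entry form,
one SELECT for the report.

```python
# Sketch of "HTML forms + a SQL database": an invented orders table,
# one insert per form submission, plain SQL plus trivial HTML for reporting.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (customer TEXT, amount REAL)")

def handle_form_post(form):
    # `form` is the parsed key/value body of an HTML form submission.
    db.execute("INSERT INTO orders (customer, amount) VALUES (?, ?)",
               (form["customer"], float(form["amount"])))
    db.commit()

def report_rows():
    # The "reporting" half: aggregate in SQL, render as plain HTML rows.
    rows = db.execute(
        "SELECT customer, SUM(amount) FROM orders GROUP BY customer")
    return "".join(f"<tr><td>{c}</td><td>{t:.2f}</td></tr>" for c, t in rows)

handle_form_post({"customer": "Acme", "amount": "120.50"})
handle_form_post({"customer": "Acme", "amount": "9.50"})
print(report_rows())  # → <tr><td>Acme</td><td>130.00</td></tr>
```

Everything else a typical line-of-business app adds on top of this loop is a
candidate for the accidental-complexity column.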

~~~
_Codemonkeyism
Most companies I see treat their problems as rocket science when in reality
their startup is just a simple database frontend.

------
dcosson
> tamed through more responsive (agile) dev practices

Sure, for a given project, you can make it more or less complex by taking
different approaches. But there's some lower bound on essential complexity and
when the product itself requires a lot of features that interact in complex
ways, your essential complexity is going to be high. And once the complexity
gets too high, every new feature you add starts making future features even
harder & slower to add. (A few caveats: 1. You can of course argue that it
doesn't really "need" all those features, and everyone would be better off by
sacrificing a little on the user experience to make it a lot less complex,
less buggy, cheaper to build/maintain, etc. 2. By the time you run into these
issues, the program is large enough that some accidental complexity has snuck
in, and it can be hard to estimate how much of your problem is from that vs
how much is from the essential complexity.)

Anyway, if a project never gets to that size where the essential complexity
becomes unmanageable, then great. I'd agree it's probably a mistake to
introduce extra complexity with microservices or anything else, and certainly
some people make this mistake (or miscalculate whether their project will stay
small and simple).

But if it does get to that point, some of these solutions you mention, like
microservices, can add an independent, roughly constant complexity cost.
Ideally each
microservice itself is a small app with manageable complexity. On the other
hand, a naive, monolithic app will continue to have its essential complexity
grow super-linearly with variables like time, lines of code, number of
developers, etc. The bet is that by introducing the new thing, you can bring
the overall complexity of the entire system down.

So my short answer would be "no": in many large projects, essential
complexity is growing faster than accidental complexity. This has led to a
proliferation of tools to help bring it under control, and yes, if you look at
these tools in isolation they are pretty complex.

(And there are hundreds of billions of dollars worth of companies using these
tools, if it is all just a waste of time then there's a massive business
opportunity waiting for someone to undercut them)

------
mpweiher
Yes.

Sort of.

One of the reasons for this is actually the success of both OOP and COTS/FOTS
in providing us with workable reuse (just as Brooks predicted, by the way). Just
about every GUI program reuses huge libraries of graphics and UI code, usually
provided by the OS vendor. Every Rails app reuses significant functionality.
We scoff at the huge dependency lists of programs, yet in some sense this is a
sign of success: we no longer _have_ to rewrite all of this functionality from
scratch like we used to.

However, we now have to glue all of this stuff together. With glue code. Lots
and lots of glue code. Which is by definition accidental complexity. And which
appears to be growing non-linearly.

So as is usually the case, our success in the last iteration (OOP/COTS) gets
us to a new stage, and at this stage we face new problems that are a result of
our previous success. In this case, I'd venture that the problem is that we
don't really have a good approach to gluing these pieces together. Yes, we can
do it, but we don't know how to do it well.

I think John Hughes in _Why Functional Programming Matters_ [1][2] hit the
nail on the head when he said that we need new kinds of glue, and where FP has
been successful I think it is largely because it provides one new type of
glue: function composition. (He says two, but whether pervasive lazy
evaluation is actually a good kind of glue is at best controversial).
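
That kind of glue is easy to make concrete. A tiny sketch in Python (the idea
is language-agnostic; `compose` and the helper functions are invented for
illustration):

```python
# Function composition as glue: small pure pieces, wired together
# with no intermediate plumbing code.
from functools import reduce

def compose(*fns):
    # Right-to-left composition: compose(f, g, h)(x) == f(g(h(x))).
    return lambda x: reduce(lambda acc, f: f(acc), reversed(fns), x)

# Each part is independently reusable; the only "glue" is compose itself.
word_count = compose(len, str.split, str.strip)

print(word_count("  why functional programming matters  "))  # → 4
```

The contrast Hughes draws is with imperative glue, where each joint between
components is bespoke code that accumulates as accidental complexity.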

 _Architectural Mismatch: Why Reuse is [still] so hard_ [3](1995)[4](2009)
shows the general problem, while _Programs = Data + Algorithms + Architecture:
Consequences for Interactive Software Engineering_ [5] shows that the problem
is particularly pernicious in GUI programs, which are notoriously tricky.

For me, the solution is to make it possible to (a) use different types of glue
within a program, (b) define your own kinds of 1st class glue and (c) adapt
existing glue to your needs. I've built a language to enable this[6] and have
been applying the ideas both in research and practice. So far it appears to be
working.

[1]
[http://www.cse.chalmers.se/~rjmh/Papers/whyfp.html](http://www.cse.chalmers.se/~rjmh/Papers/whyfp.html)

[2] [https://www.infoq.com/interviews/john-hughes-fp/](https://www.infoq.com/interviews/john-hughes-fp/)

[3]
[http://www.cs.cmu.edu/afs/cs.cmu.edu/project/able/www/paper_...](http://www.cs.cmu.edu/afs/cs.cmu.edu/project/able/www/paper_abstracts/archmismatch-icse17.html)

[4]
[http://repository.upenn.edu/cgi/viewcontent.cgi?article=1074...](http://repository.upenn.edu/cgi/viewcontent.cgi?article=1074&context=library_papers)

[5] [https://www.semanticscholar.org/paper/Programs-Data-Algorith...](https://www.semanticscholar.org/paper/Programs-Data-Algorithms-Architecture-Consequences-Chatty/ffa0b7c7182fd7f0577d774a4778a950abfec1cb)

[6] [http://objective.st](http://objective.st)

------
nstart
Context intro to my answer: I come from a service economy. Much of what I've
seen is from companies that go out there, get clients from various types of
businesses - construction to finance - and create custom solutions for them
that are then maintained by the same service companies. I've also seen a few
product companies and I have worked with one for a year.

---

After reflecting on this question, I feel like a good way to think about it
might be this:

Companies are hitting extreme points right now. On the one hand you have
companies that have stuck to their way of working for years. It worked, and in
the process of getting customers and making the money come in, they never
really thought to upgrade the company processes. After all, back in the day,
tech moved at a much slower pace and that's what they are used to. There's
been little to no incremental improvement in the technology or even the
practices that have been used throughout the company's life. These types of
companies still use FoxPro for ERPs, or much older Java enterprise tech
stacks (and practices) for their server-side stuff. I'm not saying that's
bad, mind you. The ERP company has been around for over 25 years.

On the other hand you have the fresh companies started by people who really
don't want to enter the juggernauts of the market. The establishment. Guided
by the excitement that reaches us from Facebook listicles and Techcrunch, they
want to ride their own paths and build a fresh future. They want to go in and
show businesses that the establishment is giving them "boring" and that they
can do it better. I've met with lots of these people as well. Generally
they'll start explaining their business by starting from their tech stack. "We
are an SME ERP business using react and a full JS based stack". That is not an
exaggeration. That is near verbatim.

In between these companies you have the graduates who need to pick a side to
go work in. Those entering the establishment want to make their mark. Their
impact. They walk into a company that is using SVN for source code
management, and they groan. They see Java being used and they say "why not
NodeJS?". And what I've seen happen is that they run into the people of the
establishment who have no interest in upgrading. Instead of having mentors
who work with them towards incremental improvements, where each improvement
is justified by developer productivity and by benefits to the customer, they
hear "what we have is good enough. Forget it". Or the more common "too much
work. Don't bother". As a personal note, the latter really bugs me. Of course
it's too much work. You've set it up to be too much work.
But, what happens then is that some people will become the establishment,
while the others will bide their time waiting for a greenfield project to come
and for them to be given a PM role (mind you I see PM roles being handed out
to people with almost no technical knowledge, little ability to evaluate tech
or specs, after being in a company for 1.5 years). And when they get it,
"LET'S DO NODE JS!!!!".

Oh and what of those graduates who left college to join the "entrepreneurs"?
They too have gone through university being taught web app development using
"ASP MVC" and "Microsoft SQL" and they long to be let out into the world to
play with the tech that they hear their peers are working with around the
world. Admittedly, the Open source world of react and angular and what not is
super exciting in terms of pace of announcements these days. And then they
join their peers and everyone gets to feel excited that they are working with
new things because they believe new must mean better.

---

Ultimately the answer is yes. Accidental complexity is growing faster than
essential complexity. Business practices change much slower than tech in
today's world. I also feel like understanding the reasons behind it is worth
pondering on.

For me, I feel that a lack of mentorship has a lot to do with things. There
are far too few Uncle Bobs - veterans of the software industry who've kept
pace with the change and understand things with a deep historical context -
who lead architectural decisions and project management at a company. It
doesn't even have to be Uncle Bob level veterans. But my observation of
companies from where I am shows me very clearly that after 3-5 years of being
in the industry, the deeper level constant learning vanishes. Which means that
the older employees and the relatively new employees have stopped growing and
the mentorship they provide is based on a tiny amount of work that they've
done at some point in their life.

So to recap, we have people who've come out into the industry, fascinated by
the flashy stuff (which is fine! That kind of youthful enthusiasm is also
needed in the ecosystem). These people quickly make their way up to the
management level within a couple of years, and then get to push the flashy
stuff, and it doesn't go beyond there. Understanding the balance between tech
decisions and what's required to solve the problem efficiently for the
customer doesn't happen. And then these people guide the next generation which
multiplies the effect.

From my POV and from where I live, this is the service industry today.
Accidental complexity is a growing beast heading towards an exponential curve.

------
trowaway8u6
I get your drift, mostly, but I'd say accidental complexity comes from forces
more powerful than technical geekery: money.

I work at a fairly well known startup that has raised many tens of millions
from VCs. Having raised this money we are now fully expected to hit quarterly
numbers. In this pursuit we often end up adding random shit to the product or
doing extra non-core shit to get a deal done. Product roadmap be damned.

Unless you have a principled and powerful (dictatorial, even) leader who can
say "no, we should stick to our vision rather than introduce new tech debt"
and enforce it through product and engineering, the accidental complexity will
continue to pile on as selfish quota-carriers eat up all resources available
to them.

