
Yes Silver Bullet (2019) - wheresvic3
https://blog.ploeh.dk/2019/07/01/yes-silver-bullet/
======
pron
Here's the problem, though. Brooks made a certain observation about accidental
vs. essential complexity; the author of this post makes another. Brooks made a
certain prediction based on his observation (diminishing returns in impact to
productivity), and many believers in the power of programming languages found
his prediction to be too pessimistic at the time (he lists their objections in
"No Silver Bullet, Refired"). Only it turned out that his prediction was
correct, except that it was too _optimistic_. So anyone who wants to claim that
his observation was wrong would need to explain why his prediction turned out
to be true, while the predictions of those who believed he was wrong turned out
false.

In addition, I agree with the author that the biggest contributions to
productivity we have seen were, in this order, the availability of open-source
libraries and online forums, automated testing, and garbage collection -- all
of which have been adopted at rates we'd expect from adaptive traits in an
environment with selective pressure. Conspicuously missing are linguistic
features, also in line with Brooks's observation. And yet the
author still claims that linguistic features are the silver bullet. At this
point this qualifies as an extraordinary claim that requires extraordinary
evidence, but I would settle for ordinary evidence, which is also lacking --
strange, considering that a silver bullet, i.e. something with an extreme
bottom-line impact, should be easily observable.

~~~
Certhas
In your first paragraph you say that Brooks was right in his prediction. In
your second you say you agree with the author that "open-source libraries and
online forums, automated testing, and garbage collection" were the biggest
gains. But those first two were the counter-examples to Brooks according to
the author.

So where do you actually disagree with the author? You don't think the gains
from these were substantial enough?

Also the author makes a clear argument for why functional programming is, from
a complexity perspective, something entirely different from mere "linguistic
features". And a "silver bullet" according to Brooks is something that might
take a decade to show full impact.

~~~
pron
> But those first two were the counter-examples to Brooks according to the
> author.

They are not.

> So where do you actually disagree with the author? You don't think the gains
> from these were substantial enough?

I think that 1. the collective gains might have been substantial but not 10x,
and 2. the very ideas proposed by the author as being a silver bullet are not
even among those in the first group.

> Also the author makes a clear argument for why functional programming is,
> from a complexity perspective, something entirely different from mere
> "linguistic features". And a "silver bullet" according to Brooks is
> something that might take a decade to show full impact.

When I was in university, Haskell was all the rage. People were saying that
within a few years we'd all be programming in it or in a similar language. That
was in 1998 or 1999. Functional programming in general has been well known, and
taught, for many decades. I'm not saying it's particularly bad, and maybe it
could even have a small positive effect, but saying it's a 10x silver bullet
at this point sounds delusional. If Brooks was wrong and there is a silver
bullet, I very much doubt it's something we've known about, and tried again
and again, in various forms, for decades.

------
benjaminjosephw
This is a really interesting take on the Silver Bullet argument: Brooks is
right that big gains in productivity come only from reducing accidental
complexity, but he was wrong about how much accidental complexity there still
is to reduce.

I think most Software Engineers probably suffer from some cognitive bias in
trying to estimate exactly how much accidental complexity exists that could be
eventually removed. We tend to think about incremental improvements to tools
and processes rather than thinking at a higher level about improving the
overall process of translating business requirements into working software.
Even with much better tooling and languages (e.g. F#, Git, AWS) there's
still a lot of fat that could be trimmed.

I'm excited to see what the world of low-code will do to our current
assumptions about how much accidental complexity there actually is. Maybe
projects like Dark[1] could actually achieve the order-of-magnitude gains that
Brooks was convinced wouldn't be possible. Sure, there's no panacea, but maybe
we're round the corner from a genuine "Silver Bullet" in the sense that Brooks
actually meant.

[1] - [https://medium.com/darklang/the-design-of-dark-59f5d38e52d2](https://medium.com/darklang/the-design-of-dark-59f5d38e52d2)

~~~
chrisweekly
Along the lines of your "no code" reference, I'm enamored of the potential for
well-architected and -implemented design systems and their component libraries
to virtually eliminate the need for non-experts to write CSS in constructing
good, maintainable UI. That's a profoundly impactful change vs the status quo.

------
bcrosby95
I'm in Brooks' camp. I see mostly essential complexity. There are problems
we've been pounding on for a decade that have solutions that are mostly
essential complexity. Then people come in and create pointless requirements
that turn that fairly simple essential complexity... into something extremely
complex.

I think the modern problem isn't accidental complexity, it's essential
complexity that isn't actually necessary for project success.

~~~
coldtea
> _There are problems we've been pounding on for a decade that have solutions
> that are mostly essential complexity. Then people come in and create
> pointless requirements that turn that fairly simple essential complexity...
> extremely complex. I think the modern problem isn't accidental complexity,
> it's essential complexity that isn't actually necessary for project
> success._

That would be a third category of complexity:

Essential complexity

Accidental complexity

and a new one: Complexity-imposed-as-"essential"-by-marketing-customer-etc.

Let's call it Requirements-Complexity.

~~~
wool_gather
In the context of any given project, that's, if anything, a subcategory of
Essential. The software is a tool to do something, and the functional
requirements are what define the purpose of embarking on the project. You
can't escape the business logic that needs to be enabled. It's like the old
service industry joke "this job would be great...if it weren't for all the
customers". There would _be no software_ if there weren't a set of
requirements.

~~~
coldtea
> _In the context of any given project, that's, if anything, a subcategory of
> Essential. The software is a tool to do something, and the functional
> requirements are what define the purpose of embarking on the project._

It's not that clear cut.

For me "Essential" is what captures the essence of the problem domain and the
requirements in actual use.

On top of that, there are lots of "requirements" that are there due to bribes,
idiotic "brainstorming" where everyone felt obliged to chime in, what the
CEO's spouse thinks should certainly be there, recent fads, over-thinking from
some execs, etc.

The distinction between the "essential" and the merely "required" is of utmost
importance.

First, because this "required but not essential" area is where an engineer
can look for tradeoffs (to the benefit of essential areas).

Second, because those items might hamper the essential functionality. A
requirement to build a website in C, because the CEO heard "it's the fastest
language", can hurt security, maintainability and other aspects; a requirement
to give a banking webpage small fancy grey-on-white fonts, because someone
thought the design was "cool", hurts readability.

Third, because engineers should not just operate on a "follow orders" basis.
They also have an ethical responsibility towards the final users (society at
large for public software, customers, the customer's employees, etc).

And, of course, because not being able to make that distinction means that the
engineer can't tell good from bad, useful from useless, and it's all the same
to them. Whether they can convince the customer is another thing. But engineers
should absolutely be able to make the distinction.

> _There would be no software if there weren't a set of requirements._

Which is neither here nor there. There would also be a hell of a lot of better,
faster, more reliable software if bogus requirements were pointed out and
removed.

------
goto11
Brooks defines a "silver bullet" as a single technology or tool which yields a
10 times increase in productivity across the board. In other words, a "silver
bullet" have to eliminate _90%_ of all the work a developer does during a
workday.

But a typical developer is not even coding half of the time! They are
discussing requirements with the product owner, reading specifications, writing
specifications, thinking, surfing Jira, etc. A _lot_ of the work of a developer
is to take vague requirements and transform them into
unambiguous language. These are effects of the inherent complexity of software
development. And then we have stuff like researching and evaluating what
framework or library is the best fit for a given task.

Haskell is cool and all, but it will not eliminate 90% of what a developer
does in a day. No single programming language, however perfect, will.

------
nayuki
The section "Accidental complexity abounds" and the diagram is reminiscent of
a Twitter thread last month:

[https://twitter.com/gravislizard/status/927593460642615296](https://twitter.com/gravislizard/status/927593460642615296)
"almost everything on computers is perceptually slower than it was in 1983" ;
[https://news.ycombinator.com/item?id=21831931](https://news.ycombinator.com/item?id=21831931)

~~~
kbenson
I disliked that twitter thread. It's rife with rose-colored views of the past,
and misunderstanding of what they are referring to. The two main ones are a)
library computers weren't always fast, and they definitely weren't checking as
much data. We're a few orders of magnitude higher in what things are
searching, and Amazon is _still_ really fast, and b) Google Maps is a very
specific use case of mapping, not analogous to the general-purpose maps you
would get previously. There are plenty of online trip planning services that do most
or all of the interesting stuff that was mentioned. I've used a few of them.

The whole thread is analogous to a person who grew up on a farm complaining
that their Honda Civic doesn't deal with rough terrain as well as the tractor
on the farm they grew up on did, to which the only response is either _duh_ or
_no shit, Sherlock_, depending on how charitable you're feeling.

------
haddr
I think Brooks is still right, and what the author mentions is not one thing
but a set of practices that became proven enough to become almost a standard
approach in software development. And they provide some productivity gains if
we compare with how software was built 50 years ago. Yes, the difference is
big, but we forget that all those things were introduced incrementally over the
course of decades. So yes, together they make a huge difference, but they
didn't appear in one day.

~~~
rcthompson
Git and Mercurial were both originally developed over the course of a month or
so. If DVCS really results in an order-of-magnitude improvement (which I do
find credible), then I think it qualifies as a silver bullet by Brooks's
definition.

~~~
OrangeMango
Of all Git users, only an extremely small percentage use Git as a DVCS (Linux
kernel, etc). GitHub is a centralized VCS. In fact, I think you could argue
that the emergence of GitHub is a demonstration that DVCS is nearly useless
for the vast majority of software development.

~~~
anticodon
I think you don't understand what makes centralized VCS different from
distributed. GitHub is just another clone of the repo.

For example, I can commit in my Git clone, create a new branch, merge a branch,
and push it to the server, all without the GitHub server being available. If
GitHub went down or shut down tomorrow, it would not significantly affect my
work on the repository.

This wasn't possible using centralized VCSs.

~~~
OrangeMango
You can do all of this because Git repos are self-contained complete
historical records.

You could do this with a traditional VCS as long as you took frequent backups.
Git is an improvement in this regard, but not a revolution.

Furthermore, you are describing a workflow that is inconsistent with many
"industry best practice" recommendations. If GitHub went down, a very large
number of Git users would not be able to run tests or deploy their code to
production - their CI/CD pipelines don't work without GitHub. Their historical
record of issues goes away when GitHub goes down, etc.

~~~
anticodon
With a traditional VCS, I would lose all the history and the ability to work
(to commit my changes, for example) if the server went down.

And I understand that issues and CI/CD would also stop working, but neither of
those is part of the VCS itself. I'm not aware of distributed issue trackers
(maybe Fossil) or distributed CI/CD.

Still it doesn't mean that GitHub makes Git centralized.

------
mikece
The difference between accidental and essential complexity cannot be
overstated. Removing accidental complexity, even if it flies in the
face of "best practices," can be very powerful and appropriate as long as it's
understood where you are intentionally coloring outside the lines and the
conditions under which you'll need to revert to best practices for scaling up.

An example: I attended a conference presentation in which the presenter
discussed dissecting the implementation of Identity Server into a dozen sub-
services with half a dozen backing data stores with Redis caching layers and
an Event Source model for propagating inserts and updates of the underlying
data. This would be a prime example of accidental complexity gone wild if you
built this just to have a multi-container microservice -- unless your single
sign-on service as a whole needs to support 100MM+ users, in which case this is
_essential_ complexity, not accidental.

Reducing accidental complexity but being mindful of how it could become
essential complexity under certain conditions in the future is the mark of a
wise architect, IMO.

------
Koshkin
The progress in tooling and methodology is paralleled by the ever-increasing
complexity of software and the problems it solves. It's a zero-sum game. Still
no silver bullet.

~~~
rcthompson
If a 10x improvement in productivity is paralleled by a 10x increase in
expectations for the complexity of problems we can solve, that's not a zero-
sum game, that's a silver bullet.

------
mkarliner
I think the points about accidental vs essential complexity are well made.
However...

The title/s are about Silver Bullets, and I strongly do not believe in them.

I've been programming for an embarrassing number of years, and I've seen
methodologies come and go. Increasing productivity is about developing a good
team culture with a small number of simple tools, not about following the
latest Silver Bullet.

In fact, it's very easy to increase the level of accidental complexity by
adopting an overly prescriptive or complex tool/methodology (Jira anyone?).

~~~
davedx
Right there with you! I've also been programming a while now, and the biggest
improvements to productivity, code quality, and overall success are indeed
building a good team and keeping things simple. That hasn't changed since the
80's/90's when I first started out.

It's also true that overly prescriptive tooling can be such a pain. This also
includes automated testing in my opinion (something the author counts as a big
advance). It's fantastic for certain classes of software code and terrible for
others. I've seen half my team spend multiple days just fixing integration or
e2e tests instead of working on features that add value. You can grind to a
halt with this stuff just as much as you can grind to a halt because of so
many bugs to fix...

You have to find the balance, and I think that's where senior engineers really
add value -- they've been round the block a few times and have a better idea
where that balance lies.

------
daotoad
The only way we'll see real 10x improvements take hold is for our discipline
to mature. As long as we are experiencing high rates of growth, where all it
takes is 3-5 years to become "senior", while the voices of the real senior
developers are swamped by hordes of newer devs, we will wallow in inefficiency
and fail to improve much.

Most of the accidental complexity I see day to day comes from slow
accumulation of errors in craft. Need to validate that an ID is at least
plausibly valid? "I'll just match it with a regex here" -- and in 27 other
places in the code, each one added in a different sprint. Little things like
referring to data that is passed into an API with the variable name "json"
when it is really a data structure that was parsed from the body of an API
request. Is this thing I am manipulating a "tab", a "filter", a "type
selector" or a "group"? Methods and variables use a mix of conflicting names
for the same concept.

Until we learn, as a profession, that shipping working code is not enough,
that our code must clearly, concisely, and consistently communicate the core
concepts embodied in the program, accidental complexity will bury us.

------
JackRabbitSlim
I would argue that a good chunk of programming tools don't reduce complexity at
all. They just shift it to well-understood "accidental" complexity.

Your program needs a way to store and retrieve key:value pairs. It's an
inherent complexity your program requires. You solve it by using an industry
standard product like Redis. Now your code doesn't contain any key:value logic
itself! Complexity avoided, right? Nope, just shifted, and doubled or tripled
in size to boot. No KV logic, but now you have to have db libraries, parameter
parsers and sanitizers, sockets between services, etc, etc, so on, and so
forth.
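The shift is easy to see side by side. A sketch: the in-process half is real,
while put_via_redis is deliberately schematic (the client object and its set
call stand in for a real Redis client, they are not a specific library's API):

```python
# The in-process version: the key:value logic your program actually
# needs is a handful of lines you own and can read in one sitting.
class InProcessKV:
    def __init__(self):
        self._data = {}

    def put(self, key: str, value: str) -> None:
        self._data[key] = value

    def get(self, key: str, default=None):
        return self._data.get(key, default)

# The outsourced version doesn't delete that complexity, it relocates
# it: the same "put" now implies a client library, a socket that can
# time out, values serialized to bytes, retries, and a second process
# to deploy and patch. None of this appears in *your* diff.
def put_via_redis(client, key: str, value: str) -> None:
    # "client" stands in for a real Redis client object; note that the
    # value must already be turned into bytes before it crosses the wire.
    client.set(key.encode("utf-8"), value.encode("utf-8"))
```

The second function looks just as short, which is exactly the point: the
complexity moved out of the code you review and into the system you operate.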

It's well-trod ground; your IDE will generate 90% of _that_ type of code for
you. You throw a library or template or whatever at it and go about your day.

Software isn't just _your_ code. Now your application that needed K:V storage
has the entire bug and exploit surface area of your code _and_ Redis. Do this
20, 30, 100 times with other libraries and packages. Each time _your_ code
gets a little smaller, a little slicker looking and the actual running
software is an unholy monster spread across an entire server cluster. Oh well,
_your_ code is clean and tidy.

------
rsclient
What I dislike about all Haskell examples on every blog is that the writer
happily assumes that I'm familiar enough with Haskell syntax to understand the
code.

In the ordinary method, there's a function name, it takes one input, and
returns a result; easy-peasy. Yes, what the method does isn't clear, but at
least I know the method's name!

Now let's look at the Haskell function: tryAccept :: Int -> Reservation ->
MaybeT ReservationsProgram Int

I've read enough of these Haskell enthusiast blogs to know that the name of the
function is tryAccept (it helps that it's the same as the C# method), and it
takes in two parameters, an int and a Reservation, and returns a Maybe of a
ReservationProgram. And it's somehow an int, so it's like an enum? That's
weird; clearly I don't understand Haskell syntax.

And then later, I'm told that it's simple to figure out what a
ReservationsProgram is because there's some enum that doesn't include the word
"ReservationProgram" in it.

No offence, but this isn't enticing me to Haskell.
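For readers similarly puzzled: the signature reads left to right through its
arrows, and the trailing Int is the success result type, not an enum. A rough
Python analogue (the Reservation stand-in, the capacity/ID interpretation, and
the body are assumptions for illustration, not the blog's actual code):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Reservation:
    quantity: int  # minimal stand-in for the blog's Reservation type

# Rough analogue of the quoted Haskell signature:
#   tryAccept :: Int -> Reservation -> MaybeT ReservationsProgram Int
# Read as: given an Int and a Reservation, produce a computation that
# may yield an Int (e.g. a newly assigned reservation ID) or nothing.
# The body below is invented purely to make the shape concrete.
def try_accept(capacity: int, reservation: Reservation) -> Optional[int]:
    if reservation.quantity <= capacity:
        return 1  # stand-in for a new reservation ID
    return None   # the "Maybe" failure case
```

The MaybeT wrapper adds a layer Python can't express directly (the computation
is a description to be interpreted later), but the may-fail-with-an-Int shape
is the part the signature is communicating.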

------
skywhopper
So, the author here is arguing that, between ready reference/help on the web
and TDD, programmers today can be _100x_ more productive than they were in the
80s? Add a dash of strongly-typed functional programming and we can be 1000x
more productive?

I agree that these tools can be productivity enhancers, but even a factor of
10x would be a huge claim, much less 1000x.

~~~
quonn
Especially if we break this down to working days. There are just about 200-250
per year. Could you really build software that used to take 4 years to develop
in a single day now? Even 100x would mean a year's work in three days.

------
ncmncm
It's a shame that the author blows his credibility asserting that garbage
collection reduces accidental complexity, because he is right that the amount
of the latter really has been much larger than Brooks estimated.

If you have ever watched the hoops somebody jumped through to plug a memory
leak in a Java program, you will be forced to agree.

A fundamentally very similar process appears as a functional programmer
unravels a program to fix a performance bottleneck. The reason it is similar
is that, just as ballooning memory usage indicates a memory leak, a ballooning
runtime indicates a time leak.

And, just as GC languages provide inadequate facilities to manage resource
use, functional languages provide minimal facilities to manage time use.

You could argue that memory and time management are on the accidental side of
the ledger, and while the scientist in us would be tempted to agree, the
engineer knows that managing limited resources is the essence of engineering,
and taking away the tools needed to manage resources makes resource problems
balloon out of control, breeding extremes of accidental complexity.

~~~
commandlinefan
> If you have ever watched the hoops somebody jumped through to plug a memory
> leak in a Java program, you will be forced to agree.

In my observation, a lot of “improvements” in software development are of this
kind: they make it quicker and easier to get started, but make troubleshooting
a lot more complex specifically because they’ve hidden what’s actually going
on in an attempt to simplify things. Not that managed memory is a bad thing!
But ideally it would be wielded by people who’d spent a fair amount of time
manually managing memory so that they know what to look for.

~~~
ncmncm
Managed memory is more of a superfluous thing. When you have resource
management, memory is among the easier to manage, and GC looks like a solution
without a problem.

------
pjmorris
This might be just me, but Brooks explicitly put a decade time tag on his 1986
prediction, complicating the author's comparisons from ~three decades later.
I guess the way to tell would be to have people or teams build examples of
1986 level-of-complexity software and see how long it takes them.

~~~
rcthompson
That's fine for Brooks, but the OP is entirely correct that people still
routinely cite the "no silver bullet" doctrine 3 decades later. So it's still
worth revisiting even if it's technically 2 decades past its expiration date.

~~~
ticmasta
Brooks revisits NSB 20 years after the fact and makes pretty strong arguments
that specific intervening technologies have not disproved his original
conclusions. I think we could do the same today.

The author of this post conflates "better" with a 10x improvement, based on a
few anecdotal experiences that are themselves extremely accidental to the act
of creating software. He may have a point about the large volume of accidental
improvement areas remaining, but I think this is because, like the JS
ecosystem, they are growing faster than we can solve them.

I think the reality is likely that the author is themselves a 10x improvement
over their decade-past self.

~~~
rcthompson
> Brooks revisits NSB 20 years after the fact

Do you have a link to this? I'd like to read it.

~~~
pjmorris
Closest thing I could find was a report [0] on a 2007 panel discussion with
Brooks at OOPSLA, titled “No Silver Bullet” Reloaded – A Retrospective on
“Essence and Accidents of Software Engineering”.

Here's [1] a link to his 1995 update, an excerpt from an anniversary edition
of 'The Mythical Man Month'

[0]
[https://dl.acm.org/doi/pdf/10.1145/1297846.1297973](https://dl.acm.org/doi/pdf/10.1145/1297846.1297973)

[1] [http://worrydream.com/refs/Brooks-NoSilverBullet.pdf](http://worrydream.com/refs/Brooks-NoSilverBullet.pdf)

------
kerkeslager
> Ostensibly in the tradition of Aristotle, Brooks distinguishes between
> essential and accidental complexity. This distinction is central to his
> argument, so it's worth discussing for a minute.

> Software development problems are complex, i.e. made up of many interacting
> sub-problems. Some of that complexity is accidental. This doesn't imply
> randomness or sloppiness, but only that the complexity isn't inherent to the
> problem; that it's only the result of our (human) failure to achieve
> perfection.

> If you imagine that you could whittle away all the accidental complexity,
> you'd ultimately reach a point where, in the words of Saint Exupéry, there
> is nothing more to remove. What's left is the essential complexity.

Okay, that might be a useful way of thinking of things...

But the author then goes on to talk about how he thinks that Fred Brooks
underestimated the percentage of accidental complexity in the average project.

However, he then starts to go into things he thinks are "silver bullets", but
most of them are tools that, in my opinion, address _essential complexity_ :

* The World Wide Web: in his description, this solves a problem of the complexity of finding how to do things. Can we imagine a solution where finding out how to do things isn't part of the system? No? Under the definition, that's essential complexity.

* Git: in his description this solves the problem of tracking changing source. Can we imagine programming where we don't need to keep track of what source changes happened? I can, maybe, in a distant future where programming is done an entirely different way, but I'd argue that at that point it's not really the same problem. If we're writing code to solve a problem, then tracking changes to that source is essential complexity within that solution.

* Garbage collection: again, maybe there will be a computer that works completely differently (quantum computers?) but under current architecture, the physical machinery of a computer has limited space, so we need to reallocate memory that is no longer being used. That's not accidental complexity, that's essential complexity. There are other solutions to this, but every program solves it in some way, whether it's by reference counting, manual allocation, borrow checking, preallocating everything, or just allocating everything on the stack and copying everything, and each of these has complexity: that complexity is essential.

I think what this all points to is that _essential complexity isn't
intractable: you can make it someone else's problem._ In these cases, we're
taking some part of the essential complexity of solving a problem, and letting
our ISP and websites solve it, or letting code that we don't maintain solve
it. Some of _the most powerful tools are ones that solve a form of complexity
that is essential to a wide variety of problems._

