
No Silver Bullet (1986) [pdf] - tosh
http://worrydream.com/refs/Brooks-NoSilverBullet.pdf
======
aedron
The essay is remarkably prescient.

Brooks dared to predict that thirty years later, software would still be
created by programmers sitting in front of editors, painstakingly typing up
code branches for all of the scenarios a given program is supposed to handle.
And their output would still mostly suck, because of the infinity of possible
states, largely due to variables and synchronicity. Complexity that cannot be
abstracted away.

Given how starry-eyed we usually are about even the near future, that is bold.

The web, language advances, tooling - these are quality-of-life improvements.
They don't 'solve' the inherent complexity of software development, the way it
was promised by CASE tools (Brooks' likely target with his essay) and
countless other 'business oriented' approaches. In fact those approaches have
failed so many times that they have been largely abandoned, so in recent
times, Brooks' essay might seem superfluous.

The one advance that might finally challenge the 'no silver bullet' rule is
machine learning. Not yet, given that it is still an esoteric tool for a
specialized class of problems, as part of traditional software systems. But
with increasing computing power, I can imagine a future where machine learning
can be set to work on broader tasks and start to look like magic self-directed
software development.

~~~
breck
> Brooks dared to predict

I love this. It's so rare nowadays to see scientists make bold predictions.

Disclosure: I work on Tree Notation
([https://treenotation.org/](https://treenotation.org/)).

Two years ago, in 2017, I predicted that Tree Notation would be a silver
bullet: by 2027 we will have a 10x improvement in reliability, productivity,
and simplicity, thanks to the Tree Notation ecosystem.

Two years and thousands of experiments and conversations later, I'm almost
positive that will happen.

~~~
lidHanteyk
You should make a Long Bet for it.

Every lambda calculus, including yours, corresponds to a Cartesian closed
category. Your system is not at all as new as you think; what you are actually
inventing are new ways to write down parsers, akin to the many other ways that
folks are trying to write down programs in JSON or XML or YAML.

You might think that the only advantage you've got at this point is a 2D
addressing scheme, but E had that in the late 90s:
[http://www.erights.org/elang/kernel/LiteralExpr.html](http://www.erights.org/elang/kernel/LiteralExpr.html)

~~~
breck
> You should make a Long Bet for it.

Second time I've heard this suggestion
([https://news.ycombinator.com/item?id=20504193](https://news.ycombinator.com/item?id=20504193)),
thanks! Done. [http://longbets.org/793/](http://longbets.org/793/) - “In
2027, at least 7 out of the top 10 TIOBE languages will be Tree Languages
AND/OR 0 out of the top 10 languages on 2017's list will be in the Top 10 in
2027.”

> Your system is not at all as new as you think

I agree. It's mildly new. Tiny tiny little details that make a big difference
in practice. To get here I had to stand on the shoulders of giants: I built
the largest DB of computer languages in the world (over 10k languages with
over 1k columns).

> but E had that in the late 90s

Great reference, thanks. E and Kernel-E are great, but the devil is in the
details. There is a tremendous number of tiny little details in Tree Notation
and Tree Language that compound to make this something new. You can't take
anything out of Tree Notation without losing something important. Everything
else you can add via a Tree Language. That makes this closer to binary, imo.
There are at least a dozen things that could be taken out of Kernel-E, which
itself is a subset of E.
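For readers unfamiliar with it, the core idea is that each line is a node whose first word is its head, with children indicated purely by indentation. A minimal sketch of that parsing scheme, as a toy, not the official Tree Notation library (the sample document below is made up):

```python
# Toy parser for an indentation-based tree notation: one space of
# indentation per nesting level, first word of each line is the head.
# Illustrative only -- NOT the official Tree Notation implementation.

def parse(text):
    """Parse indentation-structured lines into nested (line, children) nodes."""
    root = ("", [])
    stack = [(-1, root)]               # (indent, node) pairs
    for raw in text.splitlines():
        if not raw.strip():
            continue                   # skip blank lines
        indent = len(raw) - len(raw.lstrip(" "))
        node = (raw.strip(), [])
        while stack[-1][0] >= indent:  # pop back up to this node's parent
            stack.pop()
        stack[-1][1][1].append(node)   # attach to parent's children list
        stack.append((indent, node))
    return root[1]

doc = """package flow
 version 2
 description A toy document"""
tree = parse(doc)
# tree[0] is ("package flow", [("version 2", []), ("description A toy document", [])])
```

The striking thing is how little syntax there is: no brackets, quotes, or escape characters, which is presumably what the comparison to binary is getting at.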

~~~
heavenlyblue
What exactly makes Tree Language any different from the context-sensitive
encoding of code in binary?

~~~
breck
Abstraction. Scope. (also, usability?)

Interesting question though. Can you elaborate?

------
marktangotango
I was at a Big Dumb Corp in 2005 or so on a client dedicated team. Our group
manager decided to have everyone on the team take turns leading the weekly
staff meeting. Mostly because he was lazy and the meeting was a particular
waste of time. When it was my turn I printed off copies of this essay for
everyone, handed them out, spoke a few sentences suggesting everyone read it.
Then ended the meeting. That did not go over well at all with my manager.

~~~
lliamander
How did it go over with your coworkers?

------
DonaldFisk
I wrote a short essay on this four years ago, here:
[http://www.fmjlang.co.uk/blog/NoSilverBullet.html](http://www.fmjlang.co.uk/blog/NoSilverBullet.html)

The gist of it is that although Brooks was correct when he wrote No Silver
Bullet, since then there's been an enormous increase in accidental complexity
and, if that is recognized and removed, an order of magnitude improvement is
now possible.

~~~
pbiggar
That's also the pitch for Dark ([https://darklang.com](https://darklang.com))!
Great minds think alike

~~~
dkersten
I still can't figure out what Dark is actually about. You seem to focus more
on how fast it is to deploy than on what it is, how it works, how you use it,
what it looks like, or how it's different from previous attempts (e.g. Eve). I'm a
bit confused by this since deployment really has never been a particularly
large pain point for me (setting up the actual infrastructure has always been
a much bigger pain than actually deploying; plus, typically I set deployment up
once with CI and then don't have to think about it again), certainly not
compared to actually creating complex software in the first place. You talk
about why you built Dark, but I can't find anything about what it is you
actually built. Am I missing an important blog post or page or something?

~~~
pbiggar
We've been positioning around the deploy a bit recently because people got
really excited by it, but that's not really what Dark is about.

Dark is about removing all accidental complexity from coding, starting with
backends. That's pretty vague, but that's honestly how it started: this shit
is too hard, so what can we do to remove all the shitty stuff that we do?

Setting up infra is one of the three areas that we think is really shit, and
that's a large part of Dark. You don't have to deal with infra at all
(including DBs, queues, etc); it's all built-in.

Speed of deployment doesn't bother some people, esp those with smaller
projects; on big teams it's a massive problem. People got excited about
deployless in Dark because it seemed like it had a credible vision for how it
could be reduced.

We've been talking about why we built it because we haven't finished building it yet.
We're announcing our move to "private beta" (from "private alpha") on Sep
16th, so that's the kind of stage we're at. We'll be showing how Dark works
then.

------
msandford
This kind of thing reminds me of a phrase from cycling: It never gets easier,
you just go faster.

I feel like that's absolutely how software is. As we have better frameworks
and libraries and everything the job doesn't get easier. But what happens is
that it's now possible to do something with a team of three that might have
taken thirty before.

Just look at Python/django or Ruby/Rails vs doing all that yourself with C and
cgi-bin. How big would the C project be before it started to become difficult
to work on by virtue of its many lines of code?
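To make the contrast concrete, here is a toy sketch of the kind of thing a modern high-level standard library gives you essentially for free (the route and handler names here are made up for illustration; this is not a claim about any particular framework):

```python
# One dynamic route in pure-stdlib Python. The cgi-bin-era C equivalent
# would hand-roll request parsing, string formatting, buffer sizing, and
# memory management for each of these lines.
from http.server import BaseHTTPRequestHandler, HTTPServer

def render(path: str) -> bytes:
    """Build the response body for a request path."""
    return f"Hello from {path}".encode()

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = render(self.path)
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# To actually serve requests:
#   HTTPServer(("127.0.0.1", 8000), Handler).serve_forever()
```

None of this removes essential complexity, which is the point of the phrase: the hill is the same, you just climb it faster.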

I'm not saying these new frameworks solve any kind of difficult theoretical
problems by the way. And it's still work to make a django or Rails site. But
there are many startups that got going in the last decade with just a few
people that during the dotcom bubble might have taken 30, 50, even 100 folks
to try and turn into reality.

~~~
segmondy
It does get easier too. Better frameworks and libraries have made the job so
much easier! The challenge I see is that people don't know how to code. They
know the keywords of the language, they know programming constructs, design
patterns, algorithms, frameworks, but they don't know how to program. You can
know all the ingredients, and you might even know the recipe, but that doesn't make
you a good chef or mean you know how to cook.

Something that I have not seen taught anywhere, not in schools, not at work,
not in a thousand online courses, is "HOW TO PROGRAM". This is why things have
gotten hard for many people: they don't know how to program. But for those
that do know how to program, things have actually gotten much easier.

~~~
ticmasta
>> This is why things have gotten hard for many people, they don't know how to
program

But how does this explain programming over the past 30+ years? You can't blame
code camps for programming not having gotten an order of magnitude easier
since the 90's, or even having gotten harder, as you posit.

And what exactly do you mean by "how to program"? You throw a lot of shade at
the people, institutions, and resources involved but don't shine any light on
this missing piece...

~~~
dragonwriter
> You can't blame code camps as to why programming in the 90's has not gotten
> a magnitude easier

It has. Things that were hard problems then are simple tasks today.

OTOH, if you want to get paid a princely salary, you have to do the things
that are still nontrivial. The stuff that has been trivialized has stopped
being valuable, the things that used to be intractable but have been reduced
to merely challenging have taken their place as worth paying for.

------
dkersten
> "There is no single development, in either technology or management
> technique, which by itself promises even one order-of-magnitude improvement
> within a decade in productivity, in reliability, in simplicity."

I think it's important to remember that the statement is rather specific:

"no single development" and "which by itself" -- but there could be many
developments, which, together, provide the order-of-magnitude improvement.

"order-of-magnitude improvement" -- but there could be smaller improvements.

"within a decade" -- the improvements may take longer.

I'm pointing this out because incremental improvements can and do occur,
there's just no _"silver bullet"_ to fast track the process.

~~~
marcosdumay
> but there could be many developments, which, together, provide the order-of-
> magnitude improvement.

He says very clearly, in the remainder of the text, that this is likely.

> the improvements may take longer.

I imagine the point of posting this on HN nowadays is that in 3 decades there
hasn't been any. Still, something could appear tomorrow, but I imagine it's
very unlikely, for the same reasons that are in the article.

------
mpweiher
In _No Silver Bullet Reloaded_ [1], a 20 year retrospective, Brooks said:

"Of the candidates enumerated in “NSB”, object-oriented programming has made
the biggest change, and it is [unlike almost every other proposed solution] a
real attack on the inherent complexity itself."

And then goes on to say that the most promising approach remains reuse,
particularly of COTS programs.

[1]
[https://www.researchgate.net/publication/221321794_No_silver...](https://www.researchgate.net/publication/221321794_No_silver_bullet_reloaded_retrospective_on_essence_and_accidents_of_software_engineering)

~~~
lliamander
> And then goes on to say that the most promising approach remains reuse,
> particularly of COTS programs.

Indeed, although I think more beneficial than COTS has been the rise of open
source (which could be seen as a variant of COTS). From open source languages
with rich standard libraries to application frameworks to whole applications
(databases, operating systems, etc).

The benefit from open source hasn't just been the lower TCO of 3rd party
software, but also changes in the development process. Perhaps the single
biggest process change has been distributed version control. Not only has Git
significantly reduced the accidental complexity of collaborating on software
projects, it has also become the primary mechanism for distributing open
source code.

------
carapace
Er, I learned Prolog last summer and realized that about half of my
professional career was wasted because I didn't learn it sooner. Dunno if
Prolog counts as a Silver Bullet but those folks have slain a lot of
werewolves.

------
dang
Thread from 2015:
[https://news.ycombinator.com/item?id=10306335](https://news.ycombinator.com/item?id=10306335)

Can it really be that that's the only previous HN discussion of this classic?

~~~
wglb
There are certainly not too many.

Here is one
[https://news.ycombinator.com/item?id=10306335](https://news.ycombinator.com/item?id=10306335)

------
goto11
The elephant in the room: We have no reliable way of measuring and comparing
developer productivity.

------
yanowitz
A great essay. I suggest Out of the Tarpit as a later exploration (20 years
after No Silver Bullet) with fantastic and challenging analysis. If reading
the whole paper is too daunting, check out the summary (and subscribe to his
regular email summaries of interesting papers) at:
[https://blog.acolyer.org/2015/03/20/out-of-the-tar-pit/](https://blog.acolyer.org/2015/03/20/out-of-the-tar-pit/)

~~~
pron
The Tarpit paper, if taken as a response to Brooks, suffers from a similar
problem to the one I discussed here:
[https://news.ycombinator.com/item?id=20829129](https://news.ycombinator.com/item?id=20829129)

It tries to build a theory in an Aristotelian manner, i.e. not based on
careful observation but mostly on rationalization (and maybe very partial,
biased observations). The problem with rationalizations is that they can often
be made to support any claim when the empirical picture isn't clear. An
additional problem in _this_ particular case is that when _No Silver Bullet_
was published, the same kind of people (PL enthusiasts) made roughly the same
arguments, but their predictions proved wrong, whereas Brooks's proved right.
It's not the end of the story, but it does mean that their theory needs, at
the very least, to be revised.

~~~
jcranberry
Brooks's suppositions on _essential_ tasks are not empirical at all. Otherwise
they'd be incidental. And that's what the tarpit paper focuses on.

The tarpit paper is also not a prediction or forecast the way that the No
Silver Bullet paper is. It reparameterizes and expands upon essential and
accidental tasks and proposes a development framework to minimize accidental
tasks.

The company I currently work for uses a code generation framework inspired by
the ideas from the tarpit paper. It's very successful in simplifying and
speeding up the development process.

~~~
pron
> Brooks suppositions on essential tasks are not empirical at all.

It is an empirical claim, one that at least has not been refuted by
observation.

> Otherwise they'd be incidental.

I don't understand this. It's an empirical claim about software in the wild.

> It reparameterizes and expands upon essential and accidental tasks and
> proposes a development framework to minimize accidental tasks.

... and yet, no one has found a silver bullet yet (as per Brooks's
definition), nor anything close to it.

~~~
jcranberry
> It is an empirical claim, one that at least has not been refuted by
> observation.

There are no sources or pieces of evidence cited in the section on what the
essential tasks are. If it's an empirical claim, then the any claim made in
the tarpit paper is certainly equally empirical.

> ... and yet, no one has found a silver bullet yet (as per Brooks's
> definition), nor anything close to it.

There was no such claim.

~~~
pron
> There was no such claim.

Then what does it have to do with No Silver Bullet? Brooks's point isn't that
you can't make languages that some people may find more attractive, but that
you can't drastically reduce complexity.

> There are no sources or pieces of evidence cited in the section on what the
> essential tasks are.

Right, that's why it's a claim. But it comes with an empirical prediction that
was later verified by observation.

> If it's an empirical claim, then the any claim made in the tarpit paper is
> certainly equally empirical.

Of course it is, but if we take it as a silver-bullet claim (i.e. the ability
to drastically cut down complexity), then it just doesn't fit with
observation.

~~~
jcranberry
I'm beginning to get skeptical about whether you've actually read the paper.
_It is not making a silver bullet claim or denying the forecast originally
present in No Silver Bullet._ If it were, it would be asserting some way to significantly
reduce or completely remove essential complexity. It instead attempts to find
a minimal definition of essential complexity, and proposes a method to
mitigate the cost of non-essential complexity. _It is supplementary to No
Silver Bullet, not a refutation of it._

Not to mention, what observations are you talking about? Because I'm not too
aware of any tarpit-inspired languages/development frameworks, let alone an
actual FRP framework.

~~~
pron
> It is not making a silver bullet claim or denying the forecast originally
> present in No Silver Bullet.

The paper's response to Brooks's central assumption, which leads to his
prediction, is (and I quote the full sentence) "We disagree."

It is an interesting opinion piece but it is entirely "Aristotelian".

> Because I'm not too aware of any tarpit-inspired languages/development
> frameworks, let alone an actual FRP framework.

Different languages and frameworks have adopted different parts, usually the
more practical ones. None proved to be a silver bullet.

~~~
jcranberry
The actual quote.

> Following Brooks we distinguish accidental from essential difficulty, but
> disagree with his premise that most complexity remaining in contemporary
> systems is essential

There's no silver bullet claim.

~~~
pron
It very much is a silver bullet claim (following Brooks's definition). But
here's the quote I was referring to, in context:

> Brooks asserts ... that the majority... of the complexity that we find in
> contemporary large systems is of the essential type. We disagree.

That means a silver bullet is possible (against Brooks's claim), and then they
go on to describe what they think is a particular silver bullet.

------
breck
If you liked this, years ago Fred Brooks recommended these books to me:

- DeMarco & Lister, Peopleware

- Software Engineering: Barry Boehm's Lifetime Contributions to Software
Development, Management and Research. Ed. by Richard Selby, 2007.

- Hoffman, Daniel M.; Weiss, David M. (eds.): Software Fundamentals –
Collected Papers by David L. Parnas, 2001, Addison-Wesley, ISBN 0-201-70369-6.

- And his own: The Design of Design. Start with Part II.

------
Ididntdothis
Is there a non PDF version of this or is there a way to read PDFs on an iPhone
SE without magnifying glass?

~~~
gwern
No abstract but how about
[https://www.cgl.ucsf.edu/Outreach/pc204/NoSilverBullet.html](https://www.cgl.ucsf.edu/Outreach/pc204/NoSilverBullet.html)
?

~~~
Ididntdothis
Yeah! Thank you.

------
azeirah
If the conceptual construct is made of concepts, and the power of high-level
languages comes from being able to write software in concepts similar to the
construct's concepts...

He states that the accidental complexity comes from mapping the conceptual
construct to a real implementation.

But does that mean he's saying we already _know_ about the essential
complexity of a given problem? That doesn't seem to be the case to me. The way
we express the conceptual construct can also be improved, no?

~~~
azeirah
Oh, I hadn't finished reading the article when I posted this comment.

Addressing essential complexity is discussed in the last part of the article
;x

------
i_s
> There is no single development, in either technology or management
> technique, which by itself promises even one order-of-magnitude improvement
> within a decade in productivity, in reliability, in simplicity.

Let's say it's true. Why should we care? Why set the bar at 10x, downplaying
smaller (1.5x, 2x, 3x, etc) improvements?

Person 1: Hey, check out this tool that makes creating small web sites 3 times
faster.

Person 2: Hah only 3 times faster? Who cares?

~~~
marcosdumay
Because if there were a few techniques that each improved our efficiency 10
times, we should make it our top priority to find and adopt them, because a
couple of those would already be a gamechanger.

But 1.1x to 3x? There are a huge lot of those. People do say "Only 3 times
faster? Why care?" all the time, and go focus on some other development.

~~~
i_s
> Because if there were a few techniques that each improved our efficiency 10
> times, we should make it our top priority to find and adopt them, because a
> couple of those would already be a gamechanger.

You're not really addressing the question, just talking about an easy
hypothetical where no one could disagree.

What decision would one make differently if sold on No Silver Bullet (NSB)? I
can't think of one. Would someone not woke on NSB prefer a 3x improvement over
a 10x improvement, for example? Of course not.

~~~
marcosdumay
> What decision would one make differently if sold on No Silver Bullet (NSB)?

If silver bullets were available, developers and researchers should set
conservative changes aside and focus on studying high-risk ideas.

------
darekkay
The "Silver Bullet Syndrome" video by Hadi Hariri [1] is also worth a watch.

[1]
[https://www.youtube.com/watch?v=3wyd6J3yjcs](https://www.youtube.com/watch?v=3wyd6J3yjcs)

------
ryanmarsh
I love this paper and have used it repeatedly with clients over the years. I
especially love the explanation of essential vs. inessential complexity
(difficulties). A proto-thesis on yak shaving, if you will.

------
icrbow
Yes Silver Bullet
[https://news.ycombinator.com/item?id=20324523](https://news.ycombinator.com/item?id=20324523)

~~~
pron
The funny thing is that that post doesn't address any of the theoretical
arguments made by Brooks. While certainly less rigorous than physics, his
argument is analogous to a discussion of the speed-of-light limit, while "Yes
Silver Bullet's" argument is analogous to, "but what if we use a
different kind of fuel in our rockets?" and then claiming that that fuel is
the silver bullet with neither empirical nor theoretical evidence to support
that claim.

Indeed, quite a few PL enthusiasts called Brooks's predictions overly
pessimistic based on similar non-arguments back in the '80s (he lists some of
them in his followup, _No Silver Bullet, Refired_), but reality proved his
predictions to be overly _optimistic_. So he was right and they were wrong.
The arguments in Yes Silver Bullet had already been put to the test and failed. Is
that reality a _definitive_ proof that there is little accidental complexity
left? Of course not, but it does raise the bar for those who claim there's a
lot of it left, certainly well beyond simply _asserting_ that that is the case
or considering it a reasonable working hypothesis.

~~~
lliamander
So the author does conjecture that statically typed FP would represent a major
improvement on accidental complexity. I agree that this is unlikely (though I
haven't done enough of that myself to be sure) but most of the article seemed
to be making an empirical claim about productivity improvements that have
already been achieved.

Fred Brooks' central claim was that there would be no "order of magnitude"
improvement by removing additional accidental complexity, and that real
improvement would come from tackling essential complexity.

Whether Mark Seemann's empirical claim is true depends somewhat on whether we
think the improvements were on the accidental or essential complexity, and
whether any of them individually were an "order of magnitude" improvement.

What is accidental vs. essential complexity? According to Brooks, the
essential complexity is formulating "complex conceptual structures" that make
up the design of the software. In other words, figuring out what the software
is supposed to do and how to do it.

Accidental complexity is concerned with "the mapping of [the design] onto
machine languages within space and speed constraints". It seems to me that,
according to Brooks, the biggest barriers for software development prior to
1986 was that programming languages were pretty horrible for expressing
designs and that computers were too slow and limited to be able to effectively
execute software (without seriously butchering the design).

His argument seems to be that, as of 1986, most languages were sufficiently
high-level and computers (and PL implementations) sufficiently capable that
most programmers could easily express their designs and expect them to be able
to run efficiently enough. There might still be progress to be made there, but
that it wouldn't be a massive improvement. Instead, most of the improvement
would come from getting better at designing those "complex conceptual
structures".

So, let's take a look at Seemann's list of improvements:

* WWW/Stack Overflow

* Automated Testing

* Git

* Garbage Collection

* Agile (the idea of iterative design/development rather than any process in particular)

Which of these tackle essential vs. accidental complexity (as Brooks defines
the terms)? Agile development and automated testing would I think belong in
the essential category (as Brooks himself refers to iterative design and tight
feedback loops as methods for dealing with essential complexity). Garbage
collection, on the other hand, seems like it dealt with accidental complexity.

I don't know about the others, but I would definitely say that garbage
collection (especially the fast, low-latency GC we have now) does provide an
order of magnitude improvement by reducing _accidental_ complexity. Obviously,
GC was invented before 1986, but didn't become widely adopted until (more
than) 10 years later.
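As a toy illustration of the accidental complexity GC removes (my own example, not Seemann's or Brooks's): in a garbage-collected language you build and discard linked structures freely and the runtime reclaims them, whereas the manual-memory equivalent needs a hand-written traversal calling free() on every node, where a bug means a leak or a crash.

```python
# Build a linked list, drop the only reference, and observe (via a
# weak reference) that the runtime reclaims the whole structure.
import gc
import weakref

class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

head = Node(1, Node(2, Node(3)))
probe = weakref.ref(head)   # observe the object without keeping it alive
head = None                  # drop the only strong reference
gc.collect()                 # (CPython frees it via refcounting anyway)
assert probe() is None       # the whole list was reclaimed automatically
```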

Will there be other such improvements? Maybe. Statically typed FP doesn't seem
like a great candidate to me, at least not by itself. I do however think that
Peter Van Roy's formulation of pure FP + deterministic concurrency + message
passing + global transactional state may qualify. That's not one single
advance, but it does represent (IMO) an order of magnitude improvement on the
imperative + "threads and locks" programming everyone was doing 20 years ago.
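A minimal sketch of the message-passing half of that formulation (my own toy example, not Van Roy's code): workers share no mutable state and communicate only through queues, so user code needs no locks.

```python
# Message-passing concurrency: the only synchronization lives inside
# queue.Queue; the worker is a pure transformation of messages.
import threading
import queue

def worker(inbox: queue.Queue, outbox: queue.Queue) -> None:
    while True:
        msg = inbox.get()
        if msg is None:          # sentinel: shut down
            break
        outbox.put(msg * msg)    # no shared mutable state touched

inbox, outbox = queue.Queue(), queue.Queue()
t = threading.Thread(target=worker, args=(inbox, outbox))
t.start()
for n in range(5):
    inbox.put(n)
inbox.put(None)
t.join()
results = sorted(outbox.get() for _ in range(5))
# results == [0, 1, 4, 9, 16]
```

Compare this with the equivalent threads-and-locks version, where every shared counter needs a mutex and every missed acquisition is a latent race.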

Still, I think Brooks was right in the general sense: we should put more
emphasis on producing better designs.

