Brooks dared to predict that thirty years later, software would still be created by programmers sitting in front of editors, painstakingly typing out code branches for every scenario a given program is supposed to handle. And their output would still mostly suck, because of the near-infinite space of possible states, driven largely by mutable variables and concurrency. Complexity that cannot be abstracted away.
Given how starry-eyed we usually are about even the near future, that is bold.
The web, language advances, tooling - these are quality-of-life improvements. They don't 'solve' the inherent complexity of software development, the way it was promised by CASE tools (Brooks' likely target with his essay) and countless other 'business oriented' approaches. In fact those approaches have failed so many times that they have been largely abandoned, so in recent times, Brooks' essay might seem superfluous.
The one advance that might finally challenge the 'no silver bullet' rule is machine learning. Not yet, given that it is still an esoteric tool for a specialized class of problems, as part of traditional software systems. But with increasing computing power, I can imagine a future where machine learning can be set to work on broader tasks and start to look like magic self-directed software development.
I'd stick with the 'no silver bullet' idea. I don't see machine learning going anywhere, and what passes as machine learning today is just a buzzword.
And others remained impossible still, e.g. cold fusion.
Cold fusion notwithstanding, most aren't prohibited by physics. In fact, humans are an existence proof that bio-machines can create software autonomously with no change in physics, so it seems like a question of when, not if.
Definitely not. Machine learning is up there with 2012's nosql movement as one of the most overhyped silver bullets.
The improvements to programming over the last 30 years have been incremental more than anything, despite the hype cycles of various would-be silver bullets (OOP, functional programming, side-effect-free code, TDD, NoSQL, static types, etc.)
Machine learning can't even predict a simple binary proportion ratio. No, it's not a silver bullet.
Machine learning is very effective in categorizing things - for example, given an image estimate whether it pictures a cancer cell or not.
On the other hand, machine learning sucks where you need to predict things instead of categorizing - for example, given a set of factors, figure out your risk of getting cancer within 5 years.
Any such machine learning predictive system will be useless due to under- or overfitting.
(Again, I'm not an expert, just a guy who has many years of experience trying to build such predictive systems.)
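To make the under/overfitting point concrete, here's a toy sketch of my own (made-up data, not any real medical model): a model flexible enough to memorize noisy training data reproduces it perfectly, then predicts wildly just outside it.

```python
import random

random.seed(0)

# Ten noisy samples of a simple linear relationship; a stand-in for
# hypothetical "risk factor" data, nothing real.
xs = [i / 9 for i in range(10)]
ys = [2 * x + random.gauss(0, 0.1) for x in xs]

def interpolate(x, xs, ys):
    """Degree-9 Lagrange polynomial through all ten points: a model with
    enough parameters to memorize the training data, noise included."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if i != j:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# It reproduces every training point essentially exactly...
train_error = max(abs(interpolate(x, xs, ys) - y) for x, y in zip(xs, ys))

# ...but extrapolates wildly just outside the training range: overfitting.
prediction = interpolate(1.5, xs, ys)
true_value = 2 * 1.5

print(f"max training error: {train_error:.3g}")
print(f"prediction at x=1.5: {prediction:.3g} (true value is about {true_value})")
```

Zero training error plus a huge prediction error is exactly the failure mode described above: the model learned the noise, not the trend.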
Did I miss this? The thesis says it's about what will happen "within a decade" of 1986 -- "as we look to the horizon of a decade hence". He makes no projections beyond that, that I see, and the timeline seems specifically chosen to suggest that further large improvements are still possible but will take time.
Improving the essential complexity has not moved much at all, which is why my computer science education from 24 years ago is still pretty much the same one people get now and hasn't lost its relevance. The removal of the accidental complexity sometimes means it's only the essential complexity left shining through.
In that sense, while he may have had the humility to time-bound his predictions to a mere decade, they've still stayed fresh much longer than that.
(Reading the entire Mythical Man Month in 201x is a bizarre experience; all the specific tech references are so dated that they are more myth than fact now, yet, the underlying points those references are in support of are still quite fresh. Arguably a bit incomplete now, we have teams of a scale well beyond what they had then, sitting on piles of software yet larger, but still correct at the core.)
1. Build vs. Buy - I would say that open source has generally become the "COTS" solution that Brooks was looking for.
2. Requirements refinement and prototyping - agile development.
3. Incremental development - also agile, as well as different evolutionary architecture approaches
4. Great designers - the establishment of technical career ladders at many companies.
Again, none of these things are silver bullets, but they have contributed to significant progress in our industry.
I love this. So rare nowadays to see scientists make bold predictions.
Disclosure: I work on Tree Notation (https://treenotation.org/).
Two years ago, in 2017, I predicted Tree Notation would be a silver bullet: by 2027 we will have a 10x improvement in reliability, productivity, and simplicity, thanks to the Tree Notation ecosystem.
Two years and thousands of experiments and conversations later, I'm almost positive that will happen.
Every lambda calculus, including yours, corresponds to a Cartesian closed category. Your system is not at all as new as you think; what you are actually inventing are new ways to write down parsers, akin to the many other ways that folks are trying to write down programs in JSON or XML or YAML.
The only advantage that you've got at this point is a 2D addressing scheme, you might think, but E had that in the late 90s: http://www.erights.org/elang/kernel/LiteralExpr.html
Second time I heard this suggestion (https://news.ycombinator.com/item?id=20504193), thanks! Done. http://longbets.org/793/ - “In 2027, at least 7 out of the top 10 TIOBE languages will be Tree Languages AND/OR 0 out of the top 10 languages on 2017's list will be in the Top 10 in 2027. ”
> Your system is not at all as new as you think
I agree. It's mildly new. Tiny tiny little details that make a big difference in practice. To get here I had to stand on the shoulders of giants: built the largest DB of computer languages in the world (over 10k languages with over 1k columns).
> but E had that in the late 90s
Great reference, thanks. E and Kernel-E are great, but the devil is in the details. There are a tremendous amount of tiny little details in Tree Notation and Tree Language that compound to make this something new. You can't take anything out of Tree Notation without losing something important. Everything else you can add via a Tree Language. That makes this closer to binary, imo. There are at least a dozen things that could be taken out of Kernel-E, which itself is a subset of E.
Interesting question though. Can you elaborate?
How is this different than an Abstract Syntax Tree (AST)?
Each word has a location in 2D space. There is a physical manifestation of your program’s source code.
The parsed version (with the words of your source replaced by their parsed types) has the same shape (physical manifestation) as your source code.
So source == AST.
In traditional languages today, this is not true. Source is stripped down to 1D, and the further you move from Lisp, the more source and AST diverge.
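Here's a toy illustration of what "source == AST" means (my own sketch, not the official Tree Notation implementation): parse an indented document so every word keeps its (line, column) position, and the tree renders back to the exact source.

```python
# A toy sketch of "source == AST": every parsed word keeps its
# (line, column) position, so rendering the tree reproduces the
# source byte for byte.
source = "person\n name Alice\n age 30"

def parse(text):
    """Parse indentation-structured text into (depth, words) nodes,
    where each word is a (token, line, column) triple."""
    nodes = []
    for line_no, line in enumerate(text.split("\n")):
        depth = len(line) - len(line.lstrip(" "))
        col, words = depth, []
        for token in line.strip().split(" "):
            words.append((token, line_no, col))
            col += len(token) + 1  # +1 for the separating space
        nodes.append((depth, words))
    return nodes

def render(nodes):
    """Reconstruct the source from the parse tree: same shape, same text."""
    return "\n".join(" " * depth + " ".join(t for t, _, _ in words)
                     for depth, words in nodes)

tree = parse(source)
assert render(tree) == source  # the round trip is lossless
print(tree[1])  # the "name Alice" node, with 2D coordinates for each word
```

The round-trip assertion is the point: the parse tree is just the source with coordinates attached, not a separate stripped-down structure.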
How do you deal with large complex code? Lots of folding?
Large complex code hasn't turned out to be too difficult yet. The key is well-designed Tree Languages for the task at hand. Also, Sublime Text helps :). We have some codebases in the six-figure LOC range. It's all about tooling, though. I don't really use folding yet, but perhaps that would be another feature that makes things easier; good suggestion. Things like type checking, syntax highlighting, and autocomplete are essential.
In XML you would need to insert tags defining the whitespace so that it would be seen by programs operating on the AST.
Are your “two different structures” any different from the same representation in XML?
I think it’s a fair analogy to say Tree Notation is just a much simpler XML.
At least 2x as simple:
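As a rough, hypothetical illustration of that claim (my own toy example, not a benchmark), here's the same record in XML and in an indentation-based tree form:

```python
# A rough, made-up comparison for the "simpler than XML" claim:
# the same record written both ways. No closing tags, no angle brackets.
xml = "<person><name>Alice</name><age>30</age></person>"
tree = "person\n name Alice\n age 30"

ratio = len(xml) / len(tree)
print(f"XML: {len(xml)} chars, tree: {len(tree)} chars, ratio {ratio:.1f}x")
```

On this tiny example the tree form is a bit under half the size; real documents will vary, of course.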
By 2020, we expect Flow to be a competitive rival to Python or R for 80% of data science tasks.
That's incredibly ambitious. Checking in -- are you within 4 months of reaching that goal?
The gist of it is that although Brooks was correct when he wrote No Silver Bullet, since then there's been an enormous increase in accidental complexity and, if that is recognized and removed, an order of magnitude improvement is now possible.
Dark is about removing all accidental complexity from coding, starting with backends. That's pretty vague, but that's honestly how it started: this shit is too hard, what can we do to remove all this shitty stuff that we do.
Setting up infra is one of the three areas that we think is really shit, and that's a large part of Dark. You don't have to deal with infra at all (including DBs, queues, etc); it's all built-in.
Speed of deployment doesn't bother some people, esp those with smaller projects; on big teams it's a massive problem. People got excited about deployless in dark because it seemed like it had a credible vision for how it could be reduced.
We've been talking about why we built it because we haven't finished building it yet. We're announcing our move to "private beta" (from "private alpha") on Sep 16th, so that's the stage we're at. We'll be showing how Dark works then.
I feel like that's absolutely how software is. As we have better frameworks and libraries and everything the job doesn't get easier. But what happens is that it's now possible to do something with a team of three that might have taken thirty before.
Just look at Python/django or Ruby/Rails vs doing all that yourself with C and cgi-bin. How big would the C project be before it started to become difficult to work on by virtue of its many lines of code?
I'm not saying these new frameworks solve any kind of difficult theoretical problems by the way. And it's still work to make a django or Rails site. But there are many startups that got going in the last decade with just a few people that during the dotcom bubble might have taken 30, 50, even 100 folks to try and turn into reality.
Something that I have not seen taught anywhere (not in schools, not at work, not in a thousand online courses) is "HOW TO PROGRAM". This is why things have gotten hard for many people: they don't know how to program. But for those who do know how to program, things have actually gotten much easier.
But how does this explain programming over the past 30+ years? You can't blame code camps as to why programming in the 90's has not gotten a magnitude easier, or even harder as you posit.
And what exactly do you mean "how to program"? you throw a lot of shade at the people, institutions and resources involved but don't shine any light on this missing piece...
Today, frameworks, libraries, API, platforms, configuration nightmare make things harder for those that don't know how to program because they become distracted. In my opinion, knowing how to program is about knowing exactly where to focus on, where to begin so you can get to the core of the problem. For those that know how or have the intuition, they can move much faster.
When I talk about focus, I don't mean concentration, but knowing where to direct your attention. Think of it like triage, but even harder. Say you have a system with 100 components and it's misbehaving. If you know and understand the system, you can narrow down what the root cause might be. On the other hand, if you have been given the go-ahead to build a system, you might not know whether you need 10 components or 100. Even if you figure that out, which one do you build first? What order do you build them in? Those matter. This is what I mean by programming. The solution prescribed today is design docs, build a prototype, build an MVP. Those, I believe, help a bit, but not that much.
It has. Things that were hard problems then are simple tasks today.
OTOH, if you want to get paid a princely salary, you have to do the things that are still nontrivial. The stuff that has been trivialized has stopped being valuable, the things that used to be intractable but have been reduced to merely challenging have taken their place as worth paying for.
Hordes are now flocking to this industry for the $$$, but have little-to-no interest in the craftsmanship/inner workings aspect of it.
I spent hours the other day helping one of our new devs solve an issue he couldn't solve himself, and after explaining why his code was broken several times in several ways, I got the impression he didn't really care about the backstory I was providing on stack vs. heap memory; he just wanted me to help unblock him as quickly as possible so he could continue to make progress on his task.
- you learn to read code
- you learn to modify code a little
- you learn to copy/paste stuff together
- you write code using libraries and frameworks
- you create something using fundamental principles
- you write code
- you think in code and can create fluently
Where I think we've progressed much further is in the way of cloud/managed services and lightweight devops.
Easy app development (things like Airtable) still seems close, but never quite here.
I think it's important to remember that the statement is rather specific:
"no single development" and "which by itself" -- but there could be many developments, which, together, provide the order-of-magnitude improvement.
"order-of-magnitude improvement" -- but there could be smaller improvements.
"within a decade" -- the improvements may take longer.
I'm pointing this out because incremental improvements can and do occur, there's just no "silver bullet" to fast track the process.
He says very clearly in the remainder of the text that this is likely.
> the improvements may take longer.
I imagine the point of posting this on HN nowadays is that in three decades there hasn't been one. Still, something could appear tomorrow, but I imagine it's very unlikely, for the same reasons that are in the article.
"Of the candidates enumerated in “NSB”, object-oriented programming has made the biggest change, and it is [unlike almost every other proposed solution] a real attack on the inherent complexity itself."
And then goes on to say that the most promising approach remains reuse, particularly of COTS programs.
Indeed, although I think more beneficial than COTS has been the rise of open source (which could be seen as a variant of COTS). From open source languages with rich standard libraries to application frameworks to whole applications (databases, operating systems, etc).
The benefit from open source hasn't just been the lower TCO of 3rd party software, but also changes in the development process. Perhaps the single biggest process change has been distributed version control. Not only has Git significantly reduced the accidental complexity of collaborating on software projects, it has also become the primary mechanism for distributing open source code.
Can it really be that that's the only previous HN discussion of this classic?
Here is one https://news.ycombinator.com/item?id=10306335
It tries to build a theory in an Aristotelian manner, i.e. not based on careful observation but mostly on rationalization (and maybe very partial, biased observations). The problem with rationalizations is that they can often be made to support any claim when the empirical picture isn't clear. An additional problem in this particular case is that when No Silver Bullet was published, the same kind of people (PL enthusiasts) made roughly the same arguments, but their predictions proved wrong, whereas Brooks's proved right. It's not the end of the story, but it does mean that their theory needs, at the very least, to be revised.
The tarpit paper is also not a prediction or forecast the way that the No Silver Bullet paper is. It reparameterizes and expands upon essential and accidental tasks and proposes a development framework to minimize accidental tasks.
The company I currently work for uses a code generation framework inspired by the ideas from the tarpit paper. It's very successful in simplifying and speeding up the development process.
It is an empirical claim, one that at least has not been refuted by observation.
> Otherwise they'd be incidental.
I don't understand this. It's an empirical claim about software in the wild.
> It reparameterizes and expands upon essential and accidental tasks and proposes a development framework to minimize accidental tasks.
... and yet, no one has found a silver bullet yet (as per Brooks's definition), nor anything close to it.
There are no sources or pieces of evidence cited in the section on what the essential tasks are. If it's an empirical claim, then any claim made in the tarpit paper is certainly equally empirical.
> ... and yet, no one has found a silver bullet yet (as per Brooks's definition), nor anything close to it.
There was no such claim.
Then what does it have to do with No Silver Bullet? Brooks's point isn't that you can't make languages that some people may find more attractive, but that you can't drastically reduce complexity.
> There are no sources or pieces of evidence cited in the section on what the essential tasks are.
Right, that's why it's a claim. But it comes with an empirical prediction that was later verified by observation.
> If it's an empirical claim, then the any claim made in the tarpit paper is certainly equally empirical.
Of course it is, but if we take it as a silver-bullet claim (i.e. the ability to drastically cut down complexity), then it just doesn't fit with observation.
Besides, what observations are you talking about? I'm not aware of any tarpit-inspired languages/development frameworks, let alone an actual FRP framework.
The paper's response to Brooks's central assumption, which leads to his prediction is, and I quote the full sentence, "We disagree."
It is an interesting opinion piece but it is entirely "Aristotelian".
> Because I'm not too aware of any tarpit inspired languages/development frameworks, let alone an actual FRP framework.
Different languages and frameworks have adopted different parts, usually the more practical ones. None proved to be a silver bullet.
>Following Brooks we distinguish accidental from essential difficulty, but disagree with his premise that most complexity remaining in contemporary systems is essential
There's no silver bullet claim.
> Brooks asserts ... that the majority... of the complexity that we find in contemporary large systems is of the essential type. We disagree.
That means a silver bullet is possible (against Brooks's claim), and then they go on to describe what they think is a particular silver bullet.
- DeMarco & Lister, Peopleware
- 2007. Software engineering: Barry Boehm's lifetime contributions to software development, management and research. Ed. by Richard Selby.
- Hoffman, Daniel M.; Weiss David M. (Eds.): Software Fundamentals – Collected Papers by David L. Parnas, 2001, Addison-Wesley, ISBN 0-201-70369-6.
- And his: The Design of Design. Start with Part II.
He states that the accidental complexity comes from mapping the conceptual construct to a real implementation.
But does that mean he's saying we already know about the essential complexity of a given problem? That doesn't seem the case to me? The way we express the conceptual construct can also be improved, no?
Addressing essential complexity is discussed in the last part of the article ;x
Let's say it's true. Why should we care? Why set the bar at 10x, downplaying smaller (1.5x, 2x, 3x, etc) improvements?
Person 1: Hey, check out this tool that makes creating small web sites 3 times faster.
Person 2: Hah only 3 times faster? Who cares?
But 1.1x to 3x? There is a huge lot of those. People do say "Only 3 times faster? Why care?" all the time, and go focus on some other development.
You're not really addressing the question, just talking about an easy hypothetical where no one could disagree.
What decision would one make differently if sold on No Silver Bullet (NSB)? I can't think of one. Would someone not woke on NSB prefer a 3x improvement over a 10x improvement, for example? Of course not.
If silver bullets were available, developers and researchers should set conservative changes aside and focus on studying high-risk ideas.
Indeed, quite a few PL enthusiasts called Brooks's predictions overly pessimistic based on similar non-arguments back in the '80s (he lists some of them in his followup, No Silver Bullet, Refired), but reality proved his predictions to be overly optimistic. So he was right and they were wrong. The arguments in Yes Silver Bullet had already been put to the test and had failed. Is that reality definitive proof that there is little accidental complexity left? Of course not, but it does raise the bar for those who claim there's a lot of it left, certainly well beyond simply asserting that that is the case or considering it a reasonable working hypothesis.
Fred Brooks' central claim was that there would be no "order of magnitude" improvement by removing additional accidental complexity, and that real improvement would come from tackling essential complexity.
Whether Mark Seemann's empirical claim is true depends somewhat on whether we think the improvements were on the accidental or essential complexity, and whether any of them individually was an "order of magnitude" improvement.
What is accidental vs. essential complexity? According to Brooks, the essential complexity is formulating "complex conceptual structures" that make up the design of the software. In other words, figuring out what the software is supposed to do and how to do it.
Accidental complexity is concerned with "the mapping of [the design] onto machine languages within space and speed constraints". It seems to me that, according to Brooks, the biggest barriers for software development prior to 1986 were that programming languages were pretty bad at expressing designs and that computers were too slow and limited to execute software effectively (without seriously butchering the design).
His argument seems to be that, as of 1986, most languages were sufficiently high-level and computers (and PL implementations) sufficiently capable that most programmers could easily express their designs and expect them to be able to run efficiently enough. There might still be progress to be made there, but that it wouldn't be a massive improvement. Instead, most of the improvement would come from getting better at designing those "complex conceptual structures".
So, let's take a look at Seemann's list of improvements:
* WWW/Stack Overflow
* Automated Testing
* Garbage Collection
* Agile (the idea of iterative design/development rather than any process in particular)
Which of these tackle essential vs. accidental complexity (as Brooks defines the terms)? Agile development and automated testing would I think belong in the essential category (as Brooks himself refers to iterative design and tight feedback loops as methods for dealing with essential complexity). Garbage collection, on the other hand, seems like it dealt with accidental complexity.
I don't know about the others, but I would definitely say that garbage collection (especially the fast, low-latency GC we have now) does provide an order of magnitude improvement by reducing accidental complexity. Obviously, GC was invented before 1986, but didn't become widely adopted until (more than) 10 years later.
Will there be other such improvements? Maybe. Statically typed FP doesn't seem like a great candidate to me, at least not by itself. I do however think that Peter Van Roy's formulation of pure FP + deterministic concurrency + message passing + global transactional state may qualify. That's not one single advance, but it does represent (IMO) an order of magnitude improvement on the imperative + "threads and locks" programming everyone was doing 20 years ago.
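A tiny sketch of the message-passing part of that formulation (plain Python threads and queues; just my illustration, not Van Roy's actual model): state is owned by exactly one worker and reached only through messages, so no locks are needed.

```python
import queue
import threading

def counter_actor(inbox, outbox):
    """An actor owns its state outright; other threads can only send messages."""
    count = 0
    while True:
        msg = inbox.get()
        if msg == "stop":
            outbox.put(count)
            return
        count += msg

inbox, outbox = queue.Queue(), queue.Queue()
worker = threading.Thread(target=counter_actor, args=(inbox, outbox))
worker.start()

for _ in range(100):
    inbox.put(1)      # communicate by message, not by shared mutable state
inbox.put("stop")
worker.join()

total = outbox.get()
print(total)  # 100, with no locks and no data race on count
```

Contrast this with the "threads and locks" version, where `count` would be shared and every increment would need a mutex: the accidental complexity of lock ordering and race hunting simply disappears.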
Still I think Brooks was right in the general sense, being that we should put more emphasis on producing better designs.
Yes, the tech landscape of today is vastly different from 30 years ago.
But to me the magical silver bullet is really a marker of recurring bullshit: a competent programmer would never realize the gains claimed.
Isn't the whole point of a 'silver bullet' that you only need one?