Brooks dared to predict that thirty years later, software would still be created by programmers sitting in front of editors, painstakingly typing up code branches for all of the scenarios a given program is supposed to handle. And their output would still mostly suck, because of the infinity of possible states, mostly due to variables and synchronicity. Complexity that cannot be abstracted away.
Given how starry-eyed we usually are about even the near future, that is bold.
The web, language advances, tooling - these are quality-of-life improvements. They don't 'solve' the inherent complexity of software development, the way it was promised by CASE tools (Brooks' likely target with his essay) and countless other 'business oriented' approaches. In fact those approaches have failed so many times that they have been largely abandoned, so in recent times, Brooks' essay might seem superfluous.
The one advance that might finally challenge the 'no silver bullet' rule is machine learning. Not yet, given that it is still an esoteric tool for a specialized class of problems, as part of traditional software systems. But with increasing computing power, I can imagine a future where machine learning can be set to work on broader tasks and start to look like magic self-directed software development.
>The one advance that might finally challenge the 'no silver bullet' rule is machine learning. Not yet, given that it is still an esoteric tool for a specialized class of problems, as part of traditional software systems. But with increasing computing power, I can imagine a future where machine learning can be set to work on broader tasks and start to look like magic self-directed software development.
I'd stick with the 'no silver bullet' idea. I don't see machine learning going anywhere, and what passes as machine learning today is just a buzzword.
It never works until it does. Many major inventions seemed impossible, but they just took a long time. Software is so young and growing so fast, it's hard to say what can happen.
Cold fusion notwithstanding, most aren't prohibited by physics. In fact, humans are an existence proof that bio-machines can create software autonomously with no change in physics, so it seems like a question of when, not if.
I like this article, but I think actually the software industry has seen a succession of silver bullets. The problem is, once they appear, you take them for granted:
>The one advance that might finally challenge the 'no silver bullet' rule is machine learning.
Definitely not. Machine learning is up there with 2012's nosql movement as one of the most overhyped silver bullets.
The improvements to programming over the last 30 years have been incremental more than anything, in spite of the hype cycles of various silver bullets (OOP, functional programming, side-effect-free code, TDD, NoSQL, static types, etc.)
I'm not smart enough to go into the technical details, so I'll use an example instead.
Machine learning is very effective in categorizing things - for example, given an image estimate whether it pictures a cancer cell or not.
On the other hand, machine learning sucks where you need to predict things instead of categorizing - for example, given a set of factors, figure out your risk of getting cancer within 5 years.
Any such machine learning predictive system will be useless due to under- or overfitting.
(Again, I'm not an expert, just a guy who has many years of experience trying to build such predictive systems.)
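A minimal sketch of that contrast, assuming scikit-learn and made-up data (not anyone's real cancer model): the classification setup has actual signal to latch onto, while the "predict risk" setup is mostly noise, so the held-out score collapses toward zero.

    # Rough sketch (illustrative only): categorizing vs. predicting
    # with scikit-learn on synthetic data.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Categorizing: does this (fake) image contain a cancer cell?
    X_img = rng.normal(size=(1000, 64))                 # stand-in for image features
    y_img = (X_img[:, 0] + X_img[:, 1] > 0).astype(int) # label driven by real signal
    Xtr, Xte, ytr, yte = train_test_split(X_img, y_img, random_state=0)
    clf = RandomForestClassifier().fit(Xtr, ytr)
    print("classification accuracy:", clf.score(Xte, yte))   # fairly high

    # Predicting: 5-year risk from a handful of noisy factors.
    X_risk = rng.normal(size=(1000, 10))
    y_risk = 0.1 * X_risk[:, 0] + rng.normal(size=1000)       # mostly noise
    Xtr, Xte, ytr, yte = train_test_split(X_risk, y_risk, random_state=0)
    reg = RandomForestRegressor().fit(Xtr, ytr)
    print("regression R^2 on held-out data:", reg.score(Xte, yte))  # near zero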
I think fuzzing, like AFL, is an example of the kind of machine learning that might be useful to software engineering. If we could just figure out how to fuzz requirements along with input fields, we might get somewhere.
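A toy sketch of the idea (random mutation only; AFL itself is coverage-guided and much smarter): mutate a known-good input, throw the results at a hypothetical function under test, and collect whatever blows up.

    # Toy mutation fuzzer, purely illustrative.
    import random

    def parse_date(s: str) -> tuple:
        """Hypothetical function under test: naive 'YYYY-MM-DD' parser."""
        y, m, d = s.split("-")
        return int(y), int(m), int(d)

    def mutate(seed: str) -> str:
        """Randomly flip, drop, or insert characters in a seed input."""
        chars = list(seed)
        for _ in range(random.randint(1, 3)):
            op = random.choice(["flip", "drop", "insert"])
            i = random.randrange(len(chars)) if chars else 0
            if op == "flip" and chars:
                chars[i] = chr(random.randrange(32, 127))
            elif op == "drop" and chars:
                del chars[i]
            else:
                chars.insert(i, chr(random.randrange(32, 127)))
        return "".join(chars)

    random.seed(0)
    crashes = set()
    for _ in range(10000):
        candidate = mutate("2019-08-30")
        try:
            parse_date(candidate)
        except Exception as e:               # every exception type is a finding
            crashes.add((type(e).__name__, candidate))

    for kind, sample in sorted(crashes)[:5]:
        print(kind, repr(sample))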
> Brooks dared to predict that thirty years later, software would still be created by programmers sitting in front of editors, painstakingly typing up code branches for all of the scenarios a given program is supposed to handle.
Did I miss this? The thesis says it's about what will happen "within a decade" of 1986 -- "as we look to the horizon of a decade hence". He makes no projections beyond that, that I see, and the timeline seems specifically chosen to suggest that further large improvements are still possible but will take time.
We've made massive progress on the accidental complexity. That's what a lot of the "bloat" is that people complain about, and why, even though I am a bit sympathetic to that complaint, it's only a bit. There's a reason that, for instance, Electron is so popular, and that reason is that a lot of Electron apps would not be paragons of efficiency if Electron didn't exist; they would simply not exist at all, or only with a greatly reduced feature set or at greatly higher expense.
The essential complexity, by contrast, has hardly moved at all, which is why my computer science education from 24 years ago is still pretty much the same one people get now and hasn't lost its relevance. The removal of the accidental complexity sometimes means it's only the essential complexity left shining through.
In that sense, while he may have had the humility to time-bound his predictions to a mere decade, they've still stayed fresh much longer than that.
(Reading the entire Mythical Man Month in 201x is a bizarre experience; all the specific tech references are so dated that they are more myth than fact now, yet, the underlying points those references are in support of are still quite fresh. Arguably a bit incomplete now, we have teams of a scale well beyond what they had then, sitting on piles of software yet larger, but still correct at the core.)
Two years ago, in 2017, I predicted Tree Notation would be a silver bullet: by 2027 we will have a 10x improvement in reliability, productivity, and simplicity, thanks to the Tree Notation ecosystem.
Two years and thousands of experiments and conversations later, I'm almost positive that will happen.
Every lambda calculus, including yours, corresponds to a Cartesian closed category. Your system is not at all as new as you think; what you are actually inventing are new ways to write down parsers, akin to the many other ways that folks are trying to write down programs in JSON or XML or YAML.
Second time I heard this suggestion (https://news.ycombinator.com/item?id=20504193), thanks! Done. http://longbets.org/793/ - “In 2027, at least 7 out of the top 10 TIOBE languages will be Tree Languages AND/OR 0 out of the top 10 languages on 2017's list will be in the Top 10 in 2027. ”
> Your system is not at all as new as you think
I agree. It's mildly new. Tiny tiny little details that make a big difference in practice. To get here I had to stand on the shoulders of giants: built the largest DB of computer languages in the world (over 10k languages with over 1k columns).
> but E had that in the late 90s
Great reference, thanks. E and Kernel-E are great, but the devil is in the details. There are a tremendous number of tiny details in Tree Notation and Tree Language that compound to make this something new. You can't take anything out of Tree Notation without losing something important. Everything else you can add via a Tree Language. That makes this closer to binary, imo. There are at least a dozen things that could be taken out of Kernel-E, which itself is a subset of E.
Interesting. Would you say this is similar to a visual programming language like Max/MSP? I was always intimidated by the large screens of spaghetti (almost literally, because of patch cords).
How do you deal with large complex code? Lots of folding?
Max/MSP is definitely an inspiration of mine. Tree Notation is just a format, however. It could be an ideal format for a program like Max to store its documents in, as opposed to XML or JSON. It would allow dual visual/source editing. See our ohayo beta for an example. (Note to self: add an ohayo demo video.)
Large, complex code hasn't turned out to be too difficult yet. The key is well-designed Tree Languages for the task at hand. Also, Sublime Text helps :). I have some codebases in the six-digit LOC range. It's all about tooling, though. I don't really use folding yet, but perhaps that would be another feature that could make things easier; good suggestion. Things like type checking, syntax highlighting, and autocomplete are essential.
I was at a Big Dumb Corp in 2005 or so on a client dedicated team. Our group manager decided to have everyone on the team take turns leading the weekly staff meeting. Mostly because he was lazy and the meeting was a particular waste of time. When it was my turn I printed off copies of this essay for everyone, handed them out, spoke a few sentences suggesting everyone read it. Then ended the meeting. That did not go over well at all with my manager.
The gist of it is that although Brooks was correct when he wrote No Silver Bullet, since then there's been an enormous increase in accidental complexity and, if that is recognized and removed, an order of magnitude improvement is now possible.
This is a very good point. The Tree Notation ecosystem I work on (https://treenotation.org), would only have been an incremental improvement back 30 years ago. But now 1) cruft has accumulated and 2) there are new opportunities to exploit Tree Notation thanks to machine learning (for program synthesis, amongst other things) and visual programming, so the benefits at this time could hit that 10x number.
I still can't figure out what Dark is actually about. You seem to focus more on how fast it is to deploy than on what it is, how it works, how you use it, what it looks like, or how it's different from previous attempts (e.g. Eve). I'm a bit confused by this, since deployment has really never been a particularly large pain point for me (setting up the actual infrastructure has always been a much bigger pain than actually deploying; plus, typically I set deployment up once with CI and then don't have to think about it again), certainly not compared to actually creating complex software in the first place. You talk about why you built Dark, but I can't find anything about what it is you actually built. Am I missing an important blog post or page or something?
We've been positioning around the deploy a bit recently because people got really excited by it, but that's not really what Dark is about.
Dark is about removing all accidental complexity from coding, starting with backends. That's pretty vague, but that's honestly how it started: this shit is too hard, what can we do to remove all this shitty stuff that we do.
Setting up infra is one of the three areas that we think is really shit, and that's a large part of Dark. You don't have to deal with infra at all (including DBs, queues, etc); it's all built-in.
Speed of deployment doesn't bother some people, especially those with smaller projects; on big teams it's a massive problem. People got excited about deployless in Dark because it seemed like it had a credible vision for how it could be reduced.
We've been talking about why we built it because we haven't finished building it yet. We're announcing our move to "private beta" (from "private alpha") on Sep 16th, so that's the kind of stage we're at. We'll be showing how Dark works then.
This kind of thing reminds me of a phrase from cycling: It never gets easier, you just go faster.
I feel like that's absolutely how software is. As we get better frameworks and libraries and everything, the job doesn't get easier. But what happens is that it's now possible to do something with a team of three that might have taken thirty before.
Just look at Python/Django or Ruby/Rails vs doing all that yourself with C and cgi-bin. How big would the C project be before it started to become difficult to work on by virtue of its many lines of code?
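For a sense of scale on the framework side, this is roughly an entire "hello" service using Django's single-file pattern (a sketch; exact settings vary by Django version). The C + cgi-bin equivalent would need manual request parsing, string handling, and a build step before it printed anything.

    # app.py -- run with `python app.py runserver` (sketch of the single-file pattern)
    import sys
    from django.conf import settings
    from django.http import HttpResponse
    from django.urls import path

    settings.configure(
        DEBUG=True,
        ROOT_URLCONF=__name__,       # routes live in this module
        ALLOWED_HOSTS=["*"],
        SECRET_KEY="not-a-secret",   # placeholder for the demo
    )

    def hello(request):
        name = request.GET.get("name", "world")
        return HttpResponse(f"Hello, {name}!")

    urlpatterns = [path("hello/", hello)]

    if __name__ == "__main__":
        from django.core.management import execute_from_command_line
        execute_from_command_line(sys.argv)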
I'm not saying these new frameworks solve any kind of difficult theoretical problems, by the way. And it's still work to make a Django or Rails site. But there are many startups that got going in the last decade with just a few people that during the dotcom bubble might have taken 30, 50, even 100 folks to try and turn into reality.
It does get easier too. Better frameworks and libraries have made the job so much easier! The challenge I see is that people don't know how to code. They know the keywords of the language, they know programming constructs, design patterns, algorithms, frameworks, but they don't know how to program. You can know all the ingredients, you might even know the recipe but that doesn't make you a good chef or mean you know how to cook.
Something that I have not seen taught anywhere, not in schools, not at work, not in a thousand online courses, is "HOW TO PROGRAM". This is why things have gotten hard for many people: they don't know how to program. But for those who do know how to program, things have actually gotten much easier.
Well, when people are promised untold riches if they simply "Become a programmer in 2 weeks with our Python bootcamp!" what do you expect?
Hordes are now flocking to this industry for the $$$, but have little-to-no interest in the craftsmanship/inner workings aspect of it.
I spent hours the other day helping one of our new devs solve an issue he couldn't solve himself, and after explaining why his code was broken several times in several ways, I got the impression he didn't really care about the backstory I was providing on stack vs. heap memory; he just wanted me to help unblock him as quickly as possible so he could continue to make progress on his task.
>> This is why things have gotten hard for many people, they don't know how to program
But how does this explain programming over the past 30+ years? You can't blame code camps for why programming since the '90s hasn't gotten an order of magnitude easier, or has even gotten harder, as you posit.
And what exactly do you mean by "how to program"? You throw a lot of shade at the people, institutions, and resources involved, but don't shine any light on this missing piece...
> You can't blame code camps for why programming since the '90s hasn't gotten an order of magnitude easier
It has. Things that were hard problems then are simple tasks today.
OTOH, if you want to get paid a princely salary, you have to do the things that are still nontrivial. The stuff that has been trivialized has stopped being valuable, the things that used to be intractable but have been reduced to merely challenging have taken their place as worth paying for.
Lack of frameworks made things simpler for folks back in the day; they programmed fine by luck. Lack of frameworks meant less distraction.
Today, frameworks, libraries, APIs, platforms, and configuration nightmares make things harder for those who don't know how to program, because they become distracted. In my opinion, knowing how to program is about knowing exactly where to focus, where to begin, so you can get to the core of the problem. Those who know how, or have the intuition, can move much faster.
When I talk about focus, I don't mean concentration, but knowing where to direct your attention. Think of it like triage, but even harder. Say you have a system with 100 components and it's misbehaving. If you know and understand the system, you know how to narrow down what the root cause might be. On the other hand, if you have been given the go-ahead to build a system, you might not know whether you need 10 or 100 components. And if you do figure that out, which one do you build first? What order do you build them in? Those matter. This is what I mean by programming. The solutions prescribed today are design docs, building a prototype, building an MVP. Those, I believe, help a bit, but not that much.
Using Lisp is like having a very high-quality 3D printer. You can make anything, as with other high-quality 3D printers, but you start with very little else.
Where I think we've progressed much further is in the way of cloud/managed services and lightweight devops.
Easy app development, AirTable-style, still seems close but never quite here.
> "There is no single development, in either technology or management technique, which by itself promises even one order-of-magnitude improvement within a decade in productivity, in reliability, in simplicity."
I think it's important to remember that the statement is rather specific:
"no single development" and "which by itself" -- but there could be many developments, which, together, provide the order-of-magnitude improvement.
"order-of-magnitude improvement" -- but there could be smaller improvements.
"within a decade" -- the improvements may take longer.
I'm pointing this out because incremental improvements can and do occur, there's just no "silver bullet" to fast track the process.
> but there could be many developments, which, together, provide the order-of-magnitude improvement.
He says very clearly, in the remainder of the text, that this is likely.
> the improvements may take longer.
I imagine the point of posting this on HN nowadays is that in three decades there hasn't been one. Still, something could appear tomorrow, but I imagine it's very unlikely, for the same reasons that are in the article.
In No Silver Bullet Reloaded [1], a 20 year retrospective, Brooks said:
"Of the candidates enumerated in “NSB”, object-oriented programming has made the biggest change, and it is [unlike almost every other proposed solution] a real attack on the inherent complexity itself."
And then goes on to say that the most promising approach remains reuse, particularly of COTS programs.
> And then goes on to say that the most promising approach remains reuse, particularly of COTS programs.
Indeed, although I think more beneficial than COTS has been the rise of open source (which could be seen as a variant of COTS). From open source languages with rich standard libraries to application frameworks to whole applications (databases, operating systems, etc).
The benefit from open source hasn't just been the lower TCO of 3rd party software, but also changes in the development process. Perhaps the single biggest process change has been distributed version control. Not only has Git significantly reduced the accidental complexity of collaborating on software projects, it has also become the primary mechanism for distributing open source code.
Er, I learned Prolog last summer and realized that about half of my professional career was wasted because I didn't learn it sooner. Dunno if Prolog counts as a Silver Bullet but those folks have slain a lot of werewolves.
A great essay. I suggest Out of the Tarpit as a later exploration (20 years after No Silver Bullet) with fantastic and challenging analysis. If reading the whole paper is too daunting, check out the summary (and subscribe to his regular email summaries of interesting papers) at: https://blog.acolyer.org/2015/03/20/out-of-the-tar-pit/
It tries to build a theory in an Aristotelian manner, i.e. not based on careful observation but mostly on rationalization (and maybe very partial, biased observations). The problem with rationalizations is that they can often be made to support any claim when the empirical picture isn't clear. An additional problem in this particular case is that when No Silver Bullet was published, the same kind of people (PL enthusiasts) made roughly the same arguments, but their predictions proved wrong, whereas Brooks's proved right. It's not the end of the story, but it does mean that their theory needs, at the very least, to be revised.
Brooks's suppositions on essential tasks are not empirical at all. Otherwise they'd be incidental. And that's what the tarpit paper focuses on.
The tarpit paper is also not a prediction or forecast the way that the No Silver Bullet paper is. It reparameterizes and expands upon essential and accidental tasks and proposes a development framework to minimize accidental tasks.
The company I currently work for uses a code generation framework inspired by the ideas from the tarpit paper. It's very successful in simplifying and speeding up the development process.
> It is an empirical claim, one that at least has not been refuted by observation.
There are no sources or pieces of evidence cited in the section on what the essential tasks are. If it's an empirical claim, then any claim made in the tarpit paper is certainly equally empirical.
> ... and yet, no one has found a silver bullet yet (as per Brooks's definition), nor anything close to it.
Then what does it have to do with No Silver Bullet? Brooks's point isn't that you can't make languages that some people may find more attractive, but that you can't drastically reduce complexity.
> There are no sources or pieces of evidence cited in the section on what the essential tasks are.
Right, that's why it's a claim. But it comes with an empirical prediction that was later verified by observation.
> If it's an empirical claim, then any claim made in the tarpit paper is certainly equally empirical.
Of course it is, but if we take it as a silver-bullet claim (i.e. the ability to drastically cut down complexity), then it just doesn't fit with observation.
I'm beginning to get skeptical whether you've actually read the paper. It is not making a silver bullet claim or denying the forecast originally present in No Silver Bullet. If so, it would be asserting some way to significantly reduce or completely remove essential complexity. It instead attempts to find a minimal definition of essential complexity, and proposes a method to mitigate the cost of non-essential complexity. It is supplementary to No Silver Bullet, not a refutation of it.
Not to mention: what observations are you talking about? Because I'm not too aware of any tarpit-inspired languages/development frameworks, let alone an actual FRP framework.
>Following Brooks we distinguish accidental from essential difficulty, but disagree with his premise that most complexity remaining in contemporary systems is essential
If the conceptual construct is made of concepts, and the power of high-level languages comes from being able to write software in concepts similar to the construct's concepts...
He states that the accidental complexity comes from mapping the conceptual construct to a real implementation.
But does that mean he's saying we already know about the essential complexity of a given problem? That doesn't seem to be the case to me. The way we express the conceptual construct can also be improved, no?
> There is no single development, in either technology or management technique, which by itself promises even one order-of-magnitude improvement within a decade in productivity, in reliability, in simplicity.
Let's say it's true. Why should we care? Why set the bar at 10x, downplaying smaller (1.5x, 2x, 3x, etc) improvements?
Person 1: Hey, check out this tool that makes creating small web sites 3 times faster.
Because if there were a few techniques that each improved our efficiency 10 times over, we should make it our top priority to find and adopt them, because a couple of those would already be a game changer.
But 1.1x to 3x? There are a huge lot of those. People say "Only 3 times faster? Why care?" all the time, and go focus on some other development.
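Back-of-the-envelope, with made-up multipliers: a handful of those "small" improvements compound to the order of magnitude that no single one of them delivers.

    # hypothetical multipliers, purely illustrative
    gains = [1.5, 2.0, 3.3]   # say: version control, automated tests, a framework
    total = 1.0
    for g in gains:
        total *= g
    print(round(total, 1))    # 9.9 -- roughly 10x, but from no single development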
> Because if there were a few techniques that each improved our efficiency 10 times over, we should make it our top priority to find and adopt them, because a couple of those would already be a game changer.
You're not really addressing the question, just talking about an easy hypothetical where no one could disagree.
What decision would one make differently if sold on No Silver Bullet (NSB)? I can't think of one. Would someone not woke on NSB prefer a 3x improvement over a 10x improvement, for example? Of course not.
I love this paper and have used it repeatedly with clients over the years. I especially love the explanation of essential vs. inessential complexity (difficulties). A proto thesis on yak shaving, if you will.
The funny thing is that that post doesn't address any of the theoretical arguments made by Brooks. While certainly less rigorous than physics, his argument is analogous to a discussion of the speed-of-light limit, and "Yes Silver Bullet's" argument is analogous to "but what if we use a different kind of fuel in our rockets?", and then claiming that that fuel is the silver bullet with neither empirical nor theoretical evidence to support that claim.
Indeed, quite a few PL enthusiasts called Brooks's predictions overly pessimistic based on similar non-arguments back in the '80s (he lists some of them in his followup, No Silver Bullet, Refired), but reality proved his predictions to be overly optimistic. So he was right and they were wrong. The arguments in Yes Silver Bullet had already been put to the test and failed. Is that reality a definitive proof that there is little accidental complexity left? Of course not, but it does raise the bar for those who claim there's a lot of it left, certainly well beyond simply asserting that that is the case or considering it a reasonable working hypothesis.
So the author does conjecture that statically typed FP would represent a major improvement on accidental complexity. I agree that this is unlikely (though I haven't done enough of that myself to be sure) but most of the article seemed to be making an empirical claim about productivity improvements that have already been achieved.
Fred Brooks' central claim was that there would be no "order of magnitude" improvement by removing additional accidental complexity, and that real improvement would come from tackling essential complexity.
Whether Mark Seeman's empirical claim is true depends somewhat on whether we think the improvements were on the accidental or essential complexity, and whether any of them individually were an "order of magnitude" improvement.
What is accidental vs. essential complexity? According to Brooks, the essential complexity is formulating "complex conceptual structures" that make up the design of the software. In other words, figuring out what the software is supposed to do and how to do it.
Accidental complexity is concerned with "the mapping of [the design] onto machine languages within space and speed constraints". It seems to me that, according to Brooks, the biggest barriers for software development prior to 1986 was that programming languages were pretty horrible for expressing designs and that computers were too slow and limited to be able to effectively execute software (without seriously butchering the design).
His argument seems to be that, as of 1986, most languages were sufficiently high-level and computers (and PL implementations) sufficiently capable that most programmers could easily express their designs and expect them to be able to run efficiently enough. There might still be progress to be made there, but that it wouldn't be a massive improvement. Instead, most of the improvement would come from getting better at designing those "complex conceptual structures".
So, let's take a look at Seemann's list of improvements:
* WWW/Stack Overflow
* Automated Testing
* Git
* Garbage Collection
* Agile (the idea of iterative design/development rather than any process in particular)
Which of these tackle essential vs. accidental complexity (as Brooks defines the terms)? Agile development and automated testing would I think belong in the essential category (as Brooks himself refers to iterative design and tight feedback loops as methods for dealing with essential complexity). Garbage collection, on the other hand, seems like it dealt with accidental complexity.
I don't know about the others, but I would definitely say that garbage collection (especially the fast, low-latency GC we have now) does provide an order of magnitude improvement by reducing accidental complexity. Obviously, GC was invented before 1986, but didn't become widely adopted until (more than) 10 years later.
Will there be other such improvements? Maybe. Statically typed FP doesn't seem like a great candidate to me, at least not by itself. I do however think that Peter Van Roy's formulation of pure FP + deterministic concurrency + message passing + global transactional state may qualify. That's not one single advance, but it does represent (IMO) an order of magnitude improvement on the imperative + "threads and locks" programming everyone was doing 20 years ago.
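A toy contrast (in Python, not Van Roy's model or Oz): the same counter kept behind a lock versus owned by a single receiver that everyone else messages through a queue. The point of the second style is that only one piece of code can ever touch the state.

    # Illustrative sketch: shared state + lock vs. message passing over a queue.
    import threading, queue

    # Threads-and-locks style: shared mutable state; correctness depends on
    # every writer remembering to take the lock.
    counter = 0
    lock = threading.Lock()

    def locked_worker(n):
        global counter
        for _ in range(n):
            with lock:
                counter += 1

    # Message-passing style: one owner of the state, everyone else sends messages.
    inbox = queue.Queue()

    def owner():
        total = 0
        while True:
            msg = inbox.get()
            if msg is None:                   # shutdown sentinel
                print("message-passing total:", total)
                return
            total += msg

    def sending_worker(n):
        for _ in range(n):
            inbox.put(1)

    lock_threads = [threading.Thread(target=locked_worker, args=(10000,)) for _ in range(4)]
    owner_thread = threading.Thread(target=owner)
    senders = [threading.Thread(target=sending_worker, args=(10000,)) for _ in range(4)]

    for t in lock_threads + [owner_thread] + senders:
        t.start()
    for t in lock_threads + senders:
        t.join()
    inbox.put(None)                           # tell the owner to finish up
    owner_thread.join()
    print("locked total:", counter)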
Still, I think Brooks was right in the general sense, namely that we should put more emphasis on producing better designs.
From that comment I was expecting to find an essay claiming to have the silver bullets because hand-waving. I was not disappointed. It even listed functional programming.
Yes, the tech landscape of today is vastly different from 30 years ago.
But to me the magical silver bullet is really an observation of recurring bullshit - a competent programmer would never realize the gains claimed.