This article was originally written as a critique of C and Unix: it argued that they had a lot of problems, but because they shipped with maybe 50% of the functionality, a lot of people accepted them as good enough, and they then spread like "the ultimate computer virus". After its publication, many Unix people reinterpreted it as a celebration of Unix's successful approach.
I have always strongly disliked JavaScript for its inconsistencies and ugly syntax. I find it unfortunate that everything now ships with a JavaScript engine and forces development in JavaScript regardless of the platform's native programming language, and I always dreamed of the alternative timeline in which Scheme became the language of the web.
But I've changed my position in recent years, because of this article. Now I think the same principle applies to JavaScript. It started with maybe 50% of the functionality, and so it became the ultimate virus on the Web, and since then a huge amount of manpower has been spent improving it. It still has the inconsistencies of its original design, but thanks to all that effort, its overall usability is actually higher than that of alternative systems with a "cleaner" / "better" design, just as a Unix-like OS such as FreeBSD or Linux remains one of the most usable systems in existence. So I think I'll just accept JavaScript...
Look at page 219 of the book (224 in my PDF viewer) and read around the section that starts with "It is far better to have an underfeatured product that is rock solid ..."
It's probably not a "reinterpretation by the Unix people". More like, the author did too good a job of not taking sides explicitly, so everyone just interpreted it the way they liked. If anything, the author argued why "worse is better" really is better. Taking the "worse" approach just means not expending all the unnecessary effort, which results in a product that actually exists.
In other words, it's an essay about how big bang approaches don't work out.
I recently discovered my favorite summary of "Worse is Better". It's by the author, but it isn't anywhere in articles by that name.
“It is far better to have an under-featured product that is rock solid, fast, and small than one that covers what an expert would consider the complete requirements.”
Hm... when I think "worse is better" I'm not thinking of software that is "rock solid, fast, and small". Have I been misunderstanding the essay?
I thought most of these applications start out small but they're nowhere near rock solid and haven't been optimised for speed. It just gets the essential features in the hands of the people who need it and works just well enough to be useful.
Well, at least 70% of the HN crowd has been misunderstanding it.
The article has an anecdote in it where the "better" guy has grandiose ideas about wanting to make a perfect system that always does the right thing, even at a huge increase in implementation complexity.
In that specific case, not having to restart system calls after an interrupt. I've always thought the "worse" guy made the right choice by shifting the complexity out of the core, because we should deal with each problem in the place where it's most natural. There's nothing wrong with requiring users to make and use (or only use) a wrapper that deals with that complexity in one specific way that is right in the given situation.
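A minimal sketch of such a wrapper (my own illustration, not code from the article): on the "worse" design, an interrupted system call fails with `EINTR`, and userspace simply retries. The kernel stays simple; the loop lives where the caller can see and control it.

```c
#include <errno.h>
#include <unistd.h>

/* Hypothetical userspace wrapper: retry read() whenever it is
 * interrupted by a signal (errno == EINTR), instead of requiring
 * the kernel to transparently resume the call. This is the
 * complexity shifted out of the core and into a one-line loop. */
ssize_t read_retry(int fd, void *buf, size_t count) {
    ssize_t n;
    do {
        n = read(fd, buf, count);
    } while (n == -1 && errno == EINTR);
    return n;
}
```

Callers who want different interruption semantics just don't use the wrapper, which is exactly the flexibility the "worse" choice preserves.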
I was surprised as well! But this is from the author so one can't argue with it.
As I think about it more, it makes a lot of sense. The only people willing to put up with something flaky are programmers. For most people, it can do little but it has to be reliable.
Yes. Simplicity of implementation is given priority over all other factors. This does not, to my mind, mean "rock solid" software (even if care is taken for "observable aspects" of correctness) unless we have very different definitions for the term.
It could be fast by virtue of having a simple implementation but that isn't a given. Sometimes getting speed is complicated, especially on modern processors.
I think the central point here is "completeness must be sacrificed whenever implementation simplicity is jeopardized".
Indeed, sometimes omitting a feature which really ought to be there makes life easier for the end user, not just the implementor, even though the user has a good reason to want the feature.
For example, Subversion allows versioning empty directories while Git (like CVS) doesn't.
On the face of it this is just a deficiency in git, but the fallout from Subversion doing the Right Thing is quite extensive: because Subversion treats directories as first-class objects rather than just part of a file's name, you can get a Subversion repo into all kinds of strange and confusing states.
With git the user is never going to be confused by the result of something like "I deleted the directory then tried to merge a branch which added a file inside it".
Yes, this article's philosophical summary could more accurately, if less memorably, be written "strategies that prioritize short-term objectives tend to outcompete those that prioritize long-term objectives".
The flaws of incremental refinement should be obvious to anybody who's worked in a codebase more than 5 years old built on that approach. Maybe it could work in theory, but in practice, the iterations on crap produce more crap.
As for a specific example, I remember a discussion with Uncle Bob where he specifically mentioned banking and accounting as systems that shouldn't use that approach, because you'll build the wrong thing.
And yet the conventional wisdom is that "a complicated system that works is almost always found to have been derived from a simple system that works".
But you're right, evolving code often turns it into a mess. The only way that doesn't is if, at each stage, the people working on it keep the architecture and code clean. That takes discipline, not just by the programmers, but also by management - they have to give the programmers time to do the cleanup that is needed, not just time to shoehorn something in.
If you build a 1-story house, then gradually do whatever it takes to build another floor on top of it, you'll either never have a tall building, or end up with something like a pyramid or huge angular support pylons (i.e. an extremely wide base).
Incremental refinement is like implementing a local search algorithm.
You may end up getting stuck on a local maximum in the software design space.
To optimize and really find the global maximum often requires backtracking, which in the case of software development might mean throwing out large portions of the codebase and starting over.
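To make the analogy concrete, here is a toy sketch (my own, not from the thread): greedy hill-climbing over a one-dimensional "design quality" landscape stops at the first peak it reaches, and only a restart from somewhere else can find the higher one.

```c
#include <stddef.h>

/* Toy "design quality" landscape with a lesser local peak (5)
 * and a higher global peak (9). Values are purely illustrative. */
static const int landscape[] = {1, 3, 5, 4, 2, 6, 9, 7};

/* Greedy local search: keep stepping to the better neighbor
 * until no neighbor improves on the current position. */
int hill_climb(const int *q, int n, int i) {
    for (;;) {
        int best = i;
        if (i > 0 && q[i - 1] > q[best]) best = i - 1;
        if (i + 1 < n && q[i + 1] > q[best]) best = i + 1;
        if (best == i) return i;  /* local maximum: no uphill step */
        i = best;
    }
}
```

Climbing from index 0 gets stuck on the lesser peak; only a restart on the other slope reaches the global maximum -- the codebase analogue of throwing large portions away and starting over.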
The title itself lets you know the desire of the author to occupy some philosophical high ground, while admitting some hard truths.
They could have asked more honestly, "why is C/Unix winning hearts and minds while Lisp-based systems are not" but first they wanted to provide the given that C/Unix was clearly inferior to Lisp based systems. That was not up for debate.
So that sets the bounds for the discussion that follows, and frames the discussion as "why are people choosing the clearly inferior over the clearly superior?"
You could write a similar essay from the point of view of what you might call the "original intent" of C/Unix, which is that simplicity is chronically undervalued and everybody, everywhere, all the time, tries to add "just one more feature" to make things better.
That essay has already been written, Rob Pike's UNIX Style, or cat -v Considered Harmful [1]. It inspired a web site and project devoted to simplifying Unix [2].
> C is therefore a language for which it is easy to write a decent compiler, and it requires the programmer to write text that is easy for the compiler to interpret.
Ironically this is no longer true. Not so much because C has changed but because the hardware underneath it has.
I would instead state that the standard for what constitutes a "decent compiler" has risen. For any given performance target, I'd still bet that it takes less work to hit it with a C compiler than with a compiler for most newer languages. (Partly because the success of C has resulted in C-related hardware design constraints...)
Worse isn't better, simple is better. Simple is much easier to adapt to complex use cases because it can be easily understood at a high level. Simple avoids bikeshedding, which Lisp developers (the intended audience of this article) are notorious for.
I remember reading this, when it was first published, I think maybe in AI Magazine.
Being a huge Common Lisp fan at the time, I immediately adopted the idea that Correctness is the most important single thing above all else. I don't care what's in the box. The external interfaces on the box should be correct.
This seems like a basic thing we take for granted in tools, libraries, languages and other things we use. Dishwashers. Thermostats.
There's also the hiding-in-plain sight explanation that C was just easier than Lisp for English-speaking people to learn and use because, like English, it's SVO instead of VSO. Then object-oriented languages overtook C for the same reason.
It's like literary fiction vs popular fiction. One might think literary fiction is the right way to do fiction, but popular fiction is where the money is.
But define "better". Literary fiction is "better" in the sense of "communicating profound ideas better". Popular fiction is "better" in the sense of "being something that people want to read".