I think this criticism is slightly misattributed. Ideologues ask us to accept a false dichotomy. Ideologues exist on both sides of an issue usually. For example, there are most definitely ideologues who believe this same dichotomy exists, but believe that the only option is to choose beauty and elegance, even at the expense of practicality. These are the people who (for example) wish Lisp machines had won and that we lived in a monoculture of only Lisp.
I like C++ because I think it's useful, but I'm not a Worse is Better Ideologue. For example, I like Rust as a potential "better is better" replacement for a lot of C++'s use cases. I like Rust particularly because it seriously addresses nearly all of the practical advantages of C++. Many "Better is Better" ideologues would rather just hand-wave away the negatives of the compromises required to achieve their vision (like mandatory GC).
Rust proves that "better is better" design can address real-world practical challenges, while also providing an escape hatch back into a more primitive "Worse is Better" kind of world (unsafe blocks). It will be interesting to see if this approach gains the traction that I hope it will.
I think "worse is better" vs. "better is better" is fundamentally business-driven: does your product deliver value through compatibility/familiarity or by doing things in a technically better way than the competition? Both have been successful, and choosing the right one is basically a question of business judgment.
Some "better is better" ideologues will make mandatory GC a part of their vision despite its costs. Rust has taken a different approach and come up with a "better is better" design that does not make this compromise.
I can't parse what exactly this is supposed to refer to. Doctors treat the patients they get - and frequently wind up with no choice but to use treatments that are only barely worse than the disease (see the history of chemotherapy drugs, for instance). Outside of legal academia, the legal profession is always working with imperfect information, imperfect systems, imperfect people making the calls (ffs, I feel a Law and Order episode breaking out)...
To me, the giant disconnect is that we've still got two threads of thought still mixed together under "Computer Science": the actual science-y bit, and the "shovel bits from A to B" software construction part. It's as if materials science, structural engineering, and construction management were all lumped together. Putting a new sidewalk in does not require the development of an entirely new method for making concrete.
Oh come on. You know what he's trying to say, you're just saying he's making an over-generalization and that he's wrong. If you want to say that, say it. Don't pretend like it's so incomprehensible you can't parse the sentence.
First interpretation is the descriptive usage. We could explicitly prefix the meme and qualify it as RGWIB (Richard-Gabriel-Worse-is-Better). The original label was the observation that "simpler" software was more successful than more full-featured software with ambitious goals. Richard's thesis wasn't about "hacks" but about small and simple things that satisfy users and build momentum.
Second interpretation is the prescriptive (or self-justifying) stance, which I might call HOBWIB (Hacks-Ok-Because-Worse-is-Better). This appears to be what the blogger is complaining about.
However, HOBWIB isn't Richard Gabriel's thesis. That "worse is better" has taken on a life of its own and been repurposed by others who are unaware of RG's original meaning is just a circumstance of adopting snappy soundbites. Whatever behavior the author is complaining about would exist whether the exact phrase "worse is better" existed or not.
>“Worse is Better”, in other words, asks us to accept a false dichotomy: either we write software that is ugly and full of hacks, or we are childish idealists who try to create software artifacts of beauty and elegance.
That label does not have that power over us. For example, we have a label for certain human behavior and call it "passive aggressive." The existence of that phrase did not force us to choose whether to be passive aggressive or not. Likewise, thinking that the existence of 3 words "worse is better" is forcing us into a dichotomy of bad vs good design is flawed analysis.
RPG's WIB said nothing about bad design. C didn't do better than lisp because it was a hack. It was an incredibly clean design, as clean as the original lisp. RPG was comparing two clean designs, and trying to describe the evolutionary properties of one that made it more fit in its environment (ie us) than the other. The biggest such property is simplicity of implementation. (Later C compilers have gotten more complex and ++, but a modern compiler wouldn't have been as competitive in an earlier world where C wasn't already adopted: http://www.ribbonfarm.com/2011/09/23/the-milo-criterion)
There have been many ideas in software development floating around in the last couple of decades, from pure functional programming to interactive programming and programming by example. All of these approaches -- and the many others I haven't mentioned -- have explored interesting and possibly promising future directions, but all are yet to demonstrate consistently superior results, across various domains, to the "broken" system we have today.
We are still exploring, and maybe some of the ideas people play with today will serve as the seeds of future, revolutionary development procedures. But it's not like we know today how to do it significantly better even if we wanted to do away with the "old" ways right now.
I suspect a lot of programmers develop a kind of immune response to talk of revolution, simply because it so often comes from people who haven't understood the problem, are not aware of the history or choose to ignore the realities of compatibility and market momentum.
They don't. They're just selling something. Not to say some approaches aren't better than others, it's just the only way to know which approach is really better is to implement the same problem space in both and compare. Anybody telling you that X is better than Y at Z without personally using both X and Y to do Z is just trying to sell you something. Beware.
It's also why the best test when deciding on a programming language or framework is to look at what others have actually created with it.
To a point. If everyone followed that rule, no one would ever build anything in anything new. At which point, looking at what others have actually created with it would merely be a test of the longevity of the programming language or framework.
I think the opposite of your equation is also true. I can't say that X isn't better than Y at Z without personally using both X and Y to do Z.
The real world exists. Successful solutions usually come from actually understanding the problem and all its constraints. Solutions that are dismissed as overly academic tend to fall afoul of solving just the immediate problem and ignoring e.g. compatibility, switching costs, accessibility to beginners. Similarly, the linux community spent a long time burying its head in the sand w.r.t. usability and appearance. Sure, people shouldn't care how pretty your UI is. But they do, and you can't change that by burying your head in the sand.
Everything has a cost. The benefits of your new solution and the switching cost have to outweigh the pain of the old solution by a large margin. The original 'worse is better' was really an observation that simple solutions that do half the job are often cheaper than a hugely complicated solution that can do everything. If you spend every day coming up with better ways to make widgets, it's easy to believe that everyone wants to have as much widget-customising power as possible and is willing to invest time in learning to get that power. If you actually talk to your users you might find that only having three kinds of widget isn't annoying enough to make them spend time learning something better.
Saying "Sure, people shouldn't care how pretty your UI is." mischaracterizes user interface design as being about prettiness rather than interaction. But that might in fact be your very point, and you might actually be talking about the mischaracterization of others.
I love Lisp. I used it a lot for creating code that elegantly modifies itself.
But my entire software company succeeded because of languages like C/C++ and python, not because of Lisp or any "functional programming language of the week".
C gives you raw power that nothing else does. The fact that we could use C from C++ proved very useful many times.
Some people believe that forcing other people not to use things like pointers is a good thing, it is "better".
For me it is the same as forbidding people to use sharpened edges on knives for their own good.
Yes, it is better in the sense that people won't cut themselves with knives. But it is worse in other areas too.
And if you let people choose, they will choose to continue using their knives until something genuinely better appears.
This is exactly what people like this man can't stand: people choosing on their own to use something he doesn't like, so he wants to force his "better" way on them (my way or the highway).
If you have something better, well, show us the code, instead of ranting, and you will discover the fact that making anything that people actually want to use is way harder than ranting in a blog.
People cut themselves far worse trying to use a knife that's too dull for their task than a knife that's too sharp. Sharp knives are only safer when you're not using the knife (more likely to be cut by a sharp knife in a drawer than a dull knife in a drawer, &c).
Of course it is. Yet computer science history is full of overly ambitious projects (Plan 9, Vista, Lisp) falling by the wayside while inferior, more incremental solutions continued to chug along. The author didn't stop to consider why those failed, only to blame a myopic community; as if Lisp, being better, would have proven itself if only more people gave it a chance.
Incremental changes are how we progress. We take the lessons we have learned from what we're doing now, and then try something a little bit different. We can't try and redesign everything at once, so we pick a few things, and other compromises get left in. Compromise is an essential requirement to getting anything big done.
I think he argues that this sort of absolutism is broken.
The author isn't proposing that we discard incremental change, just the uncritical assumption that incremental is the only reasonable change.
See his analogy with portfolio theory: he's not even challenging incrementalism as the default, any more than he'd suggest putting 90% of your money into emerging markets.
Exactly. A few examples.
- C's "the language has no idea how big an array is" problem. Result: decades of buffer overflows, and a whole industry finding them, patching them, and exploiting them.
- Delayed ACKs in TCP. OK idea, but the fixed timer was based on human typing speed, copied from X.25 accumulation timers. Result: elaborate workarounds, network stalls.
- C's "#include" textual approach to definition inclusion. Result: huge, slow builds, hacks like "precompiled headers".
- The UNIX protection model, where the finest-grain entity is the user. Any program can do anything the user can do, and hostile programs do. Result: the virus and anti-virus industries.
- Makefiles. Using dependency relationships was a good idea. Having them be independent of the actual dependencies in the program wasn't. Result: "make depend", "./configure", and other painful hacks.
- Peripheral-side control of DMA. Before PCs, IBM mainframes had "channels", which effectively had an MMU between device and memory, so that devices couldn't blither all over memory. Channels also provided a uniform interface for all devices. IBM PCs, like many minicomputers, originally had memory and devices on the same bus. This reduced transistor count back when it mattered. But it meant that devices could write all over memory, and devices and drivers had to be trusted. Three decades later, when transistor counts don't matter, we still have that basic architecture. Result: drivers still crashing operating systems, many drivers still in kernel, devices able to inject malware into systems.
- Poor interprocess communication in operating systems. What's usually needed is a subroutine call. What the OS usually gives you is an I/O operation. QNX gets this right. IPC was a retrofit in the UNIX/Linux world, and still isn't very good. Fast IPC requires very tight coordination between scheduling and IPC, or each IPC operation puts somebody at the end of the line for CPU time. Result: elaborate, slow IPC systems built on top of sockets, pipes, channels, shared memory, etc. Programs too monolithic.
- Related to this, poor support for "big objects". A "big object" is something which can be called and provides various functions, but has an arms-length relationship with the caller and needs some protection from it. Examples are databases, network stacks, and other services. We don't even have a standard term for this. Special-purpose approaches include talking to the big object over a socket (databases), putting the big object in the kernel (network stacks), and trying to use DLL/shared object mechanisms in the same or partially shared address space. General purpose approaches include CORBA, OLE, Google protocol buffers and, perhaps, REST/JSON. Result: ad-hoc hacks for each shared object.
Now that a unix permission model is the norm, viruses are comparatively gone and replaced by malware that simply tricks the user into installing it. No permission model will help you against this. As a partial result, now we see things like iOS where we remove control from the user, or OS X where we try to make it inconvenient to be duped into giving access.
There are certainly still exploits that don't require duping the user, but the anti-virus industry certainly wasn't established based on these.
That's not really true. No permission model will be 100% effective, but a more fine-grained permission model might lead to more users saying "Um, no, mysteriously executable pornography, I don't want to give you my bank records and the ability to email my friends."
I think being able to explain to the computer how my data is grouped, and access patterns in it, is more natural for users than most of the security models we have today.
It's also much easier to have two copies of the browser loaded, depending on whether I'm invoking it through name.banking or name.general. And much easier to explain to grandma that you do banking when you use name.banking and you look at cat photos in general.
Grandma isn't stupid; she just doesn't understand how technology works. Making permissions match how she categorizes her information and how she divvies up tasks is more natural for her than insisting security only works if she understands how computers work.
Thanks to UNIX boxes being the bulk of the always-on systems attached to the internet at that time, they presented most of the attack surface, and consequently there arose an industry of people attempting to protect them.
No they aren't. There's tons of them. You don't need to be admin for a virus to be a problem. All the data a user cares about is owned by that user anyways. There's plenty of "haha I encrypted your files, pay me if you want to access them ever again" extortion viruses.
Well, yes. "Worse is better"...than none at all. All of your parent's examples are better than none at all :)
The configure script is a hack to solve another "worse": there is no direct way to get pertinent platform information from the C environment, like what functions are available, how big some type is, and so forth.
Try out the Mono compiler for C# some time. It is so fast that you might as well recompile your entire project every time you change one line of code. I'd pay serious money to get that kind of performance from a C++ compiler. Tons of other compilers are really fast. JIT is fast in all modern browsers. Go compiles in a snap. Python starts running immediately.
The only other language I use with a compile time comparable to C++ is Haskell.
How can you build anything large, or even large-ish if you don't have compilation units?
Across languages? Even if you stipulate that all languages you care about have the same notion of "subroutine call", how do you portably handle data marshaling?
Marshaling is an important, and neglected, subject in language design. Compilers really should understand marshaling as a compilable operation. In many cases, marshaling can be compiled down to moves and adds. Done interpretively, or through "reflection", there's a huge overhead. If you're doing marshaling, you're probably doing it on a lot of data, so efficiency matters.
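To make the "moves and adds" claim concrete, here is a minimal hand-compiled sketch (struct and wire layout invented for illustration): marshaling a small record to a fixed little-endian wire format is nothing but shifts and byte stores, exactly what a marshaling-aware compiler could emit directly.

```c
#include <stdint.h>

/* In-memory record with compiler-chosen layout and padding. */
struct point { uint32_t id; int16_t x, y; };

/* Hand-compiled marshaling to a fixed 8-byte little-endian wire layout.
   Each field is a couple of shifts and stores -- the "moves and adds". */
static void marshal_point(const struct point *p, uint8_t out[8]) {
    out[0] = (uint8_t)p->id;        out[1] = (uint8_t)(p->id >> 8);
    out[2] = (uint8_t)(p->id >> 16); out[3] = (uint8_t)(p->id >> 24);
    out[4] = (uint8_t)(uint16_t)p->x; out[5] = (uint8_t)((uint16_t)p->x >> 8);
    out[6] = (uint8_t)(uint16_t)p->y; out[7] = (uint8_t)((uint16_t)p->y >> 8);
}

static void unmarshal_point(const uint8_t in[8], struct point *p) {
    p->id = (uint32_t)in[0] | (uint32_t)in[1] << 8
          | (uint32_t)in[2] << 16 | (uint32_t)in[3] << 24;
    p->x  = (int16_t)(in[4] | in[5] << 8);
    p->y  = (int16_t)(in[6] | in[7] << 8);
}
```

A reflection-based marshaler doing the same job walks type metadata at runtime for every field; this version is branch-free straight-line code.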
For Google protocol buffers, there are pre-compilers which generate efficient C, Go, Java, or Python. That works. They're not integrated into the language, so it's kind of clunky. Perhaps compilers should accept plug-ins for marshaling.
Most other cross-language systems are more interpretive. CORBA and SOAP libraries tend to do a lot of work for each call. This discourages their use for local calls.
Incidentally, there's a fear of the cost of copying in message passing systems. This is overrated. Most modern CPUs copy very fast and in reasonably wide chunks. If you're copying data that was just created and will immediately be used, everything will be in the fastest cache.
Fortunately, we can generally assume today that integers are 32 or 64 bit twos complement, floats are IEEE 754, and strings are Unicode. We don't have to worry about 36-bit machines, Cray, Univac, or Burroughs floats, or EBCDIC. (It's really time to insist that the only web encoding be UTF-8, by the way.) So marshaling need involve little conversion. Endian, at worst, but that's all moves.
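The "endian, at worst, but that's all moves" point can be shown in a few lines. A 32-bit byte swap is pure shifts and masks, and modern compilers typically recognize the pattern and emit a single bswap/rev instruction:

```c
#include <stdint.h>

/* Endian conversion really is just moves: a 32-bit byte swap written
   as portable shifts and masks. Compilers commonly collapse this
   whole expression into one hardware byte-swap instruction. */
static uint32_t swap32(uint32_t v) {
    return (v >> 24)
         | ((v >> 8) & 0x0000FF00u)
         | ((v << 8) & 0x00FF0000u)
         | (v << 24);               /* e.g. 0x11223344 -> 0x44332211 */
}
```

So even the worst-case marshaling conversion left in a post-EBCDIC, post-36-bit world costs essentially nothing.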
I generally agree with what you're saying.
This is one of the things COBOL, of all languages, generally got right: You had a Data Definition language, which has been carried over to SQL, and the compiler could look at the code written in that language to create parsers automatically. Of course, COBOL having been COBOL, this was oriented to 80-column 9-edge-first fixed-format records with all the types an Eisenhower-era Data Processing Professional thought would be important.
The concept might could use some updating, is what I'm saying.
> Most modern CPUs copy very fast and in reasonably wide chunks.
And most modern OSes can finagle bits in the page table to remove the need for copying.
> strings are Unicode
By which you mean UTF-32BE, naturally. ;)
> It's really time to insist that the only web encoding be UTF-8, by the way.
This might actually be doable, if only because of all the smilies that rely on Unicode to work and the fact UTF-8 is the only encoding that handles English efficiently.
That tends to be more trouble than it's worth. It usually means flushing caches, locking lots of things, and having to interrupt every CPU. Unless it's a really big data move (megabytes) it's probably a lose. Mach did that, and it didn't work out well.
And the point is exactly to address data marshalling, which is a hard enough problem that reducing the number of applications that have to independently solve it would be a great benefit.
That most developers don't even tend to get reading from/writing to a socket efficiently right (based on a deeply unscientific set of samples I've seen through my career) implies to me we really shouldn't trust developers much to get data marshalling right.
Worst case? Your app falls back on using said interface to exchange blocks of raw bytes if the provided model doesn't work for you.
These days pluggable devices (SATA, USB) don't get DMA access. Only physical cards do (PCIe, etc.) -- again because of performance issues.
Both FireWire and PCIe over cable expose memory via a pluggable interface. In the FireWire case, it's not really DMA; it's a message, but the ability to patch memory is there. FireWire hardware usually offers bounds registers limiting that access. By default, Linux allowed access to the first 4GB of memory (32 bits), even on 64-bit machines. (I once proposed disabling that, but someone was using it for a debugger.)
FireWire was basically external PCIe (before there was PCIe); you could do DMA with it, and there was a proof of concept of someone using an early iPod to read/write host memory.
You can't with things like eSATA or USB. There is no DMA capability for the external device to exploit. The host controller (EHCI and the like) is the one doing the DMA. You can't write directly to memory with those. Of course, USB is exploited by doing things like descriptor buffer overflows.
However, he completely misunderstood my point to suggest that worse-is-better is a conscious design quality. In no way did I mean that, and I don't believe Richard Gabriel meant that either. Worse-is-better is about the reason certain solutions win in the market; it's about an evolutionary trait, not a design philosophy.
Look at it this way. If there are 100 possible technical solutions to a given computing problem, the ones that solve the problem more comprehensively are naturally going to tend to be more complex; this complexity comes with an adoption cost, and that cost works against the likelihood of adoption. Furthermore, when you are talking about something that solves the scope of problems which CSS solves, there is no way you can just sit down and design a better system; it will need to go through many iterations to solve all the things that CSS solves (which no single human being is comprehensively aware of, btw). In order to see that kind of investment, a system needs buy-in from downstream users over a long period of time.
So the strawman that the author sets up is the idea that someone is out there selling a worse solution on purpose because of this meme, which is simply an observation of how technical adoption markets play out. Of course such foolish people may be out there, but no intelligent developer sets out to create something that is deliberately worse. Rather, each tradeoff is considered in its specific contemporary context with the imperfect information available. Whether a solution gains traction and wins in the marketplace has nothing to do with the subjective qualities of "worse" or "better", but rather is a confluence of the state of the market at various points in time: how well it solves immediate problems, how well it works with existing tech, how easy it is to adopt, and of course some amount of hype loosely related to the aforementioned.
What doesn't play a huge role is how ugly the evolution of this tech is going to look 20 years down the line when the entire landscape has evolved.
It's a little tiresome when a young idealist comes into a 25-year-old field (I'm guessing the author is not much older than that himself), pisses on the hard work of thousands of people pushing web tech forward bit by bit over decades, and says it should be replaced wholesale because it's just rubbish, and then when someone tells them that you're welcome to try but you'll never get any traction in that Sisyphean task, he responds that people aren't engaging in a rational discussion.
> That is, we do not merely calculate in earnest to what extent tradeoffs are necessary or desirable, keeping in mind our goals and values, there is a culture around making such compromises that actively discourages people from even considering more radical, principled approaches.
If you can suspend your frustration for a moment, it seems that you and the author are both arguing for explicit consideration of the tradeoffs faced in the real world. Both sides of the field have a tendency to reduce this consideration to easily evaluated heuristics (eg "useless ivory tower wankery").
I think the author's earlier posts on generous reading apply here. There is plenty here to disagree with, but no one gains much from taking sides and fighting to the death. Rather than getting wound up, we could be having a much more valuable discussion of how market forces shape technology and how we might find ways to support and apply long-shot research.
EDIT From the discussion you mention:
> What might otherwise be an interesting, nuanced discussion of the economics of technology adoption, network effects, switching costs, etc, is instead replaced with sloganeering like 'Worse is Better'.
> There may well be times where fighting for revolutionary change or finding some adoption story for completely new tech is a better path forward than incremental improvements. Questions like this can not be settled at the level of ideology or sloganeering...
CSS has proved an awful system for layout, design and practically every domain it attempts to address over the last couple of decades. Having worked with both for a long time, here's the fundamental difference between HTML and CSS, as I think it's a useful comparison and highlights when worse is truly better, and when it is just worse:
HTML was limited and simple by design, and has gradually improved (the original worse is better ethos)
CSS was broken by design, and hasn't much improved (maybe that will change in a few years, with flexbox and grid).
I don't find it at all surprising that someone thinks CSS could and should be replaced, and I'm skeptical that if the best minds of our generation took another look at it, they couldn't find some far better ways to lay out content than an inconsistent and confusing box model with concepts like floats tacked on to it and lots of module proposals trying to glom on additional layout modes. I can't think of many problems CSS solves well, apart from separation of style/layout and content, and even there, at present, we have div soup for grids instead of table soup - hardly a huge improvement on what went before. Some of the problems it attempts to solve are not even real-world problems and are of its own devising (for example the Cascading Style priorities in its name - what a bizarre focus for a layout language).
Perhaps the answer is a Turing-complete layout language (personally I doubt it), or perhaps it's another, more informed declarative one, but I'm quite sure we can improve on it, and the comparison to HTML is apposite, because HTML has stood the test of time rather well compared to its companion technologies.
It's a little tiresome when a young idealist comes into a 25-year-old field
We don't live in the best of all possible worlds, and to make it better, we sometimes have to take a step back from a local maximum and look at the bigger picture; that involves listening to 25-year-olds coming up with something better - most of the time they won't, but sometimes they will. If you find yourself not even listening, and more concerned with sunk costs, work already done, and expertise already gained, that's not healthy and not a convincing riposte.
That doesn't involve listening to them, it involves them putting in the work to develop something as a proof of concept, and that proof of concept evolving through feedback and collaboration into something people want to use.
To put it more crassly, people don't stop using something because others say it's shit, they stop using something when there's a better option on the table, and that better option needs to be more than just architecturally improved, it needs to be genuinely useful, and this is what people who twist "worse is better" fail to understand... legacy code sticks around when it's genuinely useful.
And to address the original author... yes, the idea with writing software is to get the job done, but if you want to work on tooling to improve that process be my guest.
Well I think it's always useful to listen - even if you disagree or think your interlocutor mistaken. Of course as you say ideas are worth a lot less than implementation, but the one point I fully agree with the original author on is this:
CSS is not best of breed, existing for 20 years does not make it good in any sense, and it is a terrible example of worse is better in the original sense. However arguing over "worse is better" is just going to end in arguing about what that vague phrase means, so I won't enter that particular rabbit-hole - I agree the original author misunderstood or has not encountered the original meaning.
In the case of CSS, what holds back adoption of alternatives is almost entirely browser-vendor inertia and the institutional barriers to producing a better solution, not some technical superiority of CSS, so what I object to in the parent comment is the implication that CSS won because it is technically superior to other layout methods and is complex because it deals with lots of complex domain problems which a 25 year old couldn't possibly fathom. It introduces needless complexity, doesn't even properly address the domain problems (design, layout, grids etc), was badly designed from the start and has become even more complex with age, and I'd argue it has succeeded mostly by riding on the coat tails of HTML.
You're as much strawmanning his article as you're accusing him of strawmanning other points.
It's possible lots of people did put a lot of work in to styling on the web, and that it's still bad and we could do better if we started over. Of course it's going to take a lot of effort to get back to where a current project is, but that's the point: people who are using current projects should also be looking at their future costs (total cost of ownership over company/product lifetime), and take on a couple high risk investments if their projected long term cost is lowered by it. They're not doing that, because people like you show up and say "Hey, it'll never get done because it's a lot of work, so let's just keep hacking on a complexity explosion."
Of course a CSS replacement isn't going to immediately replace it. That's nonsense. Rather, the author is saying that if we started now, in 5-10 years, our choice to reduce complexity would have paid off, and we'll now pull ahead of where the hackpile that is CSS would have us at that time.
But if we never start, we're never going to get that growth, and have to keep hacking away at a mess forever.
The opposite is true. The best solutions are simple, not complex. Piling features one on top of other features to solve every problem as you encounter it gets you started faster than first thinking carefully about the problem space and then designing a solution. That's worse is better, or better: worse is quicker.
And by the way, CSS is definitely not complex because of irreducible complexity.
CSS is like a second draft. We can do much better.
"I have made this longer than usual because I have not had time to make it shorter." - Blaise Pascal
"Plus, it had to be done in ten days or something worse than JS would have happened." - Brendan Eich
And the fundamental idea of Worse is Better is so much older than UNIX, even in the field of computer science; John von Neumann himself said that the von Neumann machine was a temporary workaround and that a better architecture would quickly replace it.
I live in a near constant fight in the rebel factions of Better and so far, I have to reckon we get constantly massacred by the Empire of Worse.
In my eyes "worse is better" is about the mindset of approaching a task. It is about diving right in and learning through production - without being paralysed by the idea of introducing hacks or ugly design. It is the idea that, for the moment, there isn't a need to be worried about covering every edge case or possible failure option. It persuades you to focus on something simple and easy to explain, with a single purpose or intent. It is better to produce something (anything) and see where it takes you.
It also says how important it is to embrace contribution and collaboration. How important it is to, after some threshold, release yourself from feelings of ownership.
But if I had to nail down exactly why I believe "worse" is so successful ("better"), it is because those who create "worse" software don't focus on the software - they focus on the idea. From that the software is painfully drawn. The software might suck, but I believe ideas are better. They are more persistent, easily explored, dynamic, and shareable than software. Ideas that are good, simple, and easily taught are far more important than well-designed software. That is why they survive.
There are lots of programmers and hackers who don't believe in "worse is better". Sometimes you see them on HN with a fantastic new programming language (or something) they have designed and built in isolation - perfect in every aspect (at least to them). Nothing quite hurts like their confusion when interest dwindles and their software is forgotten. All they had seen on HN were "worse" links every day, and after years they had provided "better" - to them it is criminal that it hasn't been picked up and gained momentum.
Worse is better is not going away, and I think you can either engage yourself in it as a philosophy, or struggle.
I suspect you've described Rich Hickey to a tee here. /s
"You might think that by the year 2000 I would have settled what I think of worse is better - after over a decade of thinking and speaking about it, through periods of clarity and periods of muck, and through periods of multi-mindedness on the issues. But, at OOPSLA 2000, I was scheduled to be on a panel entitled "Back to the Future: Is Worse (Still) Better?" And in preparation for this panel, the organizer, Martine Devos, asked me to write a position paper, which I did, called "Back to the Future: Is Worse (Still) Better?" In this short paper, I came out against worse is better. But a month or so later, I wrote a second one, called "Back to the Future: Worse (Still) is Better!" which was in favor of it. I still can’t decide. Martine combined the two papers into the single position paper for the panel, and during the panel itself, run as a fishbowl, participants routinely shifted from the pro-worse-is-better side of the table to the anti-side. I sat in the audience, having lost my voice giving my Mob Software talk that morning, during which I said, "risk-taking and a willingness to open one’s eyes to new possibilities and a rejection of worse-is-better make an environment where excellence is possible. Xenia invites the duende, which is battled daily because there is the possibility of failure in an aesthetic rather than merely a technical sense."
Decide for yourselves."
It makes all kinds of sense. Consider for a second the sort-of parable of Chesterton's fence. As G.K. Chesterton wrote:
"In the matter of reforming things, as distinct from deforming them, there is one plain and simple principle; a principle which will probably be called a paradox. There exists in such a case a certain institution or law; let us say, for the sake of simplicity, a fence or gate erected across a road. The more modern type of reformer goes gaily up to it and says, 'I don’t see the use of this; let us clear it away.' To which the more intelligent type of reformer will do well to answer: 'If you don’t see the use of it, I certainly won’t let you clear it away. Go away and think. Then, when you can come back and tell me that you do see the use of it, I may allow you to destroy it.'"
It's basically that whole Spolsky thing about why you shouldn't rewrite code from scratch:
"Back to that two page function. Yes, I know, it's just a simple function to display a window, but it has grown little hairs and stuff on it and nobody knows why. Well, I'll tell you why: those are bug fixes. One of them fixes that bug that Nancy had when she tried to install the thing on a computer that didn't have Internet Explorer. Another one fixes that bug that occurs in low memory conditions. Another one fixes that bug that occurred when the file is on a floppy disk and the user yanks out the disk in the middle. That LoadLibrary call is ugly but it makes the code work on old versions of Windows 95.
Each of these bugs took weeks of real-world usage before they were found. The programmer might have spent a couple of days reproducing the bug in the lab and fixing it. If it's like a lot of bugs, the fix might be one line of code, or it might even be a couple of characters, but a lot of work and time went into those two characters.
When you throw away code and start from scratch, you are throwing away all that knowledge. All those collected bug fixes. Years of programming work."
Broadly speaking: you, for ANY definition of you, are unable to design a perfect system at the first go. You will not account for all use cases, there will be edge cases you don't consider, so on and so forth. But it goes further than that: if you want to design something better than C++, you need to understand why so many people use C++. If you want to replace CSS, you need to understand why CSS is popular. If your thinking on the matter hasn't evolved much past "CSS is unnecessary" and blaming a single catchy essay for every decision you disagree with, then instead of writing essays about how everyone else is doing it wrong, maybe you should spend more time trying to understand what everyone is trying to do, why they're trying to do it and what resources they have to do it with. And THEN maybe I'll let you tear that fence down.
No, you don't. You just need to design something better than C++.
"If you want to replace CSS, you need to understand why CSS is popular."
No, you don't. You just need to replace CSS.
Now, having people adopt better-than-C++ and better-than-CSS might take a little bit of the psychoanalysis (marketing?) you propose. But understanding why someone uses C++ is not really relevant at all in order to design a superior replacement.
I assure you, the designers of Java did not really care why people used COBOL for so much banking code, and yet here we are, with Java having supplanted COBOL for much of that work.
Every time someone trots out Chesterton's fence, I trump it with Sturgeon's law.
And how do you define "better than C++"? C++ didn't get to be as popular as it is for nothing. It got to be as popular as it is because it is a very good tool for solving certain classes of problems. If you don't understand why people still choose C++ for new projects, you're going to have a tough time building something that will change those decisions. And if you think people only choose C++ for new projects because they're idiots, frankly, you're the idiot.
"Everybody is an idiot" is a much less likely hypothesis than "I'm missing something". Before you throw out a popular tool whose appeal you don't understand, you would be well advised to spend some time comprehensively falsifying "I'm missing something". Hence, "If you want to replace X, you need to understand why X is popular."
- must someone who designs a car understand why people ride horses?
- must someone who designs a phone understand why people use smoke signals?
- must someone who designs a gun understand why people use swords?
I don't mean marketing, I mean actually designing something better. A lot of people don't actually know what better means. It's like the Betamax myth. A lot of people believe that Betamax was the technologically superior format, and that it lost for other reasons. This is bunk. Betamax's edge in picture quality over VHS, all else being equal, was slim to nonexistent. But Betamax was locked to a relatively fast tape speed to keep quality from degrading. VHS let people run at much slower tape speeds in order to fit more video on the same tape. Betamax allowed you to record about an hour's worth of video programming on a standard tape. VHS originally allowed two, and then they just said the heck with it and let you record four hours of bad-looking video. It turns out, though, that it's far more important to most people to have the ENTIRETY of a movie (or, in the case of a four-hour tape, an NFL game) than it is to have the part of it you recorded look pristine.
The lesson everyone draws from Betamax is that we can't have nice things and technological superiority is trumped by other considerations. The lesson everyone SHOULD draw from Betamax is that to win, you have to be better at the right things to actually be better.
The above makes no sense. Understanding why they use C++ is the first and most basic step in designing a REPLACEMENT.
If what you create doesn't cover their use cases then it's not a replacement, it's just a new language.
Whether your replacement handles the use cases people need the old product for intentionally or unintentionally (in the case of the web vs CICS) doesn't matter.
What matters is that your design DOES handle those use cases and those needs.
You can obviously end up designing something as a replacement for X without specific intent or even knowledge of X.
But you can't design something as a replacement for X if it doesn't handle the needs that people use X for.
As I said, it can also be done unintentionally (you create a new language without studying how C++ is used, and it gets adopted in place of C++), but this is far less likely and quite random.
If one really wants to create a replacement for a language, he should very much study the language he wants to replace, and find what he needs to provide in his new language and what he can improve.
Your claim that "no, he can just create a replacement" (without needing to study the previous language) might well be possible in theory, but it's very improbable in practice.
If you want adoption from C++ users, from projects C++ is used in, and for the kinds of stuff C++ excels at, you pragmatically need to study C++ and how it's used, period. That was the case for Rust, for D, for Java and C# earlier, and of course for C++ itself (which studied C with exactly that intent). That was not the case for Go, and that's one reason why Go (by Pike's own admission) failed to gain traction with C++ users.
Of course success is not guaranteed, but merely making a "superior language" is not a way to get C++ users off C++ (or any other language).
You are pointing out marketing issues. I agree that if you don't actually have a better C++ than C++, then you will have difficulty convincing most C++ users that they should switch.
By the same token, if you actually do have a better C++ than C++, it will be evident in the adoption of the technology.
Also at this point, if Java, Go, Objective-C, D, and C# have not convinced the current user base of C++ to use other things, then perhaps it is no more possible or necessary to convince these remaining users to switch than it is to convince modern horse riders that they shouldn't bother to ride, own, or breed horses. For them there cannot be a "better C++ than C++."
Unless, you intend for C++'s replacement to have even more features than C++, which hardly seems even possible.
Oh wait, no, is this lack of quality justifiable because this content-free 2 minutes read is more efficient and easier to write than a serious exploration of the consequences of whatever the author was ranting about?
The article ignores LEAKY ABSTRACTION. That's all there is to say. Now let's have some marketers tell us why that's a good thing and we can get back on this irony loop, this irony loop, this irony loop.
Originally, it looked like this:
* MIT philosophy: never compromise on correctness, even in bizarre corner cases. Aim for conceptual beauty to a programmer.
* New Jersey (Bell Labs) philosophy: compromise on correctness for simplicity or performance.
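The canonical illustration from Gabriel's essay is the interrupted system call. Under the MIT philosophy, the kernel should resume the call transparently; Unix instead returns an error and pushes the complexity onto user code, which must retry. A minimal sketch in C (the wrapper name `read_retry` is my own, for illustration):

```c
/* The New Jersey compromise in practice: a slow read() may be
   interrupted by a signal and return -1 with errno == EINTR,
   and the *caller* is expected to retry. The MIT philosophy
   would have the kernel hide this corner case entirely. */
#include <errno.h>
#include <unistd.h>

ssize_t read_retry(int fd, void *buf, size_t count) {
    ssize_t n;
    do {
        n = read(fd, buf, count);
    } while (n == -1 && errno == EINTR);  /* complexity pushed to user code */
    return n;
}
```

The kernel implementation stays simple; every correct C program pays a small tax instead. That trade is "worse is better" in one loop.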
In 1985, six years before "Worse Is Better" was originally written, the "New Jersey" attitude was probably more useful. Most people, if they wanted to write acceptably performant software, had to do it in assembly. C was less of a leap, conceptually and in terms of average-case performance, than Lisp was. People who'd been writing assembly programs for years could learn how to write performant C. Writing performant Lisp would be much harder. A contemporary Common Lisp executable is at least 40 MB (obviously, that wasn't the case in the 1980s); at that time, 1 MB was a powerful machine. "Worse is better" worked in the 1970s and '80s. If every piece of computing work had to be perfect before it could be shipped, we'd be far behind where we are.
Also, quite a number of the original Unix programs were for natural-language processing (at a level that'd be primitive today) and paper formatting. With the resources of the time, it would've been impossible to get much of that stuff perfectly right anyway.
Bell Labs wasn't full of the anti-intellectual idiots who invoke worse-is-better, lean-startup tripe today. They knew what they were doing. They knew the compromises they were working under. They bet on Unix and C rather than Lisp machines, and they were right. In 2014, thanks to our ability to stand on the shoulders of giants that were built using C, we have machines that can efficiently run code in pretty much any language, so the C programmers and the Lispers have won. At least, on that front.
However, the "worse is better" lesson doesn't apply nearly as well in 2014. We can do about 500,000 times as much computation per dollar as we could in 1991. That's 500,000 times (at least!) as many opportunities for things to go wrong. A bug that happens once per 100 billion operations used to be negligible and now it's often not.
Unfortunately, we have an industry beset by mediocrity, in which commodity developers are managed by commodity business executives to work on boring problems, and low software quality is simply tolerated as something we'll always have to deal with. Instead of the knowing compromise of Bell Labs, "worse is better" has evolved into the slipshod fatalism of business people who just assume that software will be buggy, ugly, hard to use, and usually discarded after about 5 years. Yet we're now in a time where, for most problems, we can affordably do them correctly and, because things happen so much faster now, we often put ourselves and our businesses at serious risk if we don't.
What was good enough in the 80s is not good enough in 2014. Our views about good enough changed. For example, security requirements are much higher today.
The question is not if Better or Worse is better. The question is what is good enough?
It's not clear to me that this is the case anymore. The whole startup thesis is that a small, excellent team can outperform vastly larger teams of average people. This is the opposite of how it was in the 80s.
note: I checked that screenshot on another machine and it didn't seem so bad.... so maybe this is also something strange with the monitor / res I'm using... still thought I'd mention it.