The problematic culture of “Worse is Better” (pchiusano.github.io)
126 points by jamii on Oct 13, 2014 | 116 comments

> “Worse is Better”, in other words, asks us to accept a false dichotomy: either we write software that is ugly and full of hacks, or we are childish idealists who try to create software artifacts of beauty and elegance.

I think this criticism is slightly misattributed. Ideologues ask us to accept a false dichotomy, and ideologues usually exist on both sides of an issue. For example, there are most definitely ideologues who believe this same dichotomy exists, but believe that the only option is to choose beauty and elegance, even at the expense of practicality. These are the people who (for example) wish Lisp machines had won and that we lived in a monoculture of only Lisp.

I like C++ because I think it's useful, but I'm not a Worse is Better Ideologue. For example, I like Rust as a potential "better is better" replacement for a lot of C++'s use cases. I like Rust particularly because it seriously addresses nearly all of the practical advantages of C++. Many "Better is Better" ideologues would rather just hand-wave away the negatives of the compromises required to achieve their vision (like mandatory GC).

Rust proves that "better is better" design can address real-world practical challenges, while also providing an escape hatch back into a more primitive "Worse is Better" kind of world (unsafe blocks). It will be interesting to see if this approach gains the traction that I hope it will.

I think "worse is better" vs. "better is better" is best understood as a continuum rather than a black-and-white dichotomy. The popular impression of Rust is indeed better-is-better as you say, but many folks coming from Haskell think of Rust as a worse-is-better language, as it has control flow, impurity, strictness, no higher-kinded types, method notation, object types (as opposed to first-class existentials), curly brackets, etc. The truth is that they're both right: Rust is pragmatic where it needed to be and novel where it needed to be.

I think "worse is better" vs. "better is better" is fundamentally business-driven: does your product deliver value through compatibility/familiarity or by doing things in a technically better way than the competition? Both have been successful, and choosing the right one is basically a question of business judgment.

Great point, just a nit: Rust does not have mandatory GC.

Yes, that's what I was trying to say, sorry if it was unclear.

Some "better is better" ideologues will make mandatory GC a part of their vision despite its costs. Rust has taken a different approach and come up with a "better is better" design that does not make this compromise.

It doesn't even have optional GC.

Arguably, it has mandatory manual GC.

"Other professions, like medicine, the law, and engineering, have values and a professional ethic, where certain things are valued for their own sake."

[citation needed]

I can't parse what exactly this is supposed to refer to. Doctors treat the patients they get - and frequently wind up with no choice but to use treatments that are only barely worse than the disease (see the history of chemotherapy drugs, for instance). Outside of legal academia, the legal profession is always working with imperfect information, imperfect systems, imperfect people making the calls (ffs, I feel a Law and Order episode breaking out)...

To me, the giant disconnect is that we've still got two threads of thought still mixed together under "Computer Science": the actual science-y bit, and the "shovel bits from A to B" software construction part. It's as if materials science, structural engineering, and construction management were all lumped together. Putting a new sidewalk in does not require the development of an entirely new method for making concrete.

> I can't parse what exactly this is supposed to refer to.

Oh come on. You know what he's trying to say, you're just saying he's making an over-generalization and that he's wrong. If you want to say that, say it. Don't pretend like it's so incomprehensible you can't parse the sentence.

How long did it take these three different fields to fully separate and have their own concentrations in universities? That may give us a timeline on how long it could be in the computer science, software engineering/architecture (does such a concentration exist involving architecture?), and software development methodology (project management... which is almost nonexistent in academia) fields.

Computer science used to be quite separate from application development and the nearest it would come to actual software was low level OS componentry. Any conglomeration is more recent.

Software is different from laying concrete. When you lay concrete, you do the same thing over and over again. In software, on the other hand, we try to automate any recurring task. We are trying to stand on the shoulders of our predecessors, and in turn allow those that come after to reach even higher.

Discussing the meme "worse is better" is difficult because there are different interpretations and usages of that phrase. It's description vs prescription. The blog author chose the prescription interpretation. I can't tell if the author unknowingly did this because he did not see a separate evolution of that meme.

First interpretation is the descriptive usage. We could explicitly prefix the meme and qualify it as RGWIB (Richard-Gabriel-Worse-is-Better). The original label was the observation that "simpler" software was more successful than more full-featured software with ambitious goals. Richard's thesis wasn't about "hacks" but about small and simple things that satisfy users and build momentum.

Second interpretation is the prescriptive (or self-justifying) stance which I might call HOBWIB (Hacks-Ok-Because-Worse-is-Better.) This appears to be what the blogger is complaining about.

However, HOBWIB isn't Richard Gabriel's thesis. That "worse is better" has taken on a life of its own and been repurposed by others who are unaware of RG's original meaning is just a circumstance of adopting snappy soundbites. Whatever behavior the author is complaining about would exist whether the exact phrase "worse is better" existed or not.

>“Worse is Better”, in other words, asks us to accept a false dichotomy: either we write software that is ugly and full of hacks, or we are childish idealists who try to create software artifacts of beauty and elegance.

That label does not have that power over us. For example, we have a label for certain human behavior and call it "passive aggressive." The existence of that phrase did not force us to choose whether to be passive aggressive or not. Likewise, thinking that the existence of 3 words "worse is better" is forcing us into a dichotomy of bad vs good design is flawed analysis.


RPG's WIB said nothing about bad design. C didn't do better than lisp because it was a hack. It was an incredibly clean design, as clean as the original lisp. RPG was comparing two clean designs, and trying to describe the evolutionary properties of one that made it more fit in its environment (ie us) than the other. The biggest such property is simplicity of implementation. (Later C compilers have gotten more complex and ++, but a modern compiler wouldn't have been as competitive in an earlier world where C wasn't already adopted: http://www.ribbonfarm.com/2011/09/23/the-milo-criterion)

It does seem to me that the author has unfortunately encountered a more negative interpretation of the meme. Personally I've read the original RGWIB essay and took away a couple interpretations. First, a simpler implementation is generally better for developers. Second, since developers are tasked with maintaining this code, and getting a mental grasp on it, they're better off imposing their will to a certain extent on users, rather than pile on every feature requested. Now, the second point can be taken to an extreme in the market by just pushing a bunch of hacks out quickly and forcing users to deal with that. But I think that's the essential difference between HOBWIB and RGWIB: the former is about the pure imposition of one's will upon users while RGWIB is about the imposition of will in the name of simplicity. Personally, I think RGWIB is a force for good, but perhaps needs a new meme to distinguish itself.

I can argue with many points made in this article, but there's one assumption at the very core that I think is false: that we know how to do it better.

There have been many ideas in software development floating around in the last couple of decades, from pure functional programming to interactive programming and programming by example. All of these approaches -- and the many others I haven't mentioned -- have explored interesting and possibly promising future directions, but all are yet to demonstrate consistently superior results, across various domains, to the "broken" system we have today.

We are still exploring, and maybe some of the ideas people play with today will serve as the seeds of future, revolutionary development procedures. But it's not like we know today how to do it significantly better even if we wanted to do away with the "old" ways right now.

I read it more as frustration with the commonly expressed opinion that revolutionary attempts cannot succeed and that only incremental evolution is worth attempting. Not arguing that everything should be revolutionised, just that there is room for attempts and that they should be evaluated on their merits rather than dismissed out of hand.

I suspect a lot of programmers develop a kind of immune response to talk of revolution, simply because it so often comes from people who haven't understood the problem, are not aware of the history or choose to ignore the realities of compatibility and market momentum.

I think that at least part of the problem is this: People find an approach fits them better than all others (of which they are aware). This could be based on their problem space, or personal taste, or both. But they mistake "fits me the best" for "fits programming the best". Then they can't understand why everyone doesn't do it that way. And since they're sure that it's the best way to do all of programming, the only possible explanations for why others don't do it that way is ignorance or incompetence.

Yep, in my experience with my own software, today's "revolutionary, way better designed approach" is tomorrow's "hacky awfulness that needs a revolutionary, way better redesign", ad infinitum. It's much cheaper to accept and incrementally improve an imperfect solution. But I recognize the possibility that this is purely a personal experience and better developers than I really do know "how to do it better".

> better developers than I really do know "how to do it better".

They don't. They're just selling something. Not to say some approaches aren't better than others; it's just that the only way to know which approach is really better is to implement the same problem space in both and compare. Anybody telling you that X is better than Y at Z without personally using both X and Y to do Z is just trying to sell you something. Beware.

It's also why the best test when deciding on a programming language or framework is to look at what others have actually created with it.

"It's also why the best test when deciding on a programming language or framework is to look at what others have actually created with it."

To a point. If everyone followed that rule, no one would ever build anything in anything new. At which point, looking at what others have actually created with it would merely be a test of the longevity of the programming language or framework.

I didn't say they do know how to do things better, just that I'm not going to rule out that they might.

I think the opposite of your equation is also true. I can't say that X isn't better than Y at Z without personally using both X and Y to do Z.

Calling people out as #worseisworse is almost certainly going to be ineffective - attacking people just makes them defensive and argumentative. Instead, can we find useful principles to steer us away from the two bogeymen (hack-it-till-it-works vs useless-ivory-tower-wankery)? Some ideas:

The real world exists. Successful solutions usually come from actually understanding the problem and all its constraints. Solutions that are dismissed as overly academic tend to fall afoul of solving just the immediate problem and ignoring eg compatibility, switching costs, accessibility to beginners. Similarly, the Linux community spent a long time burying its head in the sand w.r.t. usability and appearance. Sure, people shouldn't care how pretty your UI is. But they do and you can't change that by burying your head in the sand.

Everything has a cost. The benefits of your new solution and the switching cost have to outweigh the pain of the old solution by a large margin. The original 'worse is better' was really an observation that simple solutions that do half the job are often cheaper than a hugely complicated solution that can do everything. If you spend every day coming up with better ways to make widgets, it's easy to believe that everyone wants to have as much widget-customising power as possible and is willing to invest time in learning to get that power. If you actually talk to your users you might find that only having three kinds of widget isn't annoying enough to make them spend time learning something better.

I agree with this except possibly one thing.

Saying "Sure, people shouldn't care how pretty your UI is." mischaracterizes user interface design as being about prettiness rather than interaction. But that might in fact be your very point, and you might actually be talking about the mischaracterization by others.

I was thinking of the fact that beautiful software is perceived as more usable, all else being equal (http://www.ergonomicsclass.com/wp-content/uploads/2011/11/Tr...). This is irrational but not controllable. Unfortunately many good solutions lose out in the market because their developers make the mistake of expecting users to be rational econs, rather than accepting reality and working around it.

What this man calls "worse" actually means "something that is better in ways I fail to understand in my simplistic reduction of the complex world".

I love Lisp. I used it a lot for creating code that elegantly modifies itself.

But my entire software company succeeded because of languages like C/C++ and python, not because of Lisp or any "functional programming language of the week".

C gives you raw power that nothing else does. The fact that we could use C within C++ proved very useful so many times.

Some people believe that forcing other people not to use things like pointers is a good thing, it is "better".

For me it is the same as forcing people not to use sharpened edges on knives for their own good.

Yes, it is better in the sense that people won't cut themselves with knives. But it is worse in other areas too.

And if you let people choose, they will choose to continue using their knives until something genuinely better appears.

This is exactly what people like this man can't stand: people choosing on their own to use something they don't like, so they want to force their "better" way (my way or the highway).

If you have something better, well, show us the code instead of ranting, and you will discover that making anything people actually want to use is way harder than ranting in a blog.

"For me it is the same as forcing people not to use sharpened edges on knives for their own good. Yes, it is better in the sense that people won't cut themselves with knives."

People cut themselves far worse trying to use a knife that's too dull for their task than a knife that's too sharp. Sharp knives are only safer when you're not using the knife (more likely to be cut by a sharp knife in a drawer than a dull knife in a drawer, &c).

The viewpoint the author seems to expound reeks of second system syndrome. That is, he seems to argue that incremental evolution is broken, since the state at any given point is suboptimal given what someone building from scratch would build, having all the lessons of hindsight with them.

Of course it is. Yet computer science history is full of overly ambitious projects (Plan9, Vista, Lisp) falling by the wayside as inferior solutions which were more incremental continued to chug along. The author didn't stop to consider why those failed, only to blame a myopic community; as if Lisp's superiority would have proven itself if only more people had given it a chance.

Incremental changes are how we progress. We take the lessons we have learned from what we're doing now, and then try something a little bit different. We can't try and redesign everything at once, so we pick a few things, and other compromises get left in. Compromise is an essential requirement to getting anything big done.

> he seems to argue that incremental evolution is broken

I think he argues that this sort of absolutism is broken.

The author isn't proposing that we discard incremental change, just the uncritical assumption that incremental is the only reasonable change.

See his analogy with portfolio theory: he's not even challenging incrementalism as the default, any more than he'd suggest putting 90% of your money into emerging markets.

It seems more like a historical observation than an uncritical assumption, that incremental change has shown itself to be more successful. The author's claim seems to be that incremental change has been more successful historically because it has more mindshare. I think that causality is backward.

The one-line summary: "Basically, no one seems to grasp that when stuff that's fundamental is broken, what you get is a combinatorial explosion of bullshit."

Exactly. A few examples.

- C's "the language has no idea how big an array is" problem. Result: decades of buffer overflows, and a whole industry finding them, patching them, and exploiting them.

- Delayed ACKs in TCP. OK idea, but the fixed timer was based on human typing speed, copied from X.25 accumulation timers. Result: elaborate workarounds, network stalls.

- C's "#include" textual approach to definition inclusion. Result: huge, slow builds, hacks like "precompiled headers".

- HTML float/clear as a layout mechanism. Result: Javascript libraries for layout on top of the browser's layout engine. Absolute positioning errors. Text on top of text or offscreen.

- The UNIX protection model, where the finest-grain entity is the user. Any program can do anything the user can do, and hostile programs do. Result: the virus and anti-virus industries.

- Makefiles. Using dependency relationships was a good idea. Having them be independent of the actual dependencies in the program wasn't. Result: "make depend", "./configure", and other painful hacks.

- Peripheral-side control of DMA. Before PCs, IBM mainframes had "channels", which effectively had an MMU between device and memory, so that devices couldn't blither all over memory. Channels also provided a uniform interface for all devices. IBM PCs, like many minicomputers, originally had memory and devices on the same bus. This reduced transistor count back when it mattered. But it meant that devices could write all over memory, and devices and drivers had to be trusted. Three decades later, when transistor counts don't matter, we still have that basic architecture. Result: drivers still crashing operating systems, many drivers still in kernel, devices able to inject malware into systems.

- Poor interprocess communication in operating systems. What's usually needed is a subroutine call. What the OS usually gives you is an I/O operation. QNX gets this right. IPC was a retrofit in the UNIX/Linux world, and still isn't very good. Fast IPC requires very tight coordination between scheduling and IPC, or each IPC operation puts somebody at the end of the line for CPU time. Result: elaborate, slow IPC systems built on top of sockets, pipes, channels, shared memory, etc. Programs too monolithic.

- Related to this, poor support for "big objects". A "big object" is something which can be called and provides various functions, but has an arms-length relationship with the caller and needs some protection from it. Examples are databases, network stacks, and other services. We don't even have a standard term for this. Special-purpose approaches include talking to the big object over a socket (databases), putting the big object in the kernel (network stacks), and trying to use DLL/shared object mechanisms in the same or partially shared address space. General purpose approaches include CORBA, OLE, Google protocol buffers and, perhaps, REST/JSON. Result: ad-hoc hacks for each shared object.

Blaming UNIX for the virus industry is a stretch. A per-user protection model is better than none at all.

The anti-virus industry is a result of DOS and Windows not having a unix permission model. Running as the user with highest privileges was the norm.

Now that a unix permission model is the norm, viruses are comparatively gone and replaced by malware that simply tricks the user into installing it. No permission model will help you against this. As a partial result, now we see things like iOS where we remove control from the user, or OS X where we try to make it inconvenient to be duped into giving access.

There are certainly still exploits that don't require duping the user, but the anti-virus industry certainly wasn't established based on these.

"No permission model will help you against this."

That's not really true. No permission model will be 100% effective, but a more fine-grained permission model might lead to more users saying "Um, no, mysteriously executable pornography, I don't want to give you my bank records and the ability to email my friends."

Like how (non-tech) people pay attention to the permissions required by Android and iOS apps they want to install?

Let's imagine that I have privilege grouping sub-users, something like name.banking, name.work, etc. Now my work files can't see my banking unless a window pops up going "Would you like Experimental Thing for Work to have access to name.banking?"

I think being able to explain to the computer how my data is grouped, and access patterns in it, is more natural for users than most of the security models we have today.

It's also much easier to have two copies of the browser load, depending on if I'm invoking it through name.banking or name.general. And much easier to explain to grandma you do banking when you use name.banking and you look at cat photos in general.

Grandma isn't stupid; she just doesn't understand how technology works. Making permissions based around how she categorizes her information and how she divvies up tasks is more natural for her than insisting security only work if she understands how computers work.

I said it's not going to reach 100%, probably whatever we do. Probably there are ways to improve on the Android and iOS permission models - there was talk in another thread about "deny this permission but fake it" options, there are probably ways things can be presented better, there might be ways permissions can be divided better, &c... Manifestly, there exist plenty of users that don't pay enough attention to what permissions they're granting. I wouldn't be surprised to learn that it's an improvement over user behavior patterns on user-account-only permission systems, though.

The UNIX world was hardly a model of security until somewhere around 2000. Both HP-UX and Irix of that era could be hilariously insecure, with it being utterly trivial to break through the permissions model.

Thanks to UNIX boxes being the bulk of the always-on systems attached to the internet at that time, they presented most of the attack surface, and consequently spawned an industry of people attempting to protect them.

>viruses are comparatively gone

No they aren't. There's tons of them. You don't need to be admin for a virus to be a problem. All the data a user cares about is owned by that user anyways. There's plenty of "haha I encrypted your files, pay me if you want to access them ever again" extortion viruses.

All the instances of these that I've seen rely on social engineering to do their thing, though (we had a teacher at my school fall victim to one recently, which is moderately entertaining, when you have up-to-date backups, given a bunch of read/write network shares), as opposed to regular files/executables 'infected with a virus', which is how I generally think of viruses in the traditional sense.

Lots of viruses used social engineering since the start. The only difference now is that once run, it doesn't have admin privileges, so it is harder for it to make itself a permanent fixture on the system.

> "A per-user protection model is better than none at all."

Well, yes. "Worse is better"...than none at all. All of your parent's examples are better than none at all :)

Haha, very nicely put. However, what I was getting at was that DOS/Windows permissions were even worse, and not better for it.

Not for the vast vast majority of users. An OS or programs are by far the easiest things to obtain; modern Windows or Mac also ship with rescue images. What I'll miss from my drive are the documents I've created, work I've done, pictures, movies, etc. Per-user protection -- such as using root to install -- protects the OS or programs, but helps not at all with anything that's painful if it gets lost.

Being better does not mean it's not broken.

Being broken does not mean it's the cause of the virus industry.

./configure is not for dependencies, but for platform configuration. Consider that ./configure is used before you do a full build. A full build doesn't need dependencies; an incremental one does.

The configure script is a hack to solve another "worse": no direct way to get pertinent platform information from the C environment, like what functions are available, how big some type is, and so forth.

You can get the size of a type; the problem is you can't get it at preprocessor time. Which is the result of another problem: the C language operates in two phases, preprocessing and compiling (three, if you count linking). The preprocessing, compiling, and linking phases are walled off from each other, resulting in problems. (Preprocessing shouldn't even be a thing that you do to most source code.)

My understanding is that other languages that don't have this separation of compiling objects and linking them together can't do incremental builds, and are generally quite slow to build.

C++ is infamously slow to build. I've seen the results of profiling Clang (known as a particularly fast C++ compiler) and the preprocessor takes a big chunk of time. C compiles a fair bit faster.

Try out the Mono compiler for C# some time. It is so fast that you might as well recompile your entire project every time you change one line of code. I'd pay serious money to get that kind of performance from a C++ compiler. Tons of other compilers are really fast. JIT is fast in all modern browsers. Go compiles in a snap. Python starts running immediately.

The only other language I use with a compile time comparable to C++ is Haskell.

What are examples of these languages?

How can you build anything large, or even large-ish if you don't have compilation units?

> Poor interprocess communication in operating systems. What's usually needed is a subroutine call.

Across languages? Even if you stipulate that all languages you care about have the same notion of "subroutine call", how do you portably handle data marshaling?

Well, there's Microsoft ".NET".

Marshaling is an important, and neglected, subject in language design. Compilers really should understand marshaling as a compilable operation. In many cases, marshaling can be compiled down to moves and adds. Done interpretively, or through "reflection", there's a huge overhead. If you're doing marshaling, you're probably doing it on a lot of data, so efficiency matters.

For Google protocol buffers, there are pre-compilers which generate efficient C, Go, Java, or Python. That works. They're not integrated into the language, so it's kind of clunky. Perhaps compilers should accept plug-ins for marshaling.

Most other cross-language systems are more interpretive. CORBA and SOAP libraries tend to do a lot of work for each call. This discourages their use for local calls.

Incidentally, there's a fear of the cost of copying in message passing systems. This is overrated. Most modern CPUs copy very fast and in reasonably wide chunks. If you're copying data that was just created and will immediately be used, everything will be in the fastest cache.

Fortunately, we can generally assume today that integers are 32 or 64 bit two's complement, floats are IEEE 754, and strings are Unicode. We don't have to worry about 36-bit machines, Cray, Univac, or Burroughs floats, or EBCDIC. (It's really time to insist that the only web encoding be UTF-8, by the way.) So marshaling need involve little conversion. Endian, at worst, but that's all moves.

> Marshaling is an important, and neglected, subject in language design. Compilers really should understand marshaling as a compilable operation.

I generally agree with what you're saying.

This is one of the things COBOL, of all languages, generally got right: You had a Data Definition language, which has been carried over to SQL, and the compiler could look at the code written in that language to create parsers automatically. Of course, COBOL having been COBOL, this was oriented to 80-column 9-edge-first fixed-format records with all the types an Eisenhower-era Data Processing Professional thought would be important.

The concept might could use some updating, is what I'm saying.

> Most modern CPUs copy very fast and in reasonably wide chunks.

And most modern OSes can finagle bits in the page table to remove the need for copying.

> strings are Unicode

By which you mean UTF-32BE, naturally. ;)

> It's really time to insist that the only web encoding be UTF-8, by the way.

This might actually be doable, if only because of all the smilies that rely on Unicode to work and the fact UTF-8 is the only encoding that handles English efficiently.

> And most modern OSes can finagle bits in the page table to remove the need for copying.

That tends to be more trouble than it's worth. It usually means flushing caches, locking lots of things, and having to interrupt every CPU. Unless it's a really big data move (megabytes) it's probably a lose. Mach did that, and it didn't work out well.

Someday the heap will be a protocol.

"What's usually needed is a subroutine call" does not imply it can only be satisfied with a subroutine call, but that an abstraction that looks and feels like a subroutine call is preferable to one that looks like a stream of bytes.

And the point is exactly to address data marshalling, which is a hard enough problem that reducing the number of applications that have to independently solve it would be a great benefit.

That most developers don't even tend to get reading from/writing to a socket efficiently right (based on a deeply unscientific set of samples I've seen through my career) implies to me that we really shouldn't trust developers to get data marshalling right.

Worst case? Your app falls back on using said interface to exchange blocks of raw bytes if the provided model doesn't work for you.

I agree with you regarding all of the above, except for the DMA bit. That has to do with crappy drivers causing crashes. Without direct DMA you'd have horrendous performance issues. It is not a transistor count issue.

These days pluggable devices (SATA, USB) don't get DMA access. Only physical cards do (PCIe, etc.) -- again because of performance issues.

Some machines have had an MMU or equivalent device between peripheral and memory, to provide memory protection. IBM's channels did that. Some early UNIX workstations (Apollo) did that. But it has sort of disappeared.

Both FireWire and PCIe over cable expose memory via a pluggable interface. In the FireWire case, it's not really DMA; it's a message, but the ability to patch memory is there. FireWire hardware usually offers bounds registers limiting that access. By default, Linux allowed access to the first 4GB of memory (32 bits), even on 64-bit machines. (I once proposed disabling that, but someone was using it for a debugger.)

In fact, IOMMUs reappeared in the last few years in the form of VT-d. On systems that support it, DMA attacks should not be possible.

I don't know about IBM channels. The PCIe root has the ability to restrict transfers to a certain range, and depending on system configuration there are remapping registers that translate between PCIe addresses and host memory addresses -- which you can fudge to remap things however you like.

FireWire was basically external PCIe (before there was PCIe), and you could do DMA; there was a proof of concept of someone using an early iPod to read/write host memory.

You can't with things like eSATA or USB. There is no DMA capability for the external device to exploit. The host controllers (EHCI and the like) are the ones doing the DMA; you can't write directly to memory with those. Of course, USB is exploited by doing things like descriptor buffer overflows.

I believe PCIe DMA is a message as well.

I feel like LINQ is attempting to do something for the "big objects" problem. It's currently really for database access, but I've seen a bit of talk about extending it to other resources, e.g. heterogeneous parallel processors. I could see an interesting future with tools like LINQ providing simple, backend-agnostic accessors to computing resources: network access, database access, file access, etc.
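A rough sketch of that idea, with invented names (this is not LINQ's actual API, just the shape of a backend-agnostic accessor): the caller writes one declarative query, and the `Query` wrapper, not the caller, is the thing a backend would translate to SQL, an HTTP call, or a file scan.

```python
from dataclasses import dataclass
from typing import Any, Callable, Iterable


@dataclass
class Query:
    """A hypothetical LINQ-like query over any iterable source."""
    source: Iterable[Any]

    def where(self, pred: Callable[[Any], bool]) -> "Query":
        # Lazily filter -- a real backend could push this down to the source.
        return Query(x for x in self.source if pred(x))

    def select(self, proj: Callable[[Any], Any]) -> "Query":
        # Lazily project each element.
        return Query(proj(x) for x in self.source)

    def to_list(self) -> list:
        # Only here does the query actually execute.
        return list(self.source)


# The same query shape works whether `users` came from a list, a DB
# cursor, or a network stream -- the accessor knows the backend.
users = [{"name": "ada", "age": 36}, {"name": "bob", "age": 17}]
adults = Query(users).where(lambda u: u["age"] >= 18) \
                     .select(lambda u: u["name"]) \
                     .to_list()
print(adults)  # ['ada']
```

The in-memory version above is trivial, but the point is the interface: swap the source and the caller's code doesn't change.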

Spot on. I couldn't agree more.

I was heavily critical of the author's original rant about CSS, and my citation of worse-is-better probably at least partially inspired him to write this post (http://pchiusano.github.io/2014-07-02/css-is-unnecessary#com...).

However, he completely misunderstood my point to suggest that worse-is-better is a conscious design quality. In no way did I mean that, and I don't believe Richard Gabriel meant that either. Worse-is-better is about the reason certain solutions win in the market; it's about an evolutionary trait, not a design philosophy.

Look at it this way. If there are 100 possible technical solutions to a given computing problem, the ones that solve the problem more comprehensively are naturally going to tend to be more complex; this complexity comes with an adoption cost, and that cost works against the likelihood of adoption. Furthermore, when you are talking about something that solves the scope of problems which CSS solves, there is no way you can just sit down and design a better system; it will need to go through many iterations to solve all the things that CSS solves (which no single human being is comprehensively aware of, by the way). In order to see that kind of investment, a system needs buy-in from downstream users over a long period of time.

So the strawman that the author sets up is the idea that someone is out there selling a worse solution on purpose because of this meme, which is simply an observation of how technical adoption markets play out. Of course such foolish people may be out there, but no intelligent developer sets out to create something that is deliberately worse. Rather, each tradeoff is considered in its specific contemporary context, with the imperfect information available. Whether a solution gains traction and wins in the marketplace has nothing to do with the subjective qualities of "worse" or "better", but rather is a confluence of the state of the market at various points in time: how well it solves immediate problems, how well it works with existing tech, how easy it is to adopt, and of course some amount of hype loosely related to the aforementioned.

What doesn't play a huge role is how ugly the evolution of this tech is going to look 20 years down the line when the entire landscape has evolved.

It's a little tiresome when a young idealist comes into a 25-year-old field (I'm guessing the author is not much older than that himself), pisses on the hard work of thousands of people pushing web tech forward bit by bit over decades, says it should be replaced wholesale because it's just rubbish, and then, when someone tells him he's welcome to try but will never get any traction in that Sisyphean task, responds that people aren't engaging in rational discussion.

> ...but rather is a confluence of the state of the market at various points in time...

> That is, we do not merely calculate in earnest to what extent tradeoffs are necessary or desirable, keeping in mind our goals and values, there is a culture around making such compromises that actively discourages people from even considering more radical, principled approaches.

If you can suspend your frustration for a moment, it seems that you and the author are both arguing for explicit consideration of the tradeoffs faced in the real world. Both sides of the field have a tendency to reduce this consideration to easily evaluated heuristics (eg "useless ivory tower wankery").

I think the author's earlier posts on generous reading apply here. There is plenty here to disagree with, but no one gains much from taking sides and fighting to the death. Rather than getting wound up, we could be having a much more valuable discussion of how market forces shape technology and how we might find ways to support and apply long-shot research.

EDIT: From the discussion you mention:

> What might otherwise be an interesting, nuanced discussion of the economics of technology adoption, network effects, switching costs, etc, is instead replaced with sloganeering like 'Worse is Better'.

> There may well be times where fighting for revolutionary change or finding some adoption story for completely new tech is a better path forward than incremental improvements. Questions like this can not be settled at the level of ideology or sloganeering...

Furthermore, when you are talking about something that solves the scope of problems which CSS solves, there is no way you can just sit down and design a better system

CSS has proved an awful system for layout, design, and practically every other domain it has attempted to address over the last couple of decades. Having worked with both for a long time, here's the fundamental difference between HTML and CSS, as I think it's a useful comparison and highlights when worse is truly better, and when it is just worse:

HTML was limited and simple by design, and has gradually improved (the original worse is better ethos)

CSS was broken by design, and hasn't much improved (maybe in a few years, with flexbox and grids).

I don't find it at all surprising that someone thinks CSS could and should be replaced, and I'm skeptical that if the best minds of our generation took another look at it, they couldn't find some far better ways to lay out content than an inconsistent and confusing box model with concepts like floats tacked on to it and lots of module proposals trying to glom on additional layout modes. I can't think of many problems CSS solves well, apart from separation of style/layout and content, and even there, at present, we have div soup for grids instead of table soup - hardly a huge improvement on what went before. Some of the problems it attempts to solve are not even real-world problems but are of its own devising (for example, the cascading style priorities in its name - what a bizarre focus for a layout language).

Perhaps the answer is a Turing-complete layout language (personally I doubt it), or perhaps it's another, more informed declarative one, but I'm quite sure we can improve on it, and the comparison to HTML is apposite, because that has stood the test of time rather well compared to its companion technologies.

It's a little tiresome when a young idealist comes into a 25-year-old field

We don't live in the best of all possible worlds, and to make it better, we sometimes have to take a step back from a local maximum and look at the bigger picture; that involves listening to 25-year-olds coming up with something better - most of the time they won't, but sometimes they will. If you find yourself not even listening, and more concerned with sunk costs, work already done, and expertise already gained, that's not healthy and not a convincing riposte.

"that involves listening to 25-year-olds coming up with something better"

That doesn't involve listening to them, it involves them putting in the work to develop something as a proof of concept, and that proof of concept evolving through feedback and collaboration into something people want to use.

To put it more crassly, people don't stop using something because others say it's shit, they stop using something when there's a better option on the table, and that better option needs to be more than just architecturally improved, it needs to be genuinely useful, and this is what people who twist "worse is better" fail to understand... legacy code sticks around when it's genuinely useful.

And to address the original author... yes, the idea with writing software is to get the job done, but if you want to work on tooling to improve that process be my guest.

That doesn't involve listening to them, it involves them putting in the work to develop something as a proof of concept, and that proof of concept evolving through feedback and collaboration into something people want to use.

Well I think it's always useful to listen - even if you disagree or think your interlocutor mistaken. Of course as you say ideas are worth a lot less than implementation, but the one point I fully agree with the original author on is this:

CSS is not best of breed, existing for 20 years does not make it good in any sense, and it is a terrible example of worse is better in the original sense. However arguing over "worse is better" is just going to end in arguing about what that vague phrase means, so I won't enter that particular rabbit-hole - I agree the original author misunderstood or has not encountered the original meaning.

In the case of CSS, what holds back adoption of alternatives is almost entirely browser-vendor inertia and the institutional barriers to producing a better solution, not some technical superiority of CSS, so what I object to in the parent comment is the implication that CSS won because it is technically superior to other layout methods and is complex because it deals with lots of complex domain problems which a 25-year-old couldn't possibly fathom. It introduces needless complexity, doesn't even properly address the domain problems (design, layout, grids, etc.), was badly designed from the start, has become even more complex with age, and I'd argue it has succeeded mostly by riding on the coattails of HTML.

Have you looked at Elm?

Only a quick glance after the previous article. I think it looks interesting - not something I would use right now, but interesting as an alternative to what we have on the web at present. I particularly liked this little page for playing with it:


His article addresses your comments about the market: that buyers are being irrational by viewing their purchasing decisions in isolation, without considering the impact on the portfolio as a whole - going full government bonds.

You're as much strawmanning his article as you're accusing him of strawmanning other points.

It's possible lots of people did put a lot of work into styling on the web, and that it's still bad and we could do better if we started over. Of course it's going to take a lot of effort to get back to where a current project is, but that's the point: people who are using current projects should also be looking at their future costs (total cost of ownership over the company/product lifetime), and take on a couple of high-risk investments if their projected long-term cost is lowered by it. They're not doing that, because people like you show up and say "Hey, it'll never get done because it's a lot of work, so let's just keep hacking on a complexity explosion."

Of course a CSS replacement isn't going to immediately replace it. That's nonsense. Rather, the author is saying that if we started now, in 5-10 years, our choice to reduce complexity would have paid off, and we'll now pull ahead of where the hackpile that is CSS would have us at that time.

But if we never start, we're never going to get that growth, and have to keep hacking away at a mess forever.

> If there are 100 possible technical solutions to a given computing problem, the ones that solve the problem more comprehensively are naturally going to tend to be more complex, this complexity comes with an adoption cost, and that cost works against the likelihood of adoption.

The opposite is true. The best solutions are simple, not complex. Piling features one on top of other features to solve every problem as you encounter it gets you started faster than first thinking carefully about the problem space and then designing a solution. That's worse is better, or better: worse is quicker.

Stating that as an absolute truth is extremely naive. The cases for which that is true are the easy ones; you are very lucky if you have the choice to limit your work to things which have simple and elegant solutions. Most things that touch the real world have an irreducible complexity that you can't simplify away without destroying the core value proposition: witness Unicode.

Sure, but that's not what we are talking about here. Given two solutions that solve the same problem, the simpler one is almost always the better one. A worse is better methodology produces a complex and worse solution, whereas you were saying that it produces a simple and worse solution.

And by the way, CSS is definitely not complex because of irreducible complexity.

If you think that solving all the problems CSS attempts to solve is simple then you probably only understand 5% of CSS.

Complex problems often have simple solutions but it takes a lot longer to get there than it does to create a complex solution.

CSS is like a second draft. We can do much better.

Simple is relative. CSS is far too complicated and ad-hoc for what it does.

But coming up with a simple solution takes much longer than coming up with a complicated one; and by the time you've come up with your simple elegant solution, your competitor has already beaten you to market with something worse but was actually "good enough" and no one cares about your solution. Worse is better is not a methodology, but an observation.

I have made this longer than usual because I have not had time to make it shorter. Blaise Pascal

Plus, I had to be done in ten days or something worse than JS would have happened. Brendan Eich

I certainly agree with that.

I'm just gonna go full Poindexter and say that Richard Gabriel did try to distance himself from worse is better, at first, but then he veered wildly in the following years between the two positions.

And the fundamental idea of Worse is Better is far older than UNIX, even in the field of computer science; John von Neumann himself said that the von Neumann machine was a temporary workaround and that a better architecture would quickly replace it.

I live in a near-constant fight among the rebel factions of Better, and so far, I have to reckon, we get constantly massacred by the Empire of Worse.

I'm a big advocate of "worse is better" - in fact I put it in my bio even at the risk of scaring off potential colleagues or employers. The real meaning of "worse is better" is of course subjective and complicated. And as has been pointed out in these comments, "worse is better" is not about creating "worse" software and then marketing it as "better". It is something much more philosophical.

In my eyes "worse is better" is about the mindset of approaching a task. It is about diving right in and learning through production - without being paralysed by the idea of introducing hacks or ugly design. It is the idea that, for the moment, there isn't a need to be worried about covering every edge case or possible failure option. It persuades you to focus on something simple and easy to explain, with a single purpose or intent. It is better to produce something (anything) and see where it takes you.

It also says how important it is to embrace contribution and collaboration. How important it is to, after some threshold, release yourself from feelings of ownership.

But if I had to nail down exactly why I believe "worse" is so successful ("better"), it is because those that create "worse" software don't focus on the software - they focus on the idea. From that the software is painfully drawn. The software might suck, but I believe the ideas are better. They are more persistent, easily explored, dynamic, and shareable than software. Ideas that are good, simple, and easily taught are far more important than well-designed software. That is why they survive.

There are lots of programmers and hackers who don't believe in "worse is better". Sometimes you see them on HN with a fantastic new programming language (or something) they have designed and built in isolation - perfect in every aspect (at least to them). Nothing quite hurts like their confusion when interest dwindles and their software is forgotten. All they had seen on HN were "worse" links every day, and after years they had provided "better" - to them it is criminal that it hasn't been picked up and gained momentum.

Worse is better is not going away, and I think you can either engage yourself in it as a philosophy, or struggle.

"Sometimes you see them on HN with a fantastic new programming language (or something) they have designed and built in isolation - perfect in every aspect (at least to them). Nothing quite hurts like their confusion when interest dwindles and their software is forgotten."

I suspect you've described Rich Hickey to a tee here. /s

The author's Closing Remarks give rather short shrift to the nuanced thinking Dick Gabriel has contributed to this idea. Gabriel's "Worse is Better" page https://www.dreamsongs.com/WorseIsBetter.html concludes with:

"You might think that by the year 2000 I would have settled what I think of worse is better - after over a decade of thinking and speaking about it, through periods of clarity and periods of muck, and through periods of multi-mindedness on the issues. But, at OOPSLA 2000, I was scheduled to be on a panel entitled "Back to the Future: Is Worse (Still) Better?" And in preparation for this panel, the organizer, Martine Devos, asked me to write a position paper, which I did, called "Back to the Future: Is Worse (Still) Better?" In this short paper, I came out against worse is better. But a month or so later, I wrote a second one, called "Back to the Future: Worse (Still) is Better!" which was in favor of it. I still can’t decide. Martine combined the two papers into the single position paper for the panel, and during the panel itself, run as a fishbowl, participants routinely shifted from the pro-worse-is-better side of the table to the anti-side. I sat in the audience, having lost my voice giving my Mob Software talk that morning, during which I said, "risk-taking and a willingness to open one’s eyes to new possibilities and a rejection of worse-is-better make an environment where excellence is possible. Xenia invites the duende, which is battled daily because there is the possibility of failure in an aesthetic rather than merely a technical sense."

Decide for yourselves."

Honestly, I think it's deeper than this. Even today, we have people who intentionally seek out ugly, imperfect designs because they think they work better. We have a cultural idea (at least, I can speak for the United States; hopefully some others will chime in) that "real" work is ugly, gross, and messy. Heck, I even had a friend who chided people who played a particular version of CounterStrike because it was "too polished". We're measuring in one dimension something that's two-dimensional: work vs. not-work and beauty vs. ugliness. We steep in this cultural broth of the idea that "real work is ugly", and then wonder why the tools we decide to do "work" in are "ugly".

For the record I think that writing software commercially should only consider the bottom line. What you do in your own time is your business; that's why open source is good for innovation through revolution. Ethics for legal and medical professions is a broken analogy because all of their ethical considerations focus on outcomes for clients/patients, not the details of how their services are executed.

"Perfect is the enemy of better"

"This 'Worse is Better' notion that only incremental change is possible, desirable, or even on the table for discussion is not only impractical, it makes no sense."

It makes all kinds of sense. Consider for a second the sort-of parable of Chesterton's fence. As G.K. Chesterton wrote:

"In the matter of reforming things, as distinct from deforming them, there is one plain and simple principle; a principle which will probably be called a paradox. There exists in such a case a certain institution or law; let us say, for the sake of simplicity, a fence or gate erected across a road. The more modern type of reformer goes gaily up to it and says, 'I don’t see the use of this; let us clear it away.' To which the more intelligent type of reformer will do well to answer: 'If you don’t see the use of it, I certainly won’t let you clear it away. Go away and think. Then, when you can come back and tell me that you do see the use of it, I may allow you to destroy it.'"

It's basically that whole Spolsky thing about why you shouldn't rewrite code from scratch[1]:

"Back to that two page function. Yes, I know, it's just a simple function to display a window, but it has grown little hairs and stuff on it and nobody knows why. Well, I'll tell you why: those are bug fixes. One of them fixes that bug that Nancy had when she tried to install the thing on a computer that didn't have Internet Explorer. Another one fixes that bug that occurs in low memory conditions. Another one fixes that bug that occurred when the file is on a floppy disk and the user yanks out the disk in the middle. That LoadLibrary call is ugly but it makes the code work on old versions of Windows 95.

Each of these bugs took weeks of real-world usage before they were found. The programmer might have spent a couple of days reproducing the bug in the lab and fixing it. If it's like a lot of bugs, the fix might be one line of code, or it might even be a couple of characters, but a lot of work and time went into those two characters.

When you throw away code and start from scratch, you are throwing away all that knowledge. All those collected bug fixes. Years of programming work."

Broadly speaking: you, for ANY definition of you, are unable to design a perfect system at the first go. You will not account for all use cases, there will be edge cases you don't consider, so on and so forth. But it goes further than that: if you want to design something better than C++, you need to understand why so many people use C++. If you want to replace CSS, you need to understand why CSS is popular. If your thinking on the matter hasn't evolved much past "CSS is unnecessary" and blaming a single catchy essay for every decision you disagree with, then instead of writing essays about how everyone else is doing it wrong, maybe you should spend more time trying to understand what everyone is trying to do, why they're trying to do it and what resources they have to do it with. And THEN maybe I'll let you tear that fence down.

1) http://www.joelonsoftware.com/articles/fog0000000069.html

"If you want to design something better than C++, you need to understand why so many people use C++."

No, you don't. You just need to design something better than C++.

"If you want to replace CSS, you need to understand why CSS is popular."

No, you don't. You just need to replace CSS.

Now, having people adopt better-than-C++ and better-than-CSS might take a little bit of the psychoanalysis (marketing?) you propose. But understanding why someone uses C++ is not really relevant at all in order to design a superior replacement.

I assure you, the designers of Java did not really care why people used COBOL for so much banking code, and yet here we are, with Java having supplanted COBOL for much of that work.

Every time someone trots out Chesterton's fence, I trump it with Sturgeon's law.

Edit: heck, an even better example is JavaScript app frameworks. Do you think their designers understand why people programmed in Smalltalk or Swing or VB.NET or Delphi? No. Judging from their approach and idiom, obviously not. And yet here we are.

""If you want to design something better than C++, you need to understand why so many people use C++." No, you don't. You just need to design something better than C++."

And how do you define "better than C++"? C++ didn't get to be as popular as it is for nothing. It got to be as popular as it is because it is a very good tool for solving certain classes of problems. If you don't understand why people still choose C++ for new projects, you're going to have a tough time building something that will change those decisions. And if you think people only choose C++ for new projects because they're idiots, frankly, you're the idiot.

"Everybody is an idiot" is a much less likely hypothesis than "I'm missing something". Before you throw out a popular tool whose appeal you don't understand, you would be well advised to spend some time comprehensively falsifying "I'm missing something". Hence, "If you want to replace X, you need to understand why X is popular."

It wasn't my example. But there is no a priori reason why "something better than C++" must be designed by someone with a deep understanding of C++. This holds for anything you might put in place of C++:

- must someone who designs a car understand why people ride horses?

- must someone who designs a phone understand why people use smoke signals?

- must someone who designs a gun understand why people use swords?

Sometimes, the vast majority of people are just wrong. Look at people building McMansions with complicated roof lines, useless "bump-outs" and other crap details, when any thinking building professional knows this is wrong.

"Now, having people adopt better-than-C++ and better-than-CSS might take a little bit of the psychoanalysis (marketing?) you propose."

I don't mean marketing, I mean actually designing something better. A lot of people don't actually know what better means. It's like the Betamax myth. A lot of people believe that Betamax was the technologically superior format, and that it lost because of other reasons. This is bunk. Betamax's edge in picture quality over VHS, all else being equal, was slim to nonexistent. But Betamax was locked to a relatively fast tape speed to keep quality from degrading. VHS let people run at much slower tape speeds in order to fit more video on the same tape. Betamax allowed you to record about an hour's worth of video programming on a standard tape. VHS originally allowed two, and then they just said the heck with it and let you record four hours of bad-looking video. It turns out, though, that it's far more important to most people to have the ENTIRETY of a movie (or, in the case of a four-hour tape, an NFL game) than it is to have the part of it you recorded look pristine.

The lesson everyone draws from Betamax is that we can't have nice things and technological superiority is trumped by other considerations. The lesson everyone SHOULD draw from Betamax is that to win, you have to be better at the right things to actually be better.

Betamax and VHS are incrementally different, not radically different as the blog post discussed. Radically different would be VHS vs. streaming video, and I suspect there is no technological overlap between the two. Incrementalists would have us stream video to tape decks, or something.

>Now, having people adopt better-than-C++ and better-than-CSS might take a little bit of the psychoanalysis (marketing?) you propose. But understanding why someone uses C++ is not really relevant at all in order to design a superior replacement.

The above makes no sense. Understanding why they use C++ is the first and most basic step in designing a REPLACEMENT.

If what you create doesn't cover their use cases then it's not a replacement, it's just a new language.

So the web can't be a replacement for CICS because it wasn't designed as such. Gotcha.

No, you didn't really get anything. You just put words in my mouth with a quick, snarky response that misunderstands what I wrote.

Whether your replacement handles the use cases people need the old product for intentionally or unintentionally (in the case of the web vs CICS) doesn't matter.

What matters is that your design DOES handle those use cases and those needs.

You can obviously end up designing something as a replacement for X without specific intent or even knowledge of X.

But you can't design something as a replacement for X if it doesn't handle the needs that people use X for.

That's a completely different point than the one you made originally. If you want to avoid being misunderstood, then make the point you actually intend - "a C++ replacement must function as C++ does" instead of the one you claim - "understanding C++ is the first step in replacing C++"

How do you propose "must function as C++ does" happens, if not through understanding C++'s uses?

As I said, it can also be done unintentionally (you create a new language without studying how C++ is used, and it gets adopted in place of C++), but this is far less likely and quite random.

If one really wants to create a replacement for a language, he should very much study the language he wants to replace, and find what he needs to provide in his new language and what he can improve.

The thing you added - that "no, he can just create a replacement" without needing to study the previous language - might very well be possible in theory, but it's very improbable in practice.

If you want adoption from C++ users, in projects where C++ is used, and for the kinds of stuff C++ excels at, you pragmatically need to study C++ and how it's used, period. That was the case for Rust, for D, for Java and C# earlier, and of course for C++ itself (which studied C with exactly that intent). It was not the case for Go, and that's one reason why Go (by Pike's own admission) failed to gain traction among C++ users.

Of course success is not guaranteed, but merely making a "superior language" is not a way to get C++ users off C++ (or any other language).

For the vast majority of people, a car functions as a horse used to before there were cars -- it functions better, in fact. Yet car designers are not required to be intimately familiar with horse care.

You are pointing out marketing issues. I agree that if you don't actually have a better C++ than C++, then you will have difficulty convincing most C++ users that they should switch.

By the same token, if you actually do have a better C++ than C++, it will be evident in the adoption of the technology.

Also at this point, if Java, Go, Objective-C, D, and C# have not convinced the current user base of C++ to use other things, then perhaps it is no more possible or necessary to convince these remaining users to switch than it is to convince modern horse riders that they shouldn't bother to ride, own, or breed horses. For them there cannot be a "better C++ than C++."

There's a pretty big difference in that the web has far more features.

Unless you intend for C++'s replacement to have even more features than C++, which hardly seems possible.

Must a replacement perform all of the functions of the thing that it replaces in order for it to be a replacement? Let's interview our good friend the vacuum tube to find out more.

It's clearly possible. C++ lacks pattern matching, for instance. Whether adding any given feature is advisable is another question...
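For the curious, a minimal sketch of the feature being referenced, in Rust; the enum and function names here are made up for illustration:

```rust
// Exhaustive pattern matching on a tagged union -- the kind of
// language feature C++ (as of 2014) has no built-in equivalent for.
enum Shape {
    Circle(f64),    // radius
    Rect(f64, f64), // width, height
}

fn area(s: &Shape) -> f64 {
    // The compiler checks that every variant is handled.
    match s {
        Shape::Circle(r) => std::f64::consts::PI * r * r,
        Shape::Rect(w, h) => w * h,
    }
}

fn main() {
    println!("{}", area(&Shape::Rect(3.0, 4.0))); // prints 12
}
```

The compile-time exhaustiveness check is what a C++ `switch` over an enum tag plus a `union` doesn't give you.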

Isn't Twitter also an example of worse is better? Why send the people willing to engage with your argument to Twitter, then?

I always associated worse is better with VHS vs. Betamax rather than incrementalism, i.e. utility realised > potential utility.

But seriously, it takes some nerve to write a blog post against Worse is Better that's in every respect worse than the paper that originally codified Worse is Better.

Oh wait, no, is this lack of quality justifiable because this content-free 2 minutes read is more efficient and easier to write than a serious exploration of the consequences of whatever the author was ranting about?

I'll probably get shadow banned for agreeing with this, but who cares anyway.

The article ignores leaky abstraction. That's all there is to say. Now let's have some marketers tell us why that's a good thing, and we can get back on this irony loop, this irony loop, this irony loop.

"Worse Is Better" has evolved into a monster.

Originally, it looked like this:

* MIT philosophy: never compromise on correctness, even in bizarre corner cases. Aim for conceptual beauty to a programmer.

* New Jersey (Bell Labs) philosophy: compromise on correctness for simplicity or performance.

In 1985, six years before "Worse Is Better" was originally written, the "New Jersey" attitude was probably more useful. Most people, if they wanted to write acceptably performant software, had to do it in assembly. C was less of a leap, conceptually and in terms of average-case performance, than Lisp was. People who'd been writing assembly programs for years could learn how to write performant C. Writing performant Lisp would be much harder. A contemporary Common Lisp executable is at least 40 MB (obviously, that wasn't the case in the 1980s); at that time, 1 MB was a powerful machine. "Worse is better" worked in the 1970s and '80s. If every piece of computing work had to be perfect before it could be shipped, we'd be far behind where we are.

Also, quite a number of the original Unix programs were for natural-language processing (at a level that'd be primitive today) and paper formatting. With the resources of the time, it would've been impossible to get much of that stuff perfectly right anyway.

Bell Labs wasn't full of the anti-intellectual idiots who invoke worse-is-better, lean-startup tripe today. They knew what they were doing. They knew the compromises they were working under. They bet on Unix and C rather than Lisp machines, and they were right. In 2014, thanks to our ability to stand on the shoulders of giants that were built using C, we have machines that can efficiently run code in pretty much any language, so the C programmers and the Lispers have won. At least, on that front.

However, the "worse is better" lesson doesn't apply nearly as well in 2014. We can do about 500,000 times as much computation per dollar as we could in 1991. That's 500,000 times (at least!) as many opportunities for things to go wrong. A bug that happens once per 100 billion operations used to be negligible and now it's often not.
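A back-of-the-envelope sketch of that scaling argument; the throughput figures below are illustrative assumptions, not numbers from the comment:

```rust
// Back-of-the-envelope: how often does a 1-in-100-billion bug fire?
// All throughput numbers here are illustrative assumptions.
fn main() {
    let bug_rate = 1.0e-11;         // one failure per 100 billion operations
    let ops_then = 1.0e6;           // assumed ops/sec, early-'90s machine
    let ops_now = ops_then * 5.0e5; // "500,000 times as much computation"

    let mtbf_then = 1.0 / (bug_rate * ops_then); // ~100,000 s (about a day)
    let mtbf_now = 1.0 / (bug_rate * ops_now);   // ~0.2 s

    println!("then: one failure every ~{:.0} s", mtbf_then);
    println!("now:  one failure every ~{:.1} s", mtbf_now);
}
```

Under these assumptions, the same failure rate goes from roughly one hit per day to several hits per second.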

Unfortunately, we have an industry beset by mediocrity, in which commodity developers are managed by commodity business executives to work on boring problems, and low software quality is simply tolerated as something we'll always have to deal with. Instead of the knowing compromise of Bell Labs, "worse is better" has evolved into the slipshod fatalism of business people who just assume that software will be buggy, ugly, hard to use, and usually discarded after about 5 years. Yet we're now in a time where, for most problems, we can affordably do them correctly and, because things happen so much faster now, we often put ourselves and our businesses at serious risk if we don't.

It is about economics. The just-good-enough wins against excellence because it is cheaper. By the time excellence finally pays off, the good-enough competition has already been profitable for years and has solved five more problems.

What was good enough in the 80s is not good enough in 2014. Our views about good enough changed. For example, security requirements are much higher today.

The question is not if Better or Worse is better. The question is what is good enough?

> The just-good-enough wins against excellence, because it is cheaper.

It's not clear to me that this is the case anymore. The whole startup thesis is that a small, excellent team can outperform vastly larger teams of average people. This is the opposite of how it was in the 80s.

This is just part of a larger sociological problem. It seems that "Worse Is Better" is the grand rule by which our society has been built. Take energy as an example: the human race powers itself with dead dinosaurs, a short-term gain (too long-term for me) that is buggy, ugly, and hard on our nature. Learn from mistakes to build a better future, and better software. +Velho

The font used on this page is difficult to read. I'm running current Chrome on Win7.


Note: I checked that screenshot on another machine and it didn't seem so bad... so maybe this is also something strange with the monitor/resolution I'm using. Still thought I'd mention it.

The same website on Chrome, OS X, Retina:


Thanks for taking the time to take a screenie and share it. Looks like I need to investigate why some sites' fonts look so bad on this machine.

Firefox, Windows 7 - I don't like the text either http://i.imgur.com/s3V1dwf.jpg

I agree with the points 100%.
