
What “Worse is Better vs. The Right Thing” is really about (2012) - nostrademons
http://yosefk.com/blog/what-worse-is-better-vs-the-right-thing-is-really-about.html
======
mbrock
Even more generally, I think there's something to be said about the difference
between two different mentalities, presented here in caricature form:

1. The world is tainted. Everyone is doing it wrong. I'm thinking about a
great way to solve things. You people should stop doing the bad thing. Come
on, everyone. Let's do things right. Why is everything so bad? Why is life
unfair? Why is our legacy so messy and complex? Let's start over. Let's set
things straight. Justice, peace, correctness, truth, beauty. If we just think
very hard, and come to a consensus, then we can implement something much, much
better.

2. The world is exactly the way it is. I'm not exactly sure why. Historians
are working on it. The institutions that we exist within distribute power in a
certain way. Matter is heavy and resistant to change. I don't really know how
to "improve" the world, and if I tried, I may just make it worse. Why are you
talking about abstract nonsense like "justice" and "truth"? This is just how
it is. This is what we have to work with. We are lucky if we can make a few
incremental improvements.

As in so many cases with two extremes that end up fighting unproductively, the
vast middle ground is where the interesting stuff happens.

David Chapman... incidentally a contributor to the UNIX-HATERS handbook,
decades ago... has a terminology with which I think my persona #1 would be
called an eternalist and persona #2 a nihilist, roughly speaking.

And he wants to sketch out a persona #3, who operates under what he calls the
fluid mode. That's someone who understands the viewpoints of both eternalism
and nihilism, recognizes them both as incomplete, and then valiantly works to
create things and change the world in a kind of liberated way.

So, we're not going to resolve the question in favor of either #1 or #2. Those
personas are like the two daemons on the shoulders of anyone who does
programming... or politics... or law... or urban planning... or economics...

The world would be very different if Lisp and Smalltalk hadn't existed, so
it's not like cutting edge cool stuff is worthless just because it doesn't get
adopted. I always like to see people encouraged to learn about these systems
even if their "ecosystems" aren't "mature" for "web scale" or whatever. It's
almost a matter of respect.

~~~
nickpsecurity
It might not be that we need to do one, the other, or a middle path. Instead,
we can just focus through one of the mindsets on aspects of the problem.
Remember that the projects are usually a collection of things. Each has to be
solved independently with an integration. Then there's extensions. So, we can
Right Thing key pieces of it even if not all of it. Further, as Gabriel noted,
we can start with Worse is Better to get it moving then bring aspects into
Right Thing status.

So, it's not all or nothing. My specialty, high assurance security, is Right
Thing taken to the extreme for reliability or security. By the 1980's, they
started kernelizing designs due to market forces making it impossible to apply
to whole system. By the 1990's, they started doing it incrementally to
increase uptake and financially justify the assurance work. Today, we've
wised up enough to concentrate our work on things that amplify overall
correctness, reliability, or security. Type systems, compilers, kernels,
protocols like Paxos (or Survivable Spread), model-based generation...
anything where a little investment goes a long way.

Then we can smile knowing what people are using defiantly against our
recommendations contains the results of seeds we planted earlier. Things got
better even as they got worse. :) Just gotta figure out how to do that more
often...

~~~
mbrock
Ah, middle-pathing my middle-pathing—well played!

Interesting stuff. My story roughly speaking is that I came out of Chalmers
all gung ho about dependent types and probably correct functional programming,
and now I work on a JavaScript startup with not even many unit tests (which is
partly my fault).

[I meant to say provably, but the autocorrect typo is illuminating.]

Generally I'm interested in logic in computer engineering, and I think the
idea of safe kernels is great.

The Xmonad architecture seems to me like it might be a good model for coming
architectures... Symbolically, it's nice and you can tell from just the name:
X symbolizes UNIX and worse is better; monad stands for purity and
correctness.

~~~
nickpsecurity
"My story roughly speaking is that I came out of Chalmers all gung ho about
dependent types and probably correct functional programming, and now I work on
a JavaScript startup with not even many unit tests (which is partly my
fault)."

I tell people to focus on just what delivers the most value. I came up with a
list of the few things that are empirically proven to benefit software
quality:

[http://pastebin.com/xZ6m4T8Z](http://pastebin.com/xZ6m4T8Z)

So, for Javascript, you might do code reviews, use any static analyzers you
know, decompose into functional style, have some interface checks encoding
assumptions, and so on. Simple techniques that take little time, save you much
time, and boost quality greatly. Plus, recall that you can do FP in many
languages by subset and style. ;)
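The "interface checks encoding assumptions" idea can be sketched in a few lines; here's a minimal version in Python rather than JavaScript, for brevity (the `requires` decorator and the `order_total` function are invented purely for illustration):

```python
import functools

def requires(check, message):
    """Toy contract decorator: rejects calls whose arguments
    violate an assumption encoded as a predicate."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            if not check(*args, **kwargs):
                raise ValueError(f"{fn.__name__}: {message}")
            return fn(*args, **kwargs)
        return wrapper
    return decorator

@requires(lambda price, qty: price >= 0 and qty > 0,
          "price must be non-negative and qty positive")
def order_total(price, qty):
    return price * qty
```

The point is just that the assumption lives next to the code and fails loudly at the interface, instead of corrupting state three calls later.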

Note: This doesn't even count my old strategy of making a safe, macro-enabled
4GL that compiles to a target 3GL. You can code in ML or a dependently-typed
language with constructs that map 1-to-1 to JavaScript. Then, your tool
produces JavaScript from whatever you really code in. You deliver that without
mentioning the other tool. Best of both worlds.
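The "compile a nicer language 1-to-1 to JavaScript" idea can be caricatured in a few lines of Python (the tuple-based AST and `compile_expr` are invented for illustration; a real 4GL would add type checking and macros on top):

```python
# A toy higher-level language as nested tuples, compiled 1-to-1
# to JavaScript source strings.
def compile_expr(e):
    if isinstance(e, (int, float)):
        return str(e)
    if isinstance(e, str):              # variable reference
        return e
    op, *args = e
    if op == "lambda":                  # ("lambda", ["x"], body)
        params, body = args
        return f"(({', '.join(params)}) => {compile_expr(body)})"
    if op == "call":                    # ("call", fn, arg1, ...)
        fn, *actuals = args
        return f"{compile_expr(fn)}({', '.join(map(compile_expr, actuals))})"
    if op in {"+", "-", "*", "/"}:
        a, b = args
        return f"({compile_expr(a)} {op} {compile_expr(b)})"
    raise ValueError(f"unknown form: {op}")

js = compile_expr(("call", ("lambda", ["x"], ("+", "x", 1)), 41))
# js == "((x) => (x + 1))(41)"
```

Because every construct maps 1-to-1, the emitted JavaScript stays readable and debuggable, which is what makes the "deliver the output, keep the tool" trick viable.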

I caught that: "I meant to say provably"

Not that: "but the autocorrect typo is illuminating" Lol nice catch. The
autocorrect might have corrected an entire field's thinking rather than one
person's spelling. You should contact the authors about the discovery of AI in
their software. ;)

"Xmonad architecture"

I wasn't aware of it and I don't do functional programming yet. I'll have to
add it to my list of things to check out.

Regarding provably-correct FP, what was your background: tools or projects? I
might learn something or have something you're interested in, as I collect
work in the high-assurance field. I have quite a few on that topic, including
some explorations, but lack the specialist expertise to really evaluate them.

~~~
mbrock
Yeah, focusing on the highest value is great advice. Nice list!

Regarding FP, I think it's cool that there's so much activity and open source
stuff going on, and from the community perspective it's a fresh angle for
talking about correctness and reasoning and stuff.

Xmonad was a great community project for "teaching the virtues." Being a
hacker's window manager, it was all about extensibility and configuration,
kind of like this coral reef of experimentation. John Hughes used it as a big
example when he gave talks on "real world" FP. The architecture is basically a
pure core of compositional/algebraic/combinatoric stuff, with around 100% test
coverage including lots of QuickCheck properties... surrounded by an
interpreter layer that realizes this stuff into X11 commands.

So I'm really interested (not scholarly or academically or even
professionally, just hobbyishly) in being able to express domains with actual
logic. And since my interest's trajectory starts at Haskell and goes through
Agda, I'm fascinated by the ability to unify proofs and programs (due to the
Curry-Howard equivalence of typed lambda programs and constructive logic
proofs).
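A minimal illustration of Curry-Howard, in Lean (the theorem name is arbitrary): the very same term is a program (the K combinator) and a proof of the proposition its type states.

```lean
-- Under Curry-Howard, proving an implication is writing a function:
-- this term is simultaneously the K combinator and a proof.
theorem k_axiom (p q : Prop) : p → q → p :=
  fun hp _ => hp
```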

I'm kind of waiting for more of that stuff to start growing in the open source
/ startup / GitHub / HN ecosystem. It ought to be very fruitful. With these
new dependent type languages, proving becomes like hacking—you don't need to
start as an academic logician, and you can bypass philosophical arguments
about the definition of truth, because you just want to engineer a type-
checking proof. So I think people could have fun with it. But there's a lot of
alien notation and scary stuff...

Let me nerd out for a while since you seem interested, and I wanna spread the
word about some things.

QuickCheck of course is a tool originally for Haskell that lets you very
easily verify equational properties using type-directed random value
generation. It's used in tons of real Haskell projects and it's super awesome.
Recently I was writing a thing to synchronize Reddit comment state with a Git
repository, and I used QuickCheck to verify that some JSON conversion things
were isomorphic—this is a typical case where QuickCheck can instantly find
tricky bugs and you barely have to write anything:


    quickCheck (\x -> parse (render x) == x)

gives you a powerful test suite, even if the types have lots of nesting.
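For readers without Haskell handy, the same roundtrip property can be hand-rolled over Python's stdlib `json` module; the `random_value` generator below is a crude stand-in for QuickCheck's type-directed random generation:

```python
import json
import random
import string

def random_value(depth=0):
    """Generate a random JSON-representable value (a poor man's
    QuickCheck generator; floats omitted to dodge rounding noise)."""
    choices = [lambda: random.randint(-1000, 1000),
               lambda: ''.join(random.choices(string.ascii_letters, k=5)),
               lambda: random.choice([True, False, None])]
    if depth < 2:  # bound nesting so generation terminates
        choices += [lambda: [random_value(depth + 1) for _ in range(3)],
                    lambda: {f"k{i}": random_value(depth + 1) for i in range(3)}]
    return random.choice(choices)()

# The property from the comment: parse . render == id
render, parse = json.dumps, json.loads
for _ in range(200):
    x = random_value()
    assert parse(render(x)) == x, f"roundtrip failed on {x!r}"
```

Two hundred random nested values is usually enough to flush out the classic serialization bugs (lost `None`s, stringified numbers, dropped keys) without writing a single example by hand.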

Then, a later project that's not as well known is QuickSpec.

[http://www.cse.chalmers.se/~nicsma/papers/quickspec.pdf](http://www.cse.chalmers.se/~nicsma/papers/quickspec.pdf)

[https://github.com/nick8325/quickspec](https://github.com/nick8325/quickspec)
(pretty good README)

It's kind of the inverse of QuickCheck: you would tell it to search for true
equations involving parse and render, and it would _discover_ the isomorphic
property. It does limited exhaustive search on application trees, and
randomized testing for verifying equations. So it can be a great starting
point for reasoning about your modules, if they're written in a way that makes
them amenable to algebraic reasoning.
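QuickSpec's idea can be caricatured in Python: propose candidate equations and keep only those that survive random testing. (Here the candidate set is hand-picked rather than enumerated from application trees, and all names are invented.)

```python
import json
import random
import string

def rand_str():
    return ''.join(random.choices(string.ascii_letters, k=6))

render, parse = json.dumps, json.loads

# Candidate laws over a string variable x: a drastically simplified
# stand-in for QuickSpec's exhaustive term enumeration.
candidates = {
    "parse(render(x)) == x":
        lambda x: parse(render(x)) == x,
    "render(x) == x":
        lambda x: render(x) == x,
    "render(render(x)) == render(x)":
        lambda x: render(render(x)) == render(x),
}

# Keep only the equations that hold on 100 random inputs each.
laws = [name for name, prop in candidates.items()
        if all(prop(rand_str()) for _ in range(100))]
# Only the roundtrip law survives.
```

The real tool generates the candidate terms itself, which is what makes it feel like it "discovers" the algebra of your module.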

And then the next step is HipSpec.

[http://www.cse.chalmers.se/~jomoa/papers/hipspec-atx.pdf](http://www.cse.chalmers.se/~jomoa/papers/hipspec-atx.pdf)

[https://github.com/danr/hipspec](https://github.com/danr/hipspec)

It takes equations about Haskell functions, and then uses off-the-shelf
automated theorem provers, combined with automatic lemmas from QuickSpec, to
generate formal proofs automatically.

All this is made to work on actual Haskell code, and I think it points out a
really cool path for getting hackers to bother with proofs and equations.

What I haven't seen so much yet, and this is one of my biggest curiosities /
vague ambitions, is how to take different diverse domain models and extract an
algebraic core.

Xmonad did it for window managing, which is super nice. I'd love to see more
of that in different domains.

As I see it, that's what's going to make more people see a tangible value in
otherwise abstract seeming stuff like equational reasoning.

This is already long enough, but a quick pointer to another eccentric interest
I have, due to my brother who's great at finding value in seemingly obscure
realms, and sticking with it even though people say he's crazy...

Interactive fiction is all about modelling objects and situations in a way
that's comprehensible to humans, especially if you look at e.g. Inform 7,
which uses English grammar as the basis for its declaration language, and has
a very nice declarative modelling paradigm, quite novel stuff.

So that in itself is awesome, and I'd love to see some non-fiction programs
written in Inform 7. It compiles to a virtual machine that as far as I know is
decently fast and has I/O capabilities.

Then there's work by Chris Martens, which has been linked on HN at some point,
on using linear logic as the basis for interactive fiction and general game
prototyping.

> _My thesis project is a programming language for the design of interactive
> narratives and game mechanics. The language is based on forward-chaining
> linear logic programming, a way of declaratively describing state change.
> This methodology makes it feasible to encode generative rules that create
> procedural content for interactive simulations that give rise to emergent
> narratives._

> _The language semantics' basis in proof theory enables a structural
> understanding of these narratives, making it possible to analyze them for
> concurrent behavior among multiple agents. On a larger timescale, I imagine
> growing the technology underlying this language into a high-level sketching
> tool for game designers, usable for rapid prototyping and iteration._

[http://www.cs.cmu.edu/~cmartens/](http://www.cs.cmu.edu/~cmartens/)
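The forward-chaining linear-logic idea is easy to sketch: facts are resources, and firing a rule consumes its preconditions (unlike classical logic, where facts persist). A toy Python version, with an invented story world:

```python
import random

# Toy forward-chaining linear logic for narrative generation:
# each rule consumes a set of facts and produces a new set.
rules = [
    ({"has_key", "door_locked"}, {"door_open"}),
    ({"door_open", "outside_dark"}, {"hero_inside"}),
]

def step(state):
    random.shuffle(rules)                # nondeterministic rule choice
    for pre, post in rules:
        if pre <= state:                 # all required resources present?
            return (state - pre) | post  # consume, then produce
    return None                          # quiescence: the story is over

state = {"has_key", "door_locked", "outside_dark"}
trace = [set(state)]
while (nxt := step(state)) is not None:
    state = nxt
    trace.append(set(state))
# state ends as {"hero_inside"}: the key and the locked door were
# consumed opening the door, which was consumed entering.
```

Because facts are consumed, the trace of rule firings *is* the narrative, and different random choices yield different emergent stories; Martens's system builds proof-theoretic analysis on top of exactly this kind of state change.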

So my crystal ball shows some really fascinating and awesome stuff coming out
of that whole tradition in the somewhat near future...

Um... let's see if I'm even allowed to post this long comment...

------
marktangotango
This article is a really good discussion of Richard Gabriel's essay, 'The Rise
of Worse is Better', which should be, imo, required reading for all
technologists (along with 'Mythical Man Month' and 'Soul of a New Machine').
The author's testimony resonated with me. Once I realized that I'd never get
adequate requirements or time to fully implement a feature, and decided to
just solve the problem in front of me, I became happier in my career.
Consequently more successful too.

------
0xcde4c3db
This reminds me of a quotation from an interview with a George W. Bush
administration official (widely understood to be Karl Rove, though I'm not
sure that's ever been substantiated) that made the rounds raising hackles and
inspiring slogans [1] in the left-leaning blogosphere ca. 10 years ago. The
emphasis here is more on study versus action, but I think there's clearly
some parallel:

> The aide said that guys like me were "in what we call the reality-based
> community," which he defined as people who "believe that solutions emerge
> from your judicious study of discernible reality." I nodded and murmured
> something about enlightenment principles and empiricism. He cut me off.
> "That's not the way the world really works anymore," he continued. "We're an
> empire now, and when we act, we create our own reality. And while you're
> studying that reality -- judiciously, as you will -- we'll act again,
> creating other new realities, which you can study too, and that's how things
> will sort out. We're history's actors . . . and you, all of you, will be
> left to just study what we do." [2]

[1] For a time, it was semi-common for left-leaning blogs to advertise
themselves as part of "the reality-based community".

[2] [http://www.nytimes.com/2004/10/17/magazine/faith-certainty-and-the-presidency-of-george-w-bush.html](http://www.nytimes.com/2004/10/17/magazine/faith-certainty-and-the-presidency-of-george-w-bush.html)

~~~
rntz
Taken charitably, it sounds like the official is describing the state-level
equivalent of _operating inside an enemy's OODA loop_ (cf
[https://en.wikipedia.org/wiki/OODA_loop](https://en.wikipedia.org/wiki/OODA_loop),
brought to my attention and reasonably well-explained by
[http://thefederalist.com/2015/12/16/military-strategist-explains-why-donald-trump-leads-and-how-he-will-fail/](http://thefederalist.com/2015/12/16/military-strategist-explains-why-donald-trump-leads-and-how-he-will-fail/))
- roughly, acting so quickly as to make the enemy's response obsolete by the
time they go through with it.

Worse-but-Better solutions make it to market dominance before the Right Thing
has a chance to get off the ground; and then the Right Thing is no longer the
right thing, because the reality has changed.

Taken uncharitably, of course, it sounds like puffed-up nonsense, which
perhaps it is.

------
gerbilly
I guess i'm a 'right thing' developer.

I'm sure there are multiple constraints besides economic viability that could
drive you to seek the 'right thing.'

I've noticed there are two types of developers: those that find fighting fires
rewarding and enjoy the accolades that come with 'saving the day' and those
that find fighting fires punishing, and that perceive requests to fix code in
a hurry to be an intrusion.

I'm going to call them 'extrovert' and 'introvert' programmers.

I am an introvert programmer, and for me the 'right thing' has always meant
code that I can put in production and forget about.

I also believe that introverts are in the minority, which means that when on a
team introverts have to fight fires along with all the extroverts, even though
their own code runs mostly without incident.

------
donatj
The argument that the Worse is Better solution is flawed is, imho, incorrect.
It's far more perfect than The Right Way; it's just a solution to a more
perfect problem. Instead of increasing the complexity of the program, decrease
the complexity of the problem itself, and you end up with a much more
reasonable result.

------
DonaldFisk
Those interested, who have read RPG's original article, might want to learn
more about his example of "the right thing", PC-lusering, by reading PCLSRing:
Keeping Process State Modular by Alan Bawden
([http://fare.tunes.org/tmp/emergent/pclsr.htm](http://fare.tunes.org/tmp/emergent/pclsr.htm)),
and then perhaps learn more about ITS, which can still be run on a PDP-10
simulator: [http://its.victor.se/wiki/](http://its.victor.se/wiki/) Those who
don't want to go too far down that particular rabbit hole could do worse than
read the first third of Steven Levy's Hackers, which is about the MIT AI Lab.

The reason ITS is rarely used now is more mundane than you might infer from
RPG's articles: it was machine specific, being mostly written in MIDAS
assembly language, and needs a PDP-10 to run it on, and DEC stopped making
them. Unix is written mostly in C and is portable.

------
mlangdon
To pick on one paragraph, it's not that the left thinks "The Market" is evil.
It's neither good nor evil (amoral, not immoral), so it's irrational to count
on it to be a force for good in the world.

~~~
vezzy-fnord
Amorality in an economic system is a good thing. It means that participants
impute their values within the institutional framework. An economic system
actually designed to be "moral" (I can't think of anything other than Marxism-
Leninism, everything else is hypothetical cost-the-limit-of-price, anti-usury,
Social Credit and mutual aid arrangements) would be not only inflexible, but
such a morality would necessarily emanate from a top-down institution that is
immoral itself.

~~~
TheOtherHobbes
I'm not clear how that would be different to the top-down economy and moral
pretensions of bankers and investors that we have now.

When phrases like "moral hazard" are used to describe gambling risk, "amoral"
is hardly the most apt description of a system that actually tries to define
social and political morality for the entire world of work and business.

The reality is that mainstream economics has always been more a branch of
moral philosophy than of empirical science. It's a tool of persuasion that
tries to propagate its values through rhetoric and the use of economic,
political and physical force.

That's quite close to the usual definition of a priesthood. It's only "amoral"
in the sense that the ethics of the priesthood are quite alien to those of
many adult humans.

~~~
mlangdon
This is what I'm saying: only if you think unfettered free marketeering is an
unalloyed good would you propose that the antithetical position is that the
market is evil.

If I say I want restrictions on the market so that our planet is still
liveable in 2100, I am not saying the market is evil. I'm merely stating my
moral (in that there is a value judgment) position in contrast to the "free"
market moral position. If I say that unfettered markets lead to evil, I'm
merely contending with that value judgment, not whether there should be a
market in general. There's an incredible amount of space between a rampant
libertarian market and Communism. It's childish to pretend otherwise.

------
mempko
I'm going to repeat the comment I made on the website here.

I think the history of technology is this. Grand visions that are publicly
funded (Computers, The Internet, etc) have been since privatized and destroyed
by the market.

I don't think you should compare Smalltalk vs Linux, but Alan Kay's Desktop
GUI and Dynabook to today's UIs and the iPad.

It isn't a history of competing ideas that the market choose is correct. It is
the history of grand ideas incubated in the public sector and then further
distilled and misunderstood by the private sector.

The reality is the Market has little role in making the decision between the
grand idea and the distilled one. Consumers never had the opportunity to
decide. Consumers were always given worse ideas to choose from to begin with.

Just like when you walk into a supermarket: you are given options that were
decided for you, that were filtered through thousands of decisions made
outside the market beforehand.

------
wellpast
I think there can exist a perhaps rarer beast that values BOTH perfection and
viability.

What this one does is keep their obsessive-compulsive eye on what is the
perfect goal, knowing full well it may never be reached, then turn to making
the _process_ itself perfect -- where a perfect process is defined as this:
perfectly viable _while_ maintaining a minimal distance to the perfect goal.

------
qwertyboy
"Worse is better" is an incomplete sentence.

Different folks complete it in different ways. Some say that worse is better
than complex. Others claim it's better than slow. Others still that it's
better than incompatible. The point of the posted link is that worse is better
than nothing. One can hardly argue with that.

~~~
rntz
> The point of the posted link is that worse is better than nothing.

No, that's inaccurate.

For instance, one example the article used to support WiB was that x86 (a CISC
architecture) beat out RISC architectures. There were many RISC architectures:
MIPS, SPARC, DEC Alpha, PA-RISC. So it's not a case of worse is better than
_nothing_. x86 won against real competition, because it took advantage of
evolutionary pressures.

~~~
qwertyboy
The article contains several examples for the different interpretations. CISC
vs. RISC is an example of "worse is better than incompatible". Unix vs. lisp-
machine is an example of "worse is better than complex". But the bottom line,
for this specific author, is that worse is better than nothing.

------
rumcajz
Nice essay. But framing the problem in terms of evolutionary vs. counter-
evolutionary is not useful. Of course the evolutionary is going to win. It
wins by definition. Evolution is the survival of the fittest. Being
evolutionary means betting on the winner.

------
nickpsecurity
He beat me to it, as I'm mentally working on a similar essay. I already
covered backward compatibility and shipping pressure a lot in my posts on
Schneier's blog elaborating on this. See Steve Lipner's Ethics of Perfection
essay for a great take on the "ship first, fix later" mentality. He had
previously done a high-assurance, secure VMM, so he had been on both sides.

[https://web.archive.org/web/20150319122817/http://blogs.microsoft.com/cybertrust/2007/08/23/the-ethics-of-perfection/](https://web.archive.org/web/20150319122817/http://blogs.microsoft.com/cybertrust/2007/08/23/the-ethics-of-perfection/)

On backward compatibility, you need to explore lock-in and network effects.
These are the strongest drivers of the revenues of the biggest tech firms.
Once you get the market with shipping, people will start building on top of
and around your solution. They get stuck with it after they do that enough to
make it hard to move. Familiarity with language or platform matters here, too.
The economics become more monopolistic where you determine just enough
additions to keep them from moving.

I agree with a commenter there that it needs an OpenVMS tie-in: a great example
of Right Thing vs Worse is Better that _won_ in market. While their management
was good. ;) It had better security architecture, individual servers went
years without reboot, mainframe-like features (eg batch & transactions),
cross-language development of apps, clustering in 1980's, more English-like
command language, management tech, something like email... the whole kitchen
sink all integrated & consistent. Reason was it was a company of engineers
making what they themselves would like to use then selling it to others to
sustain it. Also mandated quality where they'd develop for a week, run tests
over weekend, fix problems for a week, and repeat. That's why sysadmins forgot
how to reboot them sometimes. ;)

[https://en.wikipedia.org/wiki/OpenVMS](https://en.wikipedia.org/wiki/OpenVMS)

Here's a few others that fall under Cathedral and Right Thing model that got
great results with vastly fewer people than Worse is Better and/or were
successful in the market. Burroughs and System/38 still exist as Unisys MCP
and IBM i respectively. Lilith/Oberon tradition of safe, easy-to-analyze, and
still fast lives on in Go language designed to recreate it. There's nothing
like Genera anymore but Franz Allegro CL still has a consistent, do-about-
anything experience. QNX deserves mention since it's a Cathedral counter to
UNIX where they implemented POSIX OS with real-time predictability, fault-
isolation via microkernel, self-healing capabilities, and still very fast.
Still sold commercially and was how Blackberry Playbook smashed iPad in
comparisons I saw. They once put a whole desktop (w/ GUI & browser) on a
floppy with it. Throw in BeOS demo showing what its great concurrency
architecture could do for desktops. Remember this was mid-1990's, mentally
compare to your Win95 (or Linux lol) experience, and let your jaw drop. Mac OS
X, due to Nextstep, could probably be called a Cathedral or Right Thing that
made it in market, too.

[http://www.smecc.org/The%20Architecture%20%20of%20the%20Burr...](http://www.smecc.org/The%20Architecture%20%20of%20the%20Burroughs%20B-5000.htm)

[https://homes.cs.washington.edu/~levy/capabook/Chapter8.pdf](https://homes.cs.washington.edu/~levy/capabook/Chapter8.pdf)

[https://en.wikipedia.org/wiki/Lilith_%28computer%29](https://en.wikipedia.org/wiki/Lilith_%28computer%29)

[http://www.symbolics-dks.com/Genera-why-1.htm](http://www.symbolics-dks.com/Genera-why-1.htm)

[http://www.qnx.com/products/neutrino-rtos/neutrino-rtos.html#technology](http://www.qnx.com/products/neutrino-rtos/neutrino-rtos.html#technology)

[https://youtu.be/cjriSNgFHsM?t=16m5s](https://youtu.be/cjriSNgFHsM?t=16m5s)

So, more food for thought. The thing the long-term winners had in common is
that (a) they grabbed a market, (b) they held it long enough for legacy
code/user-base to build, (c) they incrementally added what people wanted, and
(d) they stuck around due to the legacy effect from there. Seems to be the
only proven model. It can be The Right Thing or Worse is Better so long as it
has those components. So, we Right Thing lovers can continue trying to make
the world look more Right. :)

------
walid
This article is nothing more than ideological bullshit.

------
geertj
A new example of worse is better vs the right thing: the containers movement
(esp Kubernetes) vs OpenStack.

Before that, (the early) AWS vs Grid computing/OGF.

