Even more generally, I think there's something to be said about the difference between two different mentalities, presented here in caricature form:
1. The world is tainted. Everyone is doing it wrong. I'm thinking about a great way to solve things. You people should stop doing the bad thing. Come on, everyone. Let's do things right. Why is everything so bad? Why is life unfair? Why is our legacy so messy and complex? Let's start over. Let's set things straight. Justice, peace, correctness, truth, beauty. If we just think very hard, and come to a consensus, then we can implement something much, much better.
2. The world is exactly the way it is. I'm not exactly sure why. Historians are working on it. The institutions that we exist within distribute power in a certain way. Matter is heavy and resistant to change. I don't really know how to "improve" the world, and if I tried, I may just make it worse. Why are you talking about abstract nonsense like "justice" and "truth"? This is just how it is. This is what we have to work with. We are lucky if we can make a few incremental improvements.
As in so many cases with two extremes that end up fighting unproductively, the vast middle ground is where the interesting stuff happens.
David Chapman... incidentally a contributor to the UNIX-HATERS Handbook, decades ago... has a terminology in which, roughly speaking, my persona #1 would be called an eternalist and persona #2 a nihilist.
And he wants to sketch out a persona #3, who operates under what he calls the fluid mode. That's someone who understands the viewpoints of both eternalism and nihilism, recognizes them both as incomplete, and then valiantly works to create things and change the world in a kind of liberated way.
So, we're not going to resolve the question in favor of either #1 or #2. Those personas are like the two daemons on the shoulders of anyone who does programming... or politics... or law... or urban planning... or economics...
The world would be very different if Lisp and Smalltalk hadn't existed, so it's not like cutting edge cool stuff is worthless just because it doesn't get adopted. I always like to see people encouraged to learn about these systems even if their "ecosystems" aren't "mature" for "web scale" or whatever. It's almost a matter of respect.
It might not be that we need to do one, the other, or a middle path. Instead, we can apply one of the mindsets to particular aspects of the problem. Remember that projects are usually collections of pieces: each has to be solved independently, then integrated, and then there are extensions. So, we can Right Thing key pieces of it even if not all of it. Further, as Gabriel noted, we can start with Worse is Better to get it moving, then bring aspects up to Right Thing status.
So, it's not all or nothing. My specialty, high assurance security, is the Right Thing taken to the extreme for reliability or security. By the 1980's, practitioners started kernelizing designs because market forces made it impossible to apply the approach to whole systems. By the 1990's, they were doing it incrementally to increase uptake and financially justify the assurance work. Today, we've wised up enough to concentrate our work on things that amplify overall correctness, reliability, or security: type systems, compilers, kernels, protocols like Paxos (or Survivable Spread), model-based generation... anything where a little investment goes a long way.
Then we can smile knowing that what people defiantly use against our recommendations contains the results of seeds we planted earlier. Things got better even as they got worse. :) Just gotta figure out how to do that more often...
Interesting stuff. My story roughly speaking is that I came out of Chalmers all gung ho about dependent types and probably correct functional programming, and now I work on a JavaScript startup with not even many unit tests (which is partly my fault).
[I meant to say provably, but the autocorrect typo is illuminating.]
Generally I'm interested in logic in computer engineering, and I think the idea of safe kernels is great.
The Xmonad architecture seems to me like it might be a good model for coming architectures... Symbolically, it's nice and you can tell from just the name: X symbolizes UNIX and worse is better; monad stands for purity and correctness.
"My story roughly speaking is that I came out of Chalmers all gung ho about dependent types and probably correct functional programming, and now I work on a JavaScript startup with not even many unit tests (which is partly my fault)."
I tell people to focus on just what delivers the most value. I came up with a list of the few things that are empirically proven to benefit software quality:
So, for JavaScript, you might do code reviews, use any static analyzers you know, decompose into functional style, have some interface checks encoding assumptions, and so on. Simple techniques that take little time, save you much time, and boost quality greatly. Plus, recall that you can do FP in many languages by subset and style. ;)
Note: This doesn't even count my old strategy of making a safe, macro-enabled 4GL that compiles to a target 3GL. You can code up shit in ML or a dependent language with constructs that map 1-to-1 to JavaScript. Then, your tool produces JavaScript from whatever you really code in. You deliver that without mentioning the other tool. Best of both worlds.
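That strategy can be sketched in a few lines of Haskell: a tiny expression EDSL whose constructors map 1-to-1 to JavaScript output. The types and names here are made up for illustration; a real 4GL would carry a type checker, macros, and far more constructs.

```haskell
-- A toy illustration of the "code in a safer language, emit JavaScript"
-- strategy: each constructor corresponds to exactly one JS construct.
data Expr
  = Num Double
  | Var String
  | Add Expr Expr
  | Mul Expr Expr
  | Fun String Expr        -- one-argument function

-- Compile an expression to JavaScript source text.
toJS :: Expr -> String
toJS (Num n)   = show n
toJS (Var x)   = x
toJS (Add a b) = "(" ++ toJS a ++ " + " ++ toJS b ++ ")"
toJS (Mul a b) = "(" ++ toJS a ++ " * " ++ toJS b ++ ")"
toJS (Fun x b) = "function (" ++ x ++ ") { return " ++ toJS b ++ "; }"

main :: IO ()
main = putStrLn (toJS (Fun "x" (Add (Mul (Var "x") (Var "x")) (Num 1))))
```

Because the mapping is 1-to-1, anything the checker accepts on the Haskell side translates directly, and you ship only the generated JavaScript.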
I caught that: "I meant to say provably"
Not that: "but the autocorrect typo is illuminating" Lol nice catch. The autocorrect might have corrected an entire field's thinking rather than one person's spelling. You should contact the authors about the discovery of AI in their software. ;)
"Xmonad architecture"
I wasn't aware of it and I don't do functional programming yet. I'll have to add it to my list of things to check out.
Regarding provably-correct FP, what was your background: tools or projects? I might learn something, or have something you're interested in, as I collect work in the high assurance field. I have quite a few on that topic, including some explorations, but lack the specialist expertise to really evaluate them.
Yeah, focusing on the highest value is great advice. Nice list!
Regarding FP, I think it's cool that there's so much activity and open source stuff going on, and from the community perspective it's a fresh angle for talking about correctness and reasoning and stuff.
Xmonad was a great community project for "teaching the virtues." Being a hacker's window manager, it was all about extensibility and configuration, kind of like this coral reef of experimentation. John Hughes used it as a big example when he gave talks on "real world" FP. The architecture is basically a pure core of compositional/algebraic/combinatoric stuff, with around 100% test coverage including lots of QuickCheck properties... surrounded by an interpreter layer that realizes this stuff into X11 commands.
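That pure-core/effectful-shell split is easy to sketch. The types below are toy stand-ins (not xmonad's actual StackSet), but they show why the core is so testable: window operations are plain data transformations, so properties like focusPrev undoing focusNext can be checked without ever touching X11.

```haskell
-- Pure core: a zipper-like focus stack of window ids.
data Stack = Stack { before :: [Int], focus :: Int, after :: [Int] }
  deriving (Eq, Show)

-- Move focus forward; identity at the end of the stack.
focusNext :: Stack -> Stack
focusNext (Stack bs f (a:as)) = Stack (f:bs) a as
focusNext s = s

-- Move focus backward; identity at the start of the stack.
focusPrev :: Stack -> Stack
focusPrev (Stack (b:bs) f as) = Stack bs b (f:as)
focusPrev s = s

-- Impure shell: interprets the resulting state into real-world commands
-- (in xmonad, X11 calls; here, just printing).
render :: Stack -> IO ()
render s = putStrLn ("focus window " ++ show (focus s))

main :: IO ()
main = render (focusNext (Stack [] 1 [2,3]))
```

In xmonad proper, the QuickCheck properties live against the pure layer, which is how the near-total coverage stays cheap.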
So I'm really interested (not scholarly or academically or even professionally, just hobbyishly) in being able to express domains with actual logic. And since my interest's trajectory starts at Haskell and goes through Agda, I'm fascinated by the ability to unify proofs and programs (due to the Curry-Howard equivalence of typed lambda programs and constructive logic proofs).
I'm kind of waiting for more of that stuff to start growing in the open source / startup / GitHub / HN ecosystem. It ought to be very fruitful. With these new dependent type languages, proving becomes like hacking—you don't need to start as an academic logician, and you can bypass philosophical arguments about the definition of truth, because you just want to engineer a type-checking proof. So I think people could have fun with it. But there's a lot of alien notation and scary stuff...
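For anyone curious, the Curry-Howard reading is visible even in plain Haskell, well short of Agda: a total polymorphic function is a proof of the propositional formula its type spells out, with -> read as implication.

```haskell
-- Modus ponens: from A and A -> B, conclude B.
-- The only way to write this total function is the proof itself.
modusPonens :: a -> (a -> b) -> b
modusPonens x f = f x

-- Transitivity of implication: (A -> B) -> (B -> C) -> (A -> C).
trans :: (a -> b) -> (b -> c) -> (a -> c)
trans f g = g . f

main :: IO ()
main = print (modusPonens (3 :: Int) (+1), trans (+1) (*2) (3 :: Int))
```

Dependent types extend this from propositional tautologies to real theorems about your data, which is where Agda picks up.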
Let me nerd out for a while since you seem interested, and I wanna spread the word about some things.
QuickCheck of course is a tool originally for Haskell that lets you very easily verify equational properties using type-directed random value generation. It's used in tons of real Haskell projects and it's super awesome. Recently I was writing a thing to synchronize Reddit comment state with a Git repository, and I used QuickCheck to verify that some JSON conversion things were isomorphic—this is a typical case where QuickCheck can instantly find tricky bugs and you barely have to write anything:
quickCheck (\x -> parse (render x) == x)
gives you a powerful test suite, even if the types have lots of nesting.
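For a concrete feel, here's the whole round-trip pattern spelled out with a hand-rolled check over a few fixed inputs, using only base. The render/parse pair is a toy stand-in for the JSON conversions mentioned above; in real use, QuickCheck generates the random inputs for you from the types.

```haskell
import Data.List (intercalate)

-- Toy stand-ins for a serialization pair (not the actual JSON code).
render :: [Int] -> String
render xs = intercalate "," (map show xs)

parse :: String -> [Int]
parse "" = []
parse s  = map read (split s)
  where
    split str = case break (== ',') str of
      (chunk, [])     -> [chunk]
      (chunk, _:rest) -> chunk : split rest

-- The round-trip (isomorphism) property QuickCheck would test randomly.
roundTrip :: [Int] -> Bool
roundTrip xs = parse (render xs) == xs

main :: IO ()
main = print (all roundTrip [[], [1], [1,2,3], [-5, 0, 42]])
```

The one-liner in the comment is exactly this `roundTrip`, handed to `quickCheck` so that the fixed list of examples is replaced by hundreds of generated ones.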
Then, a later project that's not as well known is QuickSpec.
It's kind of the inverse of QuickCheck: you would tell it to search for true equations involving parse and render, and it would discover the isomorphism property itself. It does a limited exhaustive search over application trees, with randomized testing to verify candidate equations. So it can be a great starting point for reasoning about your modules, if they're written in a way that makes them amenable to algebraic reasoning.
Then there's HipSpec, which builds on it: it takes equations about Haskell functions, and then uses off-the-shelf automated theorem provers, combined with automatic lemmas from QuickSpec, to generate formal proofs automatically.
All this is made to work on actual Haskell code, and I think it points out a really cool path for getting hackers to bother with proofs and equations.
What I haven't seen so much yet, and this is one of my biggest curiosities / vague ambitions, is how to take different diverse domain models and extract an algebraic core.
Xmonad did it for window managing, which is super nice. I'd love to see more of that in different domains.
As I see it, that's what's going to make more people see a tangible value in otherwise abstract seeming stuff like equational reasoning.
This is already long enough, but a quick pointer to another eccentric interest I have, due to my brother who's great at finding value in seemingly obscure realms, and sticking with it even though people say he's crazy...
Interactive fiction is all about modelling objects and situations in a way that's comprehensible to humans, especially if you look at e.g. Inform 7, which uses English grammar as the basis for its declaration language, and has a very nice declarative modelling paradigm, quite novel stuff.
So that in itself is awesome, and I'd love to see some non-fiction programs written in Inform 7. It compiles to a virtual machine that as far as I know is decently fast and has I/O capabilities.
Then there's work by Chris Martens, which has been linked on HN at some point, on using linear logic as the basis for interactive fiction and general game prototyping.
> My thesis project is a programming language for the design of interactive narratives and game mechanics. The language is based on forward-chaining linear logic programming, a way of declaratively describing state change. This methodology makes it feasible to encode generative rules that create procedural content for interactive simulations that give rise to emergent narratives.
> The language semantics' basis in proof theory enables a structural understanding of these narratives, making it possible to analyze them for concurrent behavior among multiple agents. On a larger timescale, I imagine growing the technology underlying this language into a high-level sketching tool for game designers, usable for rapid prototyping and iteration.
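To give a flavor of what forward chaining over linear resources looks like, here's a toy multiset rewriter in Haskell. This is only a sketch of the general idea, not Chris Martens's actual system: facts are consumed linearly when a rule fires, and the trace of fired rules reads like a tiny emergent narrative.

```haskell
import Data.List (delete)

-- Facts are linear resources; a rule consumes some and produces others.
type Fact = String
data Rule = Rule { name :: String, consumes :: [Fact], produces :: [Fact] }

-- Remove all of a rule's premises from the state; Nothing if any is missing.
consume :: [Fact] -> [Fact] -> Maybe [Fact]
consume [] st = Just st
consume (f:fs) st
  | f `elem` st = consume fs (delete f st)
  | otherwise   = Nothing

-- Fire the first applicable rule, yielding its name and the new state.
step :: [Rule] -> [Fact] -> Maybe (String, [Fact])
step rs st = go rs
  where
    go [] = Nothing
    go (r:rest) = case consume (consumes r) st of
      Just st' -> Just (name r, produces r ++ st')
      Nothing  -> go rest

-- Chain forward until no rule applies; the trace is the "story".
run :: [Rule] -> [Fact] -> [String]
run rs st = case step rs st of
  Nothing       -> []
  Just (n, st') -> n : run rs st'

rules :: [Rule]
rules =
  [ Rule "open door"  ["has key", "door locked"] ["door open"]
  , Rule "enter room" ["door open"]              ["inside"]
  ]

main :: IO ()
main = print (run rules ["has key", "door locked"])
```

The linearity is the interesting part: "has key" and "door locked" are gone once the door is opened, so state change falls out of the logic rather than being bolted on.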
> So, we're not going to resolve the question in favor of either #1 or #2. Those personas are like the two daemons on the shoulders of anyone who does programming... or politics... or law... or urban planning... or economics...
Agreed...it's the fluid center, the moderate, the worker ant that actually moves things forward by taking the best the extremes--idealists, pragmatists--have to offer and making things work on a practical level...the "persona #3" you allude to...
I'm reminded of a quote from Robert Pirsig, author of Zen and the Art of Motorcycle Maintenance...
"The test of the machine is the satisfaction it gives you. There isn't any other test. If the machine produces tranquility it's right. If it disturbs you it's wrong until either the machine or your mind is changed."
That tranquility...that satisfaction...that's it in a nutshell, whether the "machine" is simple or complex...the route to producing that machine doesn't much matter...the end result does...
Nice connection! I haven't read that book in like 10 years.
I think the worker bee can very well enjoy having a glass of wine with #1 and #2... dreaming about the astral planes of Platonic perfection, and complaining about the miserable present... and somewhere in that conversation, an idea is sparked, and a couple of weeks later, magic starts to happen!
So true. Good engineering is making the right tradeoffs, nothing is ever fully black or white. Whenever something is "pure", it usually sucks - not from a philosophical, but from a practical, real world standpoint.
Why is it that we tend to fall into the simplifying pureness trap? Is it the brain trying to reduce its energy consumption by not thinking too much?
I'd rather say that the "pure" is a source of inspiration... which can be good or bad.
For a kind of outlandish but strong example, look at how monotheism can inspire people to extreme struggles.
Or look at how the idea of a coherent and final constitution inspires American politics and the struggle for "a more perfect union."
And yet with all of these things, there's a lot of plumbing and compromise and evolution and war required, too.
Sometimes the pure ideal turns out to be, in hindsight, just a kind of arbitrary notion. Like if your whole ideology is based on God, and then people start to just not believe in that God at all. Then you get a renaissance, but also confusion.
So, like, I love Haskell, and I love to see it grow and develop, but I'm also aware that its strong ideals can from another perspective be considered false idols. Still, ideals are great for rallying around.
Some of what I'm trying to say here is articulated by Richard Rorty, through his concept of "ironism" in Contingency, Irony, and Solidarity, which I recommend!
This article is a really good discussion of Richard Gabriel's essay, 'The Rise of Worse is Better', which should be, imo, required reading for all technologists (along with 'The Mythical Man-Month' and 'The Soul of a New Machine'). The author's testimony resonated with me: once I realized that I'd never get adequate requirements or time to fully implement a feature, and learned to just solve the problem in front of me, I became happier in my career. Consequently more successful, too.
This reminds me of a quotation from an interview with a George W. Bush administration official (widely understood to be Karl Rove, though I'm not sure that's ever been substantiated) that made the rounds, raising hackles and inspiring slogans [1] in the left-leaning blogosphere ca. 10 years ago. The emphasis there is more on study versus action, but I think there's clearly some parallel:
> The aide said that guys like me were "in what we call the reality-based community," which he defined as people who "believe that solutions emerge from your judicious study of discernible reality." I nodded and murmured something about enlightenment principles and empiricism. He cut me off. "That's not the way the world really works anymore," he continued. "We're an empire now, and when we act, we create our own reality. And while you're studying that reality -- judiciously, as you will -- we'll act again, creating other new realities, which you can study too, and that's how things will sort out. We're history's actors . . . and you, all of you, will be left to just study what we do." [2]
[1] For a time, it was semi-common for left-leaning blogs to advertise themselves as part of "the reality-based community".
Worse-but-Better solutions make it to market dominance before the Right Thing has a chance to get off the ground; and then the Right Thing is no longer the right thing, because the reality has changed.
Taken uncharitably, of course, it sounds like puffed-up nonsense, which perhaps it is.
I just read your NYTimes article and I will consider it one of the most telling and interesting high-level analyses of the Bush Presidency.
There are a number of lessons to glean from it that go beyond the topic set by the OP.
This one about sleight-of-hand language resonated:
> I said, "Mr. President, if we don't devote our energy, our focus and our time on also overcoming global poverty and desperation, we will lose not only the war on poverty, but we'll lose the war on terrorism."
> Bush replied that that was why America needed the leadership of Wallis and other members of the clergy.
> "No, Mr. President," Wallis says he told Bush, "We need your leadership on this question, and all of us will then commit to support you. Unless we drain the swamp of injustice in which the mosquitoes of terrorism breed, we'll never defeat the threat of terrorism."
> Bush looked quizzically at the minister, Wallis recalls. They never spoke again after that.
I'm sure there are multiple constraints besides economic viability that could drive you to seek the 'right thing.'
I've noticed there are two types of developers: those that find fighting fires rewarding and enjoy the accolades that come with 'saving the day' and those that find fighting fires punishing, and that perceive requests to fix code in a hurry to be an intrusion.
I'm going to call them 'extrovert' and 'introvert' programmers.
I am an introvert programmer, and for me the 'right thing' has always meant code that I can put in production and forget about.
I also believe that introverts are in the minority, which means that when on a team introverts have to fight fires along with all the extroverts, even though their own code runs mostly without incident.
The argument that the Worse is Better solution is flawed is, imho, incorrect. It's far more perfect than The Right Way; it's just a solution to a more perfect problem. Instead of increasing the complexity of the program, decrease the complexity of the problem itself, and you end up with a much more reasonable result.
Those interested, who have read RPG's original article, might want to learn more about his example of "the right thing", PC-lusering, by reading PCLSRing: Keeping Process State Modular by Alan Bawden (http://fare.tunes.org/tmp/emergent/pclsr.htm), and then perhaps learn more about ITS, which can still be run on a PDP-10 simulator: http://its.victor.se/wiki/
Those who don't want to go too far down that particular rabbit hole could do worse than read the first third of Steven Levy's Hackers, which is about the MIT AI Lab.
The reason ITS is rarely used now is more mundane than you might infer from RPG's articles: it was machine specific, being mostly written in MIDAS assembly language, and needed a PDP-10 to run on, and DEC stopped making them. Unix is written mostly in C and is portable.
To pick on one paragraph, it's not that the left thinks "The Market" is evil. It's neither good nor evil (amoral, not immoral), so it's irrational to count on it to be a force for good in the world.
Amorality in an economic system is a good thing. It means that participants impute their values within the institutional framework. An economic system actually designed to be "moral" (I can't think of anything other than Marxism-Leninism, everything else is hypothetical cost-the-limit-of-price, anti-usury, Social Credit and mutual aid arrangements) would be not only inflexible, but such a morality would necessarily emanate from a top-down institution that is immoral itself.
But are the values of the participants really the goals of a free market economy and what it leads to? I think absolutely not - it's earning as much money as possible.
The things we actually see as good are just side effects. Free market systems work well, seem fair most of the time and have lots of positive side effects. However they also have negative side effects and I don't think participants alone can really avoid them. There are some cases of executives later on regretting that they couldn't act in another way because they had to keep shareholder value in mind.
That's why we have regulation in our actual economic systems. Of course, views on what has to be regulated, and how, differ widely, but this is where our morality comes into the economic system (along with all the other rules that act as a countermeasure for the cases where the free market economy isn't working).
Profit maximization is one goal among market participants, though not the only one, since the market also encompasses modes of organization embarked on for communal and personal reasons as well.
Profits are a motivation to act, but they may be nonpecuniary and psychic. Nor is "earning money" the ultimate goal in any sense, insofar as holding nominal money balances as a store of value becomes intractable with greater capacity. Now, subpar monetary and financial arrangements may distort savings-investment decisions, but this is not an intrinsic market deficiency.
Regretting your decisions ex post is inherent to humanity, not to economic systems.
Regulation can and has emerged endogenously. Guilds, unions, standards agencies and other quality-assurance bodies arise without state action.
I'm not clear how that would be different to the top-down economy and moral pretensions of bankers and investors that we have now.
When phrases like "moral hazard" are used to describe gambling risk, "amoral" is hardly the most apt description of a system that actually tries to define social and political morality for the entire world of work and business.
The reality is that mainstream economics has always been more a branch of moral philosophy than of empirical science. It's a tool of persuasion that tries to propagate its values through rhetoric and the use of economic, political and physical force.
That's quite close to the usual definition of a priesthood. It's only "amoral" in the sense that the ethics of the priesthood are quite alien to those of many adult humans.
This is what I'm saying: only if you think unfettered free marketeering is an unalloyed good would you propose that the antithetical position is that the market is evil.
If I say I want restrictions on the market so that our planet is still liveable in 2100, I am not saying the market is evil. I'm merely stating my moral (in that there is a value judgment) position in contrast to the "free" market moral position. If I say that unfettered markets lead to evil, I'm merely contending with that value judgment, not with whether there should be a market in general. There's an incredible amount of space between a rampant libertarian market and Communism. It's childish to pretend otherwise.
>I'm not clear how that would be different to the top-down economy and moral pretensions of bankers and investors that we have now.
To be fair, the economy is both so heavily regulated and so stagnated that it barely counts as a free market anymore.
Part of the problem is that nobody is willing to fight the "priests", as you put it. Why try to beat those massive banks? They have far too much money for you to ever defeat them. But having that much money and nobody to make them work for it is precisely why we need new banks to compete with the old ones.
I disagree. Amorality in an economic system encourages things like Martin Shkreli, and ignoring externalities like pollution, making life worse for everyone else for the benefit of a few.
"Making life worse for everyone else for the benefit of a few" is the inevitable result of any policy that places upper bounds on consumer preferences and transaction opportunities in a market. It is the imposition of a morality that leads to fragility, since the morality in question is almost invariably in favor of the statesman and the businessman.
I'm going to repeat the comment I made on the website here.
I think the history of technology is this: grand visions that are publicly funded (computers, the Internet, etc.) have since been privatized and destroyed by the market.
I don't think you should compare Smalltalk vs Linux, but Alan Kay's Desktop GUI and Dynabook to today's UIs and the iPad.
It isn't a history of competing ideas in which the market chose the correct one. It is a history of grand ideas incubated in the public sector and then further distilled and misunderstood by the private sector.
The reality is the Market has little say in deciding between the grand idea and the distilled one. Consumers never had the opportunity to decide. Consumers were always given worse ideas to choose from to begin with.
Just like when you walk into a supermarket: you are given options that were decided for you, filtered through thousands of decisions made outside the market beforehand.
I think there can exist a perhaps rarer beast that values BOTH perfection and viability.
What this one does is keep their obsessive-compulsive eye on the perfect goal, knowing full well it may never be reached, then turn to making the process itself perfect -- where a perfect process is defined as this: perfectly viable while maintaining a minimal distance to the perfect goal.
Different folks complete the phrase in different ways. Some say that worse is better than complex. Others claim it's better than slow. Others still that it's better than incompatible. The point of the posted link is that worse is better than nothing. One can hardly argue with that.
> The point of the posted link is that worse is better than nothing.
No, that's inaccurate.
For instance, one example the article used to support WiB was that x86 (a CISC architecture) beat out RISC architectures. There were many RISC architectures: MIPS, SPARC, DEC Alpha, PA-RISC. So it's not a case of worse is better than nothing. x86 won against real competition, because it took advantage of evolutionary pressures.
The article contains several examples for the different interpretations. CISC vs. RISC is an example of "worse is better than incompatible". Unix vs. lisp-machine is an example of "worse is better than complex". But the bottom line, for this specific author, is that worse is better than nothing.
Nice essay. But framing the problem in terms of evolutionary vs. counter-evolutionary is not useful. Of course the evolutionary is going to win. It wins by definition. Evolution is the survival of the fittest. Being evolutionary means betting on the winner.
He beat me to it, as I'm mentally working on a similar essay. I've already covered backward compatibility and shipping pressure a lot in my posts on Schneier's blog elaborating on this. See Steve Lipner's Ethics of Perfection essay for a great take on the "ship first, fix later" mentality. He had previously done a high-assurance, secure VMM, so he had been on both sides.
On backward compatibility, you need to explore lock-in and network effects. These are the strongest drivers of the revenues of the biggest tech firms. Once you get the market with shipping, people will start building on top of and around your solution. They get stuck with it after they do that enough to make it hard to move. Familiarity with language or platform matters here, too. The economics become more monopolistic where you determine just enough additions to keep them from moving.
I agree with a commenter there that it needs an OpenVMS tie-in: a great example of Right Thing vs Worse is Better that won in the market... while their management was good. ;) It had a better security architecture, individual servers that went years without reboot, mainframe-like features (e.g. batch & transactions), cross-language development of apps, clustering in the 1980's, a more English-like command language, management tech, something like email... the whole kitchen sink, all integrated & consistent. The reason was that it was a company of engineers making what they themselves would like to use, then selling it to others to sustain it. They also mandated quality: they'd develop for a week, run tests over the weekend, fix problems for a week, and repeat. That's why sysadmins sometimes forgot how to reboot them. ;)
Here are a few others that fall under the Cathedral and Right Thing model that got great results with vastly fewer people than Worse is Better and/or were successful in the market. Burroughs and System/38 still exist as Unisys MCP and IBM i respectively. The Lilith/Oberon tradition of safe, easy-to-analyze, and still fast lives on in the Go language, designed to recreate it. There's nothing like Genera anymore, but Franz Allegro CL still has a consistent, do-about-anything experience. QNX deserves mention since it's a Cathedral counter to UNIX: they implemented a POSIX OS with real-time predictability, fault isolation via microkernel, self-healing capabilities, and it's still very fast. It's still sold commercially and was how the BlackBerry PlayBook smashed the iPad in comparisons I saw. They once put a whole desktop (w/ GUI & browser) on a floppy with it. Throw in the BeOS demo showing what its great concurrency architecture could do for desktops. Remember this was the mid-1990's, mentally compare to your Win95 (or Linux lol) experience, and let your jaw drop. Mac OS X, due to Nextstep, could probably be called a Cathedral or Right Thing that made it in the market, too.
So, more food for thought. The thing the long-term winners had in common is that (a) they grabbed a market, (b) they held it long enough for legacy code and a user base to build, (c) they incrementally added what people wanted, and (d) they stuck around due to the legacy effect from there. It seems to be the only proven model. It can be The Right Thing or Worse is Better so long as it has those components. So, we Right Thing lovers can keep trying to make the world look more Right. :)