
A Generation Lost in the Bazaar (2012) - akkartik
http://queue.acm.org/detail.cfm?id=2349257
======
danblick
Slightly OT, but I enjoyed the discussion about (physical building)
construction in the book "The Checklist Manifesto". In The Mythical Man Month,
Brooks advocates the use of something like the "master builder" model for
software. It turns out that in actual (physical building) construction, the
master builder model is no longer used, because the construction process has
become much too complex to be understood by a single person.

Instead, in construction, they use a system of checks to ensure that different
experts consult one another so that every decision is reviewed by a relevant
expert.

I suspect that the "chief architect" approach that Brooks advocates may have
become obsolete as well since the Mythical Man Month was written. Perhaps
software developers could learn something from the newer methods that replaced
the "master builder" model in construction.

~~~
DenisM
Fred Brooks himself wrote as much in 1986:

 _I still remember the jolt I felt in 1958 when I first heard a friend talk
about building a program, as opposed to writing one. In a flash he broadened
my whole view of the software process. The metaphor shift was powerful, and
accurate. Today we understand how like other building processes the
construction of software is, and we freely use other elements of the metaphor,
such as specifications, assembly of components, and scaffolding.

The building metaphor has outlived its usefulness. It is time to change again.
If, as I believe, the conceptual structures we construct today are too
complicated to be specified accurately in advance, and too complex to be built
faultlessly, then we must take a radically different approach.

Let us turn to nature and study complexity in living things, instead of just the
dead works of man. Here we find constructs whose complexities thrill us with
awe. The brain alone is intricate beyond mapping, powerful beyond imitation,
rich in diversity, self-protecting, and self-renewing. The secret is that it is
grown, not built._

~~~
erelde
That quote is beautiful, but it reminds me a lot of tales of hubris. Although I
do not believe in any god(s), I have always felt some truth in that mythical sin.

Can we grow designed software? Or, can we design truly growing software? I
guess it depends on the definition of growing you use, "scalable" ? "alive" ?

~~~
adrianN
Refactor, refactor, refactor. Growing things adapt to changing circumstances.
Software must adapt to changing requirements. Refactoring is the only way.

~~~
JanezStupar
This is so important. We all know beautifully "designed" APIs or libraries.

The secret is that the "elegant" API design you see is usually the n-th
iteration.

Growing code is much like growing trees. The tree grows itself; what it
requires is to be pruned and helped to reshape in a fashion that will
allow it to stay alive for a long time and bear fruit.

But one needs to constantly prune the damn thing.

~~~
user0394890238
I like the tree metaphor.

You start planting trees, maybe it's in an empty field, or maybe it's in an
old forest. At some point it takes root and multiplies. People help plant your
forest in unexpected places and it expands. At some point you try pruning and
controlling the trees. And at some point a forest fire destroys it making way
for a new forest to grow.

------
dsr_
Cathedrals are beautiful. They represent the vision of one person or a very
small group made real by the hard work of hundreds or thousands of skilled and
semi-skilled workers over dozens of years.

Cathedrals are not, generally speaking, profitable. They represent the
expenditure of lots of capital over a long period of time.

Bazaars don't cost much to start. You can start quite small and have a
functioning system that does useful things for people. They can grow quite
large, and when they grow too large it becomes difficult to find exactly what
you want without a really good map. But you can probably quickly find a bunch
of things that are more or less close to what you want.

Cathedrals are not easy or cheap to repair, but the investment is so large
that people usually prefer to repair them. A bazaar that doesn't work out
makes some local people sad, but they will go to another bazaar that is a
little less convenient for them, and perhaps do better there.

It's nice to have some cathedrals, because they feed the soul. But you need to
eat every day, so there will always be bazaars, and if you need to make a
choice, the bazaar is going to win unless you have a lot of resources stored
up to fall back on.

~~~
Clubber
A few points just for fun:

1\. Christianity, the group who builds Cathedrals, is probably the most
profitable organization in the history of mankind.

2\. Bazaars blow away when the wind picks up. If too many people show up,
things start falling down. Cathedrals, and their close cousins, castles, last
centuries.

3\. I think Bazaars have their place, when you really need to ramp up
something to show. We used to call that prototyping. If it gets past that, you
gotta build it right eventually.

~~~
hexane360
2\. Bazaars pop back up seconds after they blow away.
[http://youtu.be/MENjFkEAj9g](http://youtu.be/MENjFkEAj9g)

~~~
taneq
Yeah, but nothing's quite where you left it, your favourite curry stall is
gone for good, and you'll never be able to get a refund on that thing you
bought that stopped working the day before the storm.

------
vessenes
And yet... I think this critique gets weaker as time goes on.

The amount of productivity available to Mr. Kamp for free today is
conservatively double or triple that available in 1999. Databases, web
frameworks, scale know-how, IDEs, hosting platforms, the list goes on.

He harkens back, sadly, to an era in which codebases like Genuity Black Rocket
cost $100k in licensing, and ran on $30k/month Sun servers. Seriously.

Languages are faster, development times are shorter, and chips are WAY faster.
And, code can be pushed out for tinkering and innovation onto github for free.
Combine that with his estimate that we have 100x more people in computing, and
the result is a riot of creativity, crap, fascinating tech and everything
in between.

The bazaar is messy, but I'm not aware of any solid critique showing that
cathedrals can match the multiples-of-efficiency gains we get from legions of
self-interested, self-motivated coders.

~~~
_bpo
> Languages are faster, development times are shorter, and chips are WAY
> faster.

This is due to Moore's law, not the software design choices that the article
bemoans. Those $30k/month Sun servers were many times faster and cheaper than
the earlier machines they replaced as well.

~~~
vessenes
While Moore's law helps, languages are more expressive, safer, and more performant,
and have more batteries included, yielding a whole bunch of improvements.

We've had software and hardware gains, massive ones, and they compound.

~~~
SolarNet
> While Moore's law helps, languages are more expressive, safer, more
> performant and have more batteries included yielding a whole bunch of
> improvements.

I have to disagree. Compilers may have gotten a bit better at making faster
binaries, but languages (new languages, that is) are increasing in expressiveness
and safety, sure, yet very rarely in efficiency. Go and Rust are not faster than
C or C++, and likely never will be (for one thing, C has decades of lead time).
Go and Rust may be faster than C was 20 years ago, but that doesn't matter.

~~~
steveklabnik
If Rust is significantly slower than equivalent C or C++, it's a bug. Please
file them.

(And yes, sometimes, it's faster. Today. Not always! Usually they're the same
speed.)

~~~
SolarNet
My point is more like this chart [0]. C has so much lead time that Rust will
probably never be able to catch up. Come close? Sure. But C has decades of lead
time.

[0]
[http://www.viva64.com/media/images/content/b/0324_Criticizin...](http://www.viva64.com/media/images/content/b/0324_Criticizing-Rust-Language-Why-Cpp-Will-Never-Die/image1.png)

~~~
steveklabnik
That chart is extremely old. We are sometimes faster than C in the benchmark
games, with the exception of SIMD stuff due to it not being stable yet. (and,
it can fluctuate, depending on the specific compiler version, of course.)

For example, here's a screenshot I took a few months ago:
[http://imgur.com/a/Of6XF](http://imgur.com/a/Of6XF)

or today: [http://imgur.com/a/U4Xsi](http://imgur.com/a/U4Xsi)

Here's the link for the actual programs:
[http://benchmarksgame.alioth.debian.org/u64q/rust.html](http://benchmarksgame.alioth.debian.org/u64q/rust.html)

Today, we're faster than C in one program, very close in most, and behind
where SIMD matters.

    
    
> But C has decades of lead time.

Remember, Rust uses LLVM as a backend, which it shares with Clang. So all that
work that's gone into codegen for making C programs fast also applies to Rust,
and all of the work Apple and whomever else is working to improve it further,
Rust gets for free.

~~~
SolarNet
I mean, true; I'm playing devil's advocate here. I respect the Rust community
(of all the nu-C languages I respect it the most; I even did a poster on
0.1 of it for my programming languages class). I will be quite impressed if
they can pull off what has so far been an insurmountable task: beating an old-guard
language in general-purpose performance (Fortran, C, etc.), languages that
have every advantage except design foresight. And they are the most likely to be
capable of it, in my opinion. If they do it, it will be a great historical case
study in how to build a new programming language.

As an aside: as someone who has used LLVM to build a compiler, it doesn't
quite work that way. Yes, Rust has access to those gains, but it may not be
able to use them effectively (due to differing assumptions and strategies).

~~~
steveklabnik
Totally hear what you're saying on all counts :)

------
RachelF
Having seen the source for a non-Bazaar OS (Windows), I can say that they are
not built like cathedrals:

Most of the original developers have long since moved on, there are design
problems, various teams and managers rebuild or duplicate work, and management
sometimes imposes big changes just before release.

Software quality is hard to judge from the outside, and takes longer to build.

~~~
dfox
Windows (at least for userspace components) has significantly fewer self-
contained modules (which thus contain larger amounts of functionality) than
any modern unix-based system, and there are almost no modules that are widely
used and not supplied by Microsoft (excluding stuff that is ported from unix
and various third-party hardware APIs). One of the reasons for that is probably
that, before .NET, there was essentially no support in VS for building
applications as anything other than big self-contained .EXEs, and the current
solution/project mechanism leaves much to be desired.

~~~
asveikau
What the hell are you talking about?

Just take your Unix mentality and make a few substitutions:

* gcc -> cl.exe

* ar -> lib.exe

* ld -> link.exe

* make -> nmake.exe

* libfoo.so -> foo.dll

And there you have it, the world that "didn't exist before .NET" ... This is
crazy amounts of ironic because Windows had DLLs at a time when shared
libraries were not so much a thing on Unix. Not to mention things like COM
which are all about creating de-coupled components.

~~~
marcus_holmes
I still miss COM. It had its problems, sure, but it worked really well.

I haven't seen anything since that allowed such decoupled development.

~~~
mike_hearn
I think you have rose-tinted glasses on.

COM is/was a rats nest of confusing and frequently duplicated APIs with
insanely complicated rules that by the end really only Don Box understood.
CoMarshalInterThreadInterfaceInStream was one of the simpler ones, iirc. COM
attempted to abstract object language, location, thread safety, types, and
then the layers on top tried to add serialisation and document embedding
too, except that the separation wasn't really clean because document embedding
had come first.

Even just implementing IUnknown was riddled with sharp edges and the total
lack of any kind of tooling meant people frequently screwed it up:

[https://blogs.msdn.microsoft.com/oldnewthing/20040326-00/?p=...](https://blogs.msdn.microsoft.com/oldnewthing/20040326-00/?p=40033)

The modern equivalent of COM is the JVM and it works wildly better, even if
you look at the messy neglected bits (like serialisation and RPC).

~~~
asveikau
I think the good ideas from COM are: IUnknown, consistent error handling
through HRESULT, the coding style that emerges from being clear about method
inputs and outputs.

Some things done not as well as these core ideas: registration done globally
in the registry, anything to do with threading, serialization, IDispatch.

I think in many situations you can take lessons from the good parts and try to
avoid the bad.

I don't see how pointing out common bugs helps your argument though. You can
write bugs in any paradigm.

~~~
mike_hearn
Yes you can write bugs in any paradigm, but some designs are just empirically
worse than others when it comes to helping people write correct code.

IUnknown is a classic case of something that _looks_ simple but in fact a
correct implementation is not at all trivial, yet COM developers were expected
to get it right by hand again and again. COM itself didn't help with it at
all, so the ecosystem was very dependent on IDE generated code and
(eventually) ATL and other standard libraries.

None of the things you highlight were good ideas, in my view, although
probably the best you can do in C.

------
phkamp
(Author of the essay here)

It is discussions like this which make me truly admire Douglas Adams for his
insights and ability to express them.

For instance, when I read through the debate here, I can't help noticing how
many of the arguments are really variations of "It's a bypass! You've got to
build bypasses! Not really any alternative."

~~~
isuckatcoding
For me the highlight was:

'That is the sorry reality of the bazaar Raymond praised in his book: a pile
of old festering hacks, endlessly copied and pasted by a clueless generation
of IT "professionals" who wouldn't recognize sound IT architecture if you hit
them over the head with it.'

I was hoping for some kind of expansion or attempt at a solution here (which
of course would be non-trivial).

~~~
phkamp
The (only) solution is for people to care about quality in computing.

There's still too much money to be made on kludges for that to happen.

~~~
pjc50
.. and no money to be made on quality.

It's Akerlof's "Market For Lemons" writ large. Users can't assess the quality
of software before they buy it and sink ages of their own time into learning
it. Often users can't assess quality problems even _after_ they've bought it.
So the market isn't going to reward quality.

(The original paper was about cars; now that we have software in our cars, the
problem is twice as bad. VW 'defeat devices' and Toyota 'unintended
acceleration' passim.)

------
amasad
I've been thinking a lot about this in the context of software development
tools. It is now just expected that IDEs, compilers, tooling etc are free and
OSS. On the one hand this enables bottom-up innovation and shorter development
cycles. On the other hand setting up a development environment is a royal pain
in the butt. And a big turnoff for newbies -- they begin to think that
programming is some sort of IT job about installing and troubleshooting
software. Even when you manage to set everything up, there is a constant
maintenance cost as you update software and as new things come out.

At the very least, I would love to see companies created around popular open
source tools and verticals to create designed end-to-end experiences.
Download, double-click, start coding, and see something on the screen.

~~~
douche
There are non-terrible development stacks. Some of them are even free.
Unfortunately too much of it is endless turtles-all-the-way-down yak shaving
marathons, like in the JS world. I forget whether I'm supposed to grunt, gulp,
babble, or barf.

You just have to get out of the churn-for-churn's-sake cesspools. There are
high-quality, stable software stacks out there, where the Cambrian explosions
and rediscovery of ideas from a generation ago have already passed.

~~~
qwertyuiop924
>Unfortunately too much of it is endless turtles-all-the-way-down yak shaving
marathons, like in the JS world. I forget whether I'm supposed to grunt, gulp,
babble, or barf.

I don't use any of 'em. If you can afford to give the finger to those not
running in a near-POSIX environment, you can just use makefiles or npm
scripts: write your code, and run shell scripts to build it, the way God,
Doug, Dennis, Brian, and Ken intended.

As for good dev environments, I will not leave my beloved emacs (C-x C-e in
geiser-mode means that you can run your software as you write it, and I love
it: Most dynamic languages have something similar), but that would intimidate
newbies. Gedit and a shell is probably the best environment to start them
with: It's about as simple as you get, and every developer is expected to have
a rudimentary knowledge of shell, so best to start early.

------
haddr
It's a very nice read with many good points, but any person with some
experience in IT projects could argue with it. The author is taking one side
without any self-criticism.

It is true that configure scripts are probably doing some useless things
("31,085 lines of configure for libtool still check if <sys/stat.h> and
<stdlib.h> exist, even though the Unixen, which lacked them, had neither
sufficient memory to execute libtool nor disks big enough for its 16-MB source
code", etc.). But then what is the alternative? Every programmer who wants to
release some software writing a configure module each time? This is called
code reuse, and yes, it's not perfect, but it saves time: by not reinventing the
wheel again and again, by reusing something that is stable and has been around
for some time. Probably such a thing over-generalizes across many architectures
and does useless work, but then again, who cares about an extra 5-10 seconds of
the "configure" command when you are covered for all those strange
corner cases it already handles?
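
(For concreteness, each of those probes boils down to roughly this; a minimal
sketch in Python rather than autoconf's m4/shell, and the "cc" command name is
an assumption:)

    
    
      import os
      import subprocess
      import tempfile
      
      def have_header(name, cc="cc"):
          # Compile a tiny program that includes the header; a zero exit
          # status means the header exists and is usable.
          src = "#include <%s>\nint main(void) { return 0; }\n" % name
          with tempfile.TemporaryDirectory() as tmp:
              c_file = os.path.join(tmp, "conftest.c")
              with open(c_file, "w") as f:
                  f.write(src)
              result = subprocess.run(
                  [cc, "-c", c_file, "-o", os.path.join(tmp, "conftest.o")],
                  capture_output=True)
              return result.returncode == 0
      
      for header in ("sys/stat.h", "stdlib.h"):
          print(header, "yes" if have_header(header) else "no")
    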

~~~
x0x0
Insane complexity in build systems just makes your life sad. Trust me, I know
-- I write java for a living. We have ivy, maven, ant, gradle, sbt, leiningen,
and I'm sure a few more.

~~~
d_burfoot
Hey, I write Java for a living and my build script is just a Python program
that marshals a javac call.
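
Roughly along these lines (a minimal sketch; the source layout, output
directory, and classpath entry are made up, and the ":" separator assumes a
Unix-style classpath):

    
    
      # Minimal sketch of a "build script" that just marshals a javac call.
      # Paths and the classpath entry below are placeholders, not a real project.
      import subprocess
      import sys
      from pathlib import Path
      
      SRC_DIR = Path("src")
      OUT_DIR = Path("build/classes")
      CLASSPATH = ["lib/somedep.jar"]  # hypothetical dependency
      
      def build():
          OUT_DIR.mkdir(parents=True, exist_ok=True)
          sources = [str(p) for p in SRC_DIR.rglob("*.java")]
          cmd = ["javac", "-cp", ":".join(CLASSPATH), "-d", str(OUT_DIR)] + sources
          return subprocess.call(cmd)
      
      if __name__ == "__main__":
          sys.exit(build())
    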

~~~
qwertyuiop924
I write scheme not for a living. My build script is a scheme file or a
makefile, depending on the project.

The build complexity is one of the reasons I stay away from Java: If these
people think they need XML for builds, what other needlessly complex horrors
have they perpetrated? And that sort of thing.

Do you have your buildscript template on github?

~~~
quantumhobbit
I'm beginning to think that build scripts should be written in the language
they are building, or a more expressive language. So many seem to go in the
other direction.

~~~
ashitlerferad
I disagree. We need declarative build systems instead so building is auditable
and doesn't require arbitrary code execution.

~~~
qwertyuiop924
You actually NEED arbitrary code execution during builds. Allow me to
explain...

Let's say we have a make format called dmake. It invokes $CC with the
specified arguments for each file, and links them together into a
binary/so/whatever, putting it into the build directory and cleaning
artifacts. Okay.

Now say that you start a new project in rust. Well, crap, dmake doesn't work.
You have to use rdmake, which is built by different people, and uses a more
elegant syntax - which you don't know.

Then you write Haskell, and have to use hdmake - which of course is written as
a haskell program, using a fancy monad you don't know, and python has to use
pydmake, and ruby has to use rbdmake, and scheme has to use sdmake, and lisp
has to use ldmake, and asm has to use 60 different dmakes, depending on which
asm you're using.

Instead, we all use make. Make allows for arbitrary code to be executed, so no
matter what programming environment you use, you can use a familiar build tool
that everybody knows. Sure, java has Ant, Jelly, Gradle and god knows what
else, and node has $NODE_BUILD_SYSTEM_OF_THE_WEEK, but even there, you can
still use make.

That's the power of generic tools.

~~~
nickpsecurity
You haven't countered the parent's point at all. You could've just as easily said
the common subset of SQL could be implemented extremely differently in SQL
Server, Oracle, Postgres, etc. Therefore, declarative SQL has no advantages
over imperative C APIs for database engines. Funny stuff.

Let's try it then. The declarative build system has a formal spec with types,
files, modules, ways of describing their connections, platform-specific
definitions, and so on. Enough to cover whatever systems while also being
decidable during analysis. There's also a defined ordering of operations on
these things kind of like how Prolog has unification or old expert systems had
RETE. This spec could even be implemented in a reference implementation in a
high-level language & test suite. Then, each implementation you mention, from
rdmake to hdmake, is coded and tested against that specification for
functional equivalence. We now have a simple DSL for builds that checks them
for many errors and automagically handles them on any platform. Might even
include versioning with rollback in case anything breaks due to inevitable
problems. A higher-assurance version of something like this:

[https://nixos.org/nixos/about.html](https://nixos.org/nixos/about.html)
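
To give the flavor of the idea in miniature, here's a toy sketch (made-up
target and tool names; Python just standing in for a reference implementation).
The spec is pure data; the ordering and the error checking (unknown targets,
cycles) fall out of analysis rather than arbitrary code:

    
    
      # Toy declarative build description: targets, their dependencies,
      # and the tool that produces them. Everything here is made up.
      SPEC = {
          "app":    {"deps": ["core.o", "io.o"], "tool": "link"},
          "core.o": {"deps": ["core.c"],         "tool": "cc"},
          "io.o":   {"deps": ["io.c"],           "tool": "cc"},
          "core.c": {"deps": [],                 "tool": None},  # source file
          "io.c":   {"deps": [],                 "tool": None},
      }
      
      def build_order(spec, goal):
          """Check the spec and return a dependency-respecting build order."""
          order, seen, in_progress = [], set(), set()
          def visit(t):
              if t in seen:
                  return
              if t in in_progress:
                  raise ValueError("dependency cycle at " + t)
              if t not in spec:
                  raise ValueError("unknown target " + t)
              in_progress.add(t)
              for d in spec[t]["deps"]:
                  visit(d)
              in_progress.discard(t)
              seen.add(t)
              order.append(t)
          visit(goal)
          return order
      
      # Prints ['core.c', 'core.o', 'io.c', 'io.o', 'app']
      print(build_order(SPEC, "app"))
    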

Instead, we all use make. Make allows for arbitrary code and configurations to
be executed, so no matter what configuration problems you have, we can all use
a familiar build tool that everybody knows. That's the power of generic,
unsafe tools following Worse is Better approach. Gives us great threads like
this. :)

~~~
qwertyuiop924
From the perspective of security, make is not great, but there's always a more
complicated build, requiring either generic tooling, or very complex specific
tooling. This is why the JS ecosystem is always re-inventing the wheel. If you
design your build tool around one abstraction, there will always be something
that doesn't fit. What will happen if we build a tool akin to the one I
described is that it will grow feature upon feature, until it's a nightmarish
mess that nobody completely understands.

>You could've just as easily said the common subset of SQL could be
implemented extremely differently in SQL Server, Oracle, Postgres, etc.
Therefore, declarative SQL has no advantages over imperative C APIs for
database engines. Funny stuff.

No, that's not my point, my point is that a build tool that meets parent's
requirements would necessarily be non-generic, and that such a tool would
suffer as a result.

>Instead, we all use make. Make allows for arbitrary code and configurations
to be executed, so no matter what configuration problems you have, we can all
use a familiar build tool that everybody knows. That's the power of generic,
unsafe tools following Worse is Better approach. Gives us great threads like
this. :)

Worse is Better has nothing to do with this. Really. Make is very Worse is
Better in its implementation, but the idea of generic vs. non-generic build
systems, which is what we're discussing, is entirely orthogonal to Worse is
Better. If you disagree, I'd recommend rereading Gabriel's paper (that being
_Lisp, The Good News, The Bad News, And How to Win Big_, for the
uninitiated). I'll never say that I'm 100% sure that I'm right, but I just
reread it, and I'm pretty sure.

~~~
nickpsecurity
"No, that's not my point, my point is that a build tool that meets parent's
requirements would necessarily be non-generic, and that such a tool would
suffer as a result."

A build system is essentially supposed to take a list of things, check
dependencies, do any platform-specific substitutions, build them in a certain
order with specific tools, and output the result. Declarative languages handle
more complicated things than that. Here are some examples:

[https://cs.nyu.edu/~soule/DQE_pt1.pdf](https://cs.nyu.edu/~soule/DQE_pt1.pdf)

I also already listed one (Nix) that handles a Linux distro. So, it's not
theory so much as a question of how much more remains to be solved/improved and
whether methods like those in the link can cover it. What specific problems in
building applications do you think an imperative approach can handle that
something like Nix or the stuff in the PDF can't?

~~~
qwertyuiop924
...Nix actually uses SHELL for builds. Just like make. It's fully generic.

[http://nixos.org/nix/manual/#sec-build-
script](http://nixos.org/nix/manual/#sec-build-script)

~~~
nickpsecurity
Didn't know that. Interesting. It looks like an execution detail. Something
you could do with any imperative function but why not use what's there for
this simple action. Nix also manages the executions of those to integrate it
with their overall approach. Makes practical sense.

"It's fully generic."

It might help if you define what you mean by "generic." You keep using that
word. I believe declarative models handle... generic... builds, given you can
describe just about any of them with a suitable language. I think imperative
models also handle them. To me, it's irrelevant: the issue is that declarative
has benefits & can work to replace existing build systems.

So, what's your definition of generic here? Why do declarative models not have
it in this domain? And what else do declarative models w/ imperative
plugins/IO-functions not have for building apps that a full, imperative model
(incl. make) does better? Get to specific objections so I can decide whether to
drop the declarative model for build systems or find answers/improvements to
the stated deficiencies.

~~~
qwertyuiop924
That wasn't what the original post by ashitlerferad was calling for. I have
no problem with generic declarative-model build systems that can be used for
anything. However, the original call was for build systems which don't require
_arbitrary code execution._ A generic build system must deal with many
different tools and compilers, and thus REQUIRES arbitrary code execution:
somewhere, there's got to be a piece of code telling the system how to build
each file. And if you don't build that into the build system proper, you wind
up either integrating everything into core, or adding an unwieldy plugin
architecture and winding up like grunt/gulp and all the other node build
systems. Or you could just allow for arbitrary code execution, and dodge the
problem altogether. This is possible in a declarative system, but it's a lot
harder to do, and means at least part of your system can't be purely declarative.

~~~
nickpsecurity
It seems some kind of arbitrary execution is necessary. I decided to come back
to the problem out of curiosity to see if I could push that toward declarative
or logic to gain its benefits. This isn't another argument, so to speak, so much
as a brainstorm pushing the envelope here. Could speculate all day, but I came up
with a cheat: it would be true if anyone had replaced make or other
imperative/arbitrary pieces with Prolog/HOL equivalents. Vast majority of
effort outside I/O calls & runtime itself would be declarative. Found these:

[http://www.cs.vu.nl//~kielmann/papers/THD-
SP-1991-04.pdf](http://www.cs.vu.nl//~kielmann/papers/THD-SP-1991-04.pdf)

[https://github.com/cmungall/plmake](https://github.com/cmungall/plmake)

Add to that Myreen et al's work extracting provers, machine code and hardware
from HOL specs + FLINT team doing formal verification of OS-stuff (incl
interrupts & I/O) + seL4/Verisoft doing kernels/OS's to find declarative,
logic part could go from Nix-style tool down to logic-style make down to
reactive kernel, drivers, machine code, and CPU itself. Only thing doing
arbitrary execution, as opposed to arbitrary specs/logic, in such a model is
what runs first tool extracting the CPU handed off to fab (ignoring non-
digital components or PCB). Everything else done in logic with checks done
automatically, configs/actions/code generated deterministically from
declarative input, and final values extracted to checked
data/code/transistors.

Hows that? Am I getting closer to replacing arbitrary make's? ;)

~~~
qwertyuiop924
...I'm not sure I totally understand. Here's how I'd solve the problem:

Each filetype is accepted by a program. That program is what we'll want to use
to compile or otherwise munge that file. So, in a file somewhere in the build,
we put:

    
    
      *.c:$CC %f %a:-Wall
      *.o:$CC %f %a:-Wall
    

And so on. The first field is a glob to match on filetype, %f is filename, %a
is args, and the third field is default args, added to every call.

The actual DMakefile looks like this:

    
    
      foo:foo.c:-o foo
      bar.o:bar.c:-c
      baz.o:baz.c:-c
      quux:bar.o baz.o:-o quux
      all:foo quux
    

Target all is run if no target is specified. The first field is the target
name. The second field is list of files/targets of the same type, to be
provided to compiler on run. It is assumed the target and its resultant file
have the same name. The last field is a list of additional args to pass to the
compiler.

This is something I came up with on the spot, and there are certainly holes in it,
but something like that could declarativize the build process. However, this
doesn't cover things like cleaning the build environment. Although this could
be achieved by removing the resultant files of all targets, which could be
determined automatically...
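
For what it's worth, a toy driver could resolve that DMakefile into concrete
compiler calls roughly like this (a rough sketch in Python just for
illustration; the "cc" default and the aggregate-target handling are my
assumptions, not part of the format):

    
    
      # Toy driver for the DMakefile format sketched above: resolve each
      # target into a concrete compiler invocation and print it.
      from fnmatch import fnmatch
      import os
      
      RULES = [
          "*.c:$CC %f %a:-Wall",
          "*.o:$CC %f %a:-Wall",
      ]
      
      TARGETS = [
          "foo:foo.c:-o foo",
          "bar.o:bar.c:-c",
          "baz.o:baz.c:-c",
          "quux:bar.o baz.o:-o quux",
          "all:foo quux",
      ]
      
      CC = os.environ.get("CC", "cc")
      
      def find_rule(filename):
          # Match the first input file against each glob to pick a rule.
          for line in RULES:
              pattern, template, default_args = line.split(":")
              if fnmatch(filename, pattern):
                  return template, default_args
          return None, None
      
      def command_for(target_line):
          parts = target_line.split(":")
          name, inputs = parts[0], parts[1].split()
          extra = parts[2] if len(parts) > 2 else ""
          template, default_args = find_rule(inputs[0])
          if template is None:
              # No rule matched: treat it as an aggregate of other targets.
              return "# %s: aggregate target over %s" % (name, " ".join(inputs))
          cmd = template.replace("$CC", CC)
          cmd = cmd.replace("%f", " ".join(inputs))
          cmd = cmd.replace("%a", (default_args + " " + extra).strip())
          return cmd
      
      for line in TARGETS:
          print(command_for(line))
    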

~~~
nickpsecurity
There you go! Nice thought experiment. Looks straightforward. Also,
individual pieces, as far as parsing goes, could be auto-generated.

As far as what I was doing, I was just showing they'd done logical, correct-by-
construction, generated code for everything in the stack up to the OS, plus
someone had a Prolog make. That meant just about the whole thing could be done
declaratively and/or correct-by-construction, with the result extracted with
basically no handwritten or arbitrary code. That's the theory based on worked
examples. A clean integration obviously doesn't exist. The Prolog make looked
relatively easy, though. The Mercury language would make it even easier/safer.

------
csours
Recurrent calamities, fires and earthquakes hit the Grand Bazaar. The first
fire occurred in 1515; another in 1548.[9] Other fires ravaged the complex in
1588, 1618 (when the Bit Pazari was destroyed), 1645, 1652, 1658, 1660 (on
that occasion the whole city was devastated), 1687, 1688 (great damage
occurred to the Uzun Carsi) 1695, 1701.[14] The fire of 1701 was particularly
fierce, forcing in 1730-31 Grand Vizier Nevşehirli Damad Ibrahim Pasha to
rebuild several parts of the complex. In 1738 the Kizlar Aĝasi Beşir Ağa
endowed the Fountain (still existing) near Mercan Kapi.

[https://en.wikipedia.org/wiki/Grand_Bazaar,_Istanbul#History](https://en.wikipedia.org/wiki/Grand_Bazaar,_Istanbul#History)

~~~
spitfire
That's a good idea; maybe the Bazaar needs a good fire. Fires help forests
stay healthy, returning a lot of biomass for new healthy forests to grow.

Maybe the ecosystem needs a good fire to clear out the diseased old growth?

~~~
kardos
> Maybe the ecosystem needs a good fire to clear out the diseased old growth?

Ehhh, the analogy works better in terms of competing species. If a stronger
invasive species rolls in and takes over due to raw excellence, we can see the
eviction of the old cruft. For example, once the dust is finally settled in a
few more years, we may come to see systemd as such a force.

------
jesstaa
"the ruins of the beautiful cathedral of Unix, deservedly famous for its
simplicity of design, its economy of features, and its elegance of execution"

Unix is the bazaar enabler: an OS so simple as to be mostly useless
without third-party additions, but also without enough of an overall design to
give third parties much of a guide on how things should fit together. The
mess that Unix enabled is also the reason we can't get away from it and move
on to better-architected systems.

------
lifeisstillgood
So a while back I had a JIRA ticket, to go add a feature to a pile of JS front
end. I did not understand the pile very well, but that's OK, neither did
anyone else, and I expected to add a large amount of tests and code to get the
job done.

But I like understanding the code, so I read and read and puzzled and fended
off "hurry up" comments.

And then I added two lines and it basically worked.

I am in favour of a slow code movement
([http://www.mikadosoftware.com/articles/slowcodemovement](http://www.mikadosoftware.com/articles/slowcodemovement))
which seems relevant here.

Even better was a guy I worked with, a physics PhD from central China, whose
daily stand-up reports mostly consisted of "I spent yesterday reading around
the subject; I shall spend today reading around the subject."

Love him, love the brass balls.

------
njharman
For any given software thing you can

1) pay for the cathedral

2) use the bazaar, possibly needing to seed it

3) do without

Since 1) is typically very expensive and lacks guarantees of suitability to
purpose and continued existence, the ROI is so massively negative few can even
try it. 3) often isn't an option.

The bazaar is an inevitability. No one has been mandating it for the past decades.
No laws enforce it. It occurs because it is the only available/possible option.

------
asragab
I love how this article makes its reappearance on HN every two years...

[https://news.ycombinator.com/item?id=4407188](https://news.ycombinator.com/item?id=4407188)
[https://news.ycombinator.com/item?id=8812724](https://news.ycombinator.com/item?id=8812724)

~~~
Annatar
And judging by how much garbage software there is out there, written by people
without a clue who think they have one, it needs to reappear a hell of a lot
more!

------
peterwwillis
He's complaining that the Bazaar is not efficient, and that it's a pile of
hacks.

No. Shit.

Yes, it's an inefficient pile of garbage built one heap on top of another.

But it's also _incredibly useful_.

    
    
      Modularity and code reuse are, of course, A Good Thing. Even in the most trivially
      simple case, however, the CS/IT dogma of code reuse is totally foreign in the
      bazaar: the software in the FreeBSD ports collection contains at least 1,342 copied
      and pasted cryptographic algorithms.
      
      If that resistance/ignorance of code reuse had resulted in self-contained and independent
      packages of software, the price of the code duplication might actually have been a
      good tradeoff for ease of package management. But that was not the case: the packages
      form a tangled web of haphazard dependencies that results in much code duplication
      and waste.
    

Code duplication, waste, and a tangled web of haphazard dependencies are only
a bad thing _in theory_. PHK talks about "CS/IT dogma" like it's the rule
rather than the exception. But the CS/IT dogma, like most other dogma, always
runs into exceptions when put into practice.

When you have many, many independent teams of people developing software, you
may want to implement something someone else has already done. As a member of
your team, you have two choices: 1) re-implement the original as part of your
own cathedral, or 2) put some glue around the existing solution.

Sometimes, 1) will be the best choice. But sometimes - and this is usually the
case with Unix tools - 2) works _just as well_ , and you get the benefit of
only having to write and maintain glue, rather than a constantly morphing
feature implementation. If the original was done well, and your glue supports
it well, you get a feature without paying for it.

The foundation of Unix is a strong Cathedral. But this foundation is what
makes the Bazaar fit so well around it: it provides something strong to glue
other shit to. As long as you have someone to continue gluing shit together,
you can keep adding pieces ad infinitum, and what you lose in the end is
essentially disk space and compilation time. I'm willing to accept that.

------
BadassFractal
It's why I use OSX at home for anything non-work-related: I want an integrated
experience that just works; I don't want to be the IT guy at home. No messing
around with video card drivers, no figuring out why my printer chops off
pieces of the page, no constant wifi or bluetooth crapouts, and there's support
for the Adobe suite and popular digital audio workstation software.

At work I'm happy to use Linux, but I use it for things it's great at, such as
simulating our prod environment, coding, etc. At home I just don't want to
think after a long-ass day; I want to enjoy computing.

~~~
linuxhansl
Printer support (as an example) worked much better with Linux (Fedora) than on my
MacBook. On the Mac I had to know the IP address, download a driver myself,
and fiddle around with it. On Fedora, it said it had found the printer, installed
the driver, and asked whether I wanted to print a test page. The test page printed fine.

That was a while ago, and things may have changed. But it's not the only
example. With OSX I am boxed into what Apple thinks is best for me. With Linux
I can do whatever I like, while most things also just work.

None of the issues you mention are an issue with Linux anymore. Wifi,
Bluetooth, printers, sound, video, etc, all just work, most (IMHO) more
intuitively than with OSX. Only for faster 3D graphics did I have to install the
vendor driver from a non-free Fedora repository.

~~~
Too
> None of the issues you mention are an issue with Linux anymore. Wifi,
> Bluetooth, printers, sound, video, etc, all just work,

If you are lucky, yes... With my last 2 computers and Ubuntu, most of the
devices listed above have required me to google for solutions and apply weird
hacks manually. Sound has been quite reliable, but graphics and wifi drivers
are still a joke compared to Windows or OSX. Even basic things such as
plugging in an external monitor have caused the computer to totally freeze.

------
smegel
> The bazaar meme advocated by Raymond, "Just hack it," as opposed to the
> carefully designed cathedrals

More like carefully designed procedures (not the software kind), meetings, and
suit ties.

The waterfall process can be as technically deficient as you want it to be,
just like agile.

------
neo2006
I like to think of software as a living entity that grows, evolves, gets sick, and
eventually dies. There is nothing wrong with refactoring or dying code; it's
part of the natural process of engineering software. Sometimes we take the
decision not to refactor; that's also OK, as long as you know that this will
cost you more when you do it later, or the code will end up dying. Your code
is like a pet: its love depends on how much effort you put into taking care of it.
Sometimes it's sick by design, and whatever effort you put in won't save it.
So now, if we talk about how to design software that will live long and
prosper, you would need to guess/predict the future, have an idea of all the
forms your software will evolve into and all the intended and unintended
uses possible for it, and avoid all mistakes in your design; then you would
have a warranty. Otherwise, you need to design it the simplest way and make it
evolve when needed, by refactoring, fixing and patching...

------
veddox
It sounds like a lot of stuff that the author complains about falls under
Sturgeon's Law. To paraphrase him: "Sure, 90% of FOSS is crud. That's because
90% of everything is crud."

Not to lessen the many valid criticisms that the article contains, but I think
the low average quality of code can be adequately explained with the above
law.

------
qwertyuiop924
...And if you think the Bazaar precludes somebody making decisions about
quality, you haven't paid attention to how Linux development works.

Except epoll. And vsyscalls. And...

You know, maybe Linux isn't well managed.

~~~
Annatar
Shhh! Don't criticize Linux even if it's true, you'll get downvoted faster
than you can bat an eyelash! That's _extremely unpopular_ here.

'Cause you know, man. Linux.

~~~
qwertyuiop924
Hey, I run Linux. I just think that it's worth acknowledging that it's got
some flaws. That's one of the reasons I listen to people like Bryan Cantrill. I don't
take what he and the BSD folks say at face value (if they're working on
competing projects, they obviously don't like Linux technically), but
sometimes they're right. Especially about epoll. Jesus.

Anyway, GP is +3. It got downvoted a ton, but it bounced back.

~~~
Annatar
Competing projects or not, Bryan is correct. The FreeBSD folks also have a
pretty good clue.

~~~
qwertyuiop924
Well, yeah. But I can't trust him out of hand.

Are you a Plan9 user, by any chance?

~~~
Annatar
Nope; Solaris / illumos / SmartOS.

~~~
qwertyuiop924
Ah. I think my plan9 user detector must have been confused by the air of
smugness, and the appearance of desiring Murray Hill purism, something plan9
users have that illumos users don't (SMF, for one, isn't Murray Hill at all in
style).

So long as you don't attack my beloved sexprs in your attack on XML, I don't
care.

~~~
Annatar
I like the fathers of UNIX, it's true; I think the way they think, and over
the course of the last 30 years of using computers, I've discovered that my
experience with computers matches their experience. I can see _why_ they came
up with the tenets they did. It all makes sense.

As for XML in SMF, that is a sad part of history: at the time, _Sun
Microsystems_ had the guy who invented XML (Tim Bray, was it?) on the payroll,
and suddenly, XML started forcefully infiltrating everywhere into UNIX,
whether it was appropriate or not. I don't know anyone who's happy about
having XML SMF manifests. All of us from the Solaris / illumos community would
rather we never speak of that embarrassment again, I think.

On LISP: I would love to use it, except getting it to run on Solaris / illumos
sucks ass, and when one does get it to run, it's only 32-bit, and it likes to
coredump a lot. And I'm not going back to GNU/Linux just to be able to run
LISP. Other than that, I find LISP cool. I think that reentrant functional
programming with no state machine is the way to go, and I like the fact that
one can run LISP both interpreted and compiled into a binary executable.

I've no idea what "sexprs" is.

~~~
qwertyuiop924
I actually wasn't talking about XML, I was talking about the general design of
SMF. Murray Hill purists, champions of worse is better, would call SMF's
architecture overly complicated and call for the use of BSD init, daemontools,
s6, or runit. As a linux user, I see SMF as the good bits of systemd (service
management, etc), without a lot of the bad bits (creeping scope, gradual
introduction of lock-in, kills your screen sessions for no reason, sinister
plans for world domination...). It's almost like Lennart has an instinct for
getting good ideas exactly wrong.

As for Lisp on Illumos, which Lisp did you try? For Common Lisp, CLISP, CMUCL,
and SBCL all claim to support Solaris. SBCL is the most popular of the three,
but its solaris binary is a little out of date, so you'll have to use it to
compile the latest sources. Instructions at
[http://www.sbcl.org/getting.html](http://www.sbcl.org/getting.html).

However, I am primarily a schemer, and if you're excited about lisp-1s,
hygienic macros, tail calls, call/cc, and case sensitivity, you may want to
look into scheme. Because of the minimally defined spec, every scheme is a
little different. Chicken scheme, my favorite implementation, claims to run on
x86 solaris, although it hasn't been tested on the latest versions. It may
still work. MIT scheme explicitly claims to run on any POSIX system, but it's
kinda dead. Guile will probably run, as GNU projects usually try to be
reasonably platform independent, but it's hard to tell. As both of the above are
GNU, you'll need GCC, but I assume that's already installed on your system.

Chibi Scheme, Gambit, and Chez Scheme are also worth a shot, although they
have less libraries than the above.

Finally, there's Racket, which isn't really a scheme so much as a new programming
language descended from Scheme (as I understand it, many of the Racket
implementers backed the ill-fated R6RS, but wanted to go even further, and got
sick of compromising with Scheme). I don't personally like it, but you may,
and it also claims POSIX.

If none of the above run at their latest version, which I very much doubt, you
could always try lx zones as a last resort.

sexprs is an abbreviation for s-expressions. They're what make up a lisp
program, and how lispers store data: Typically, in a lisp program, your config
file won't be a unixy .cfg, but instead a lisp program, which is much more
powerful. However, like XML, it's kinda hard to grab data with regex, although
unlike xml, it's trivial to parse.

~~~
Annatar
_As for Lisp on Illumos, which Lisp did you try?_

Now before I get into the details, let me just state for the record that I've
ported 200+ (if not more) freeware packages to Solaris, and I modify, compile
and link a lot, so this was not my first "walk in the park"...

First I tried CMUCL. And after some patching, I got it to work. But only
32-bit. And it would core dump a lot. Because the GCC compiler at the time
didn't support DWARF 2 (it seems to support it now, but I'd have to build GCC
again), compiling with -g was basically worthless for debugging, so I ditched
it, because I didn't feel like disassembling x86 assembler that day.

Then I tried CLISP, and that was a lost cause; it was so busted that it
wouldn't even build. The last thing I tried was Steel Bank Common LISP, SBCL.
Unfortunately, just like other LISPs, SBCL isn't self-hosting, and it
requires a working LISP to build itself. In my case, that would have been
CMUCL, except that I abandoned that a while ago when it became clear that it
would only work 32-bit and that it crashed. It's a shame, I really think that
ANSI Common LISP is the future.

 _However, I am primarily a schemer, and if you're excited about lisp-1s,
hygienic macros, tail calls, call/cc, and case sensitivity, you may want to
look into scheme._

My understanding is that Scheme is not 100% LISP compatible, but even worse,
that it runs as bytecode on a Java virtual machine. As soon as I see a
language running on a virtual machine of any kind, I'm done. That won't enter
my systems or my network. If I cannot compile it into straight machine code
binary executable, that's it, it's out of the window, and never to return. Not
on my watch! Is my finding correct, or is there a Scheme to ELF machine code
binary executable compiler out there?

 _If none of the above run at their latest version, which I very much doubt,
you could always try lx zones as a last resort._

I created an lx-branded zone for the first time a few days ago and was just
absolutely speechless with regards to how everything _just works_. The Linux
applications literally think they're running on Linux. The only things which
gave it away as not being Linux were the completely made-up kernel version and
/native, containing native illumos binaries that would be in /usr/{bin,sbin}
on illumos. It blew my mind.

~~~
qwertyuiop924
First off, does the x86 solaris binary for SBCL work? You can bootstrap off
that if it does.

>My understanding is that Scheme is not 100% LISP compatible, but even worse,
that it runs as bytecode on a Java virtual machine. As soon as I see a
language running on a virtual machine of any kind, I'm done. That won't enter
my systems or my network. If I cannot compile it into straight machine code
binary executable, that's it, it's out of the window, and never to return. Not
on my watch! Is my finding correct, or is there a Scheme to ELF machine code
binary executable compiler out there?

...ummm... wow. It's history time.

Lisp isn't a language or a standard. It's a family of languages. When you say
Lisp, you're talking about Common Lisp, which is one of the two popular lisp
standards today. The other is Scheme RnRS, with R5RS and R7RS being the most
widely implemented. Scheme isn't Common Lisp compatible, as it is a totally
different language, although still a Lisp. Some people believe that Scheme
shouldn't be regarded as a lisp, but those people are crazy.

Scheme prides itself on minimalism, originating in academia, and designed for
PL research and education, and so, much like POSIX, every implementation
extends it in a different direction. Some Schemes are suitable for Real Work,
others are not.

As scheme is a standard, it has been implemented for a variety of
architectures. While it does run on the JVM, that implementation isn't very
good. You're probably thinking of Clojure, which is JVM.

There are several schemes that compile to native code. Gambit and Chicken are
the most popular. While Guile uses ELF, it uses it as its bytecode format, and
while native compilation is planned, we probably won't get it for a good few
years. Chibi is bytecode, and will probably stay that way, although it has a
decent FFI. If you can live without native compilation, Guile, at least, is
worth looking at.

As for Chicken vs Gambit, it's pretty evenly stacked. I know Chicken better
than I know Gambit, but I'll try to give a good comparison:

    
    
      FOR CHICKEN:
      -Chicken uses Cheney-on-the-MTA compilation meaning that re-entrant continuations (think setcontext/setjmp but better) are no slower than any  function call.
      -Chicken has a really, really good library repository. For a scheme, anyway.
      -Chicken has fairly good POSIX integration
      -Previous versions of Chicken have definitely compiled on OpenSolaris.
      -Chicken has an almost ungodly helpful maintainer and community, with active mailinglists and IRC.
    
      AGAINST CHICKEN:
      -Chicken isn't reentrant
      -Chicken has no pthread support, only supporting fork(2) and pre-emptive coroutines.
      -Chicken has a really good compiler. You'll see why this is a mark against it in a minute.
    
      FOR GAMBIT:
      -Gambit has a compiler that generates really, really, really good native code.
      -Gambit is, I think, reentrant
      -Gambit has okay POSIX
      -Gambit has Termite, which is erlang-style coroutines, in addition to a Chicken-like system. Guile is the only scheme I know that has pthread support, and it's not in this comparison.
    
      AGAINST GAMBIT:
      -Gambit's continuation implementation is slower and more limited than Chicken's.
      -Even with BlackHole and SchemeSpheres, the two module sets it has, Gambit is behind Chicken in modules
      -Gambit's FFI is harder to use than Chicken's.
      -I don't know how well Gambit supports Solaris.

~~~
Annatar
_First off, does the x86 solaris binary for SBCL work? You can bootstrap off
that if it does._

One of them didn't work for either i86pc (that's what we call the x86 and
x86_64 platform on Solaris) or sparc, and my requirement is that it has to
build and run on both exactly the same, as 50% of my server park is
UltraSPARC, and 50% various forms of intel-based processors. But I'd have to
look at that again. _Something_ didn't work, or else I would be running SBCL
by now.

 _While Guile uses ELF, it uses it as its bytecode format, and while native
compilation is planned, we probably won't get it for a good few years. Chibi
is bytecode, and will probably stay that way, although it has a decent FFI._

Now see, _that_ is a severe step back _for me_: I had bytecode back in 1984,
except then the "virtual machine" was called "Commodore BASIC", and "bytecode"
was known as tokens, which is what they really are when you learn about AST
parsers and trees in computer science at the university. So when I see
"bytecode", that's a throwback. And a very bad, bad throwback, masked with
pure brute force in terms of massive amounts of memory and processing power.
It was a very bad and inefficient, slow solution then, and even with all this
processing power and memory, it's a very bad and inefficient, slow solution
now. I have never, and will never tolerate that. No amount of programming
pleasantness and syntactical sugar will change that.

The way forward is clear to me now: ANSI Common LISP.

~~~
qwertyuiop924
...Did you miss the 1/2 of my post where I talked about two of the best
native-code compilers scheme had to offer? Gambit and Chicken are both
fantastic, and compile to C, no bytecode anywhere.

Also, your understanding of bytecode is deeply, _deeply_ wrong. It's accurate
to an extent, true, but Bytecode isn't the same thing as an interpreter, and
can be _much_ faster.

But even if bytecode is unacceptable to you, as it does indeed sacrifice
performance and I won't deny that, I just gave you 2 native code compilers.

Did you just not notice that, or is there a reason you found them
unacceptable?

~~~
Annatar
I managed to miss it! FAIL.

Based on what you wrote, Chicken is out - I want reentrant.

...

I got Gambit to compile, simple expressions appear to work - but I cannot
figure out how to compile them to a binary executable.

    
    
      cat t.scheme
      (define Hello "Hello")
      gsc t.scheme
      file t.o1 
      t.o1: ELF 32-bit LSB dynamic lib 80386 Version 1, dynamically linked, not stripped, no debugging information available
      ./t.o1
      Segmentation fault (core dumped)
    

...so that needs more research. I also haven't tested on sparc.

~~~
qwertyuiop924
First off, the standard extension for scheme files is .scm

Not really relevant, but I thought you'd want to know.

Since Chicken's non-reentrance only matters if you're passing callbacks into C
code, I'll assume you're doing that. It should be noted that Chicken can sort-
of have callbacks, so it may be acceptable for your use-case. Either way, be
careful to return from C calls in the same order you entered them: not doing
so will crash your program. It might seem hard not to do this, but call/cc
makes it very easy, and having another thread return into C causes the same
problem, since scheme execution threads are pre-emptive coroutines, and are
thus one thread from C's perspective.

The problem with the code above is that by default, Gambit generates a shared
library for use in other compiled scheme code or a repl. To get an executable,
you must specify -exe:

    
    
      gsc -exe t.scm
    

Note also that Gambit's configure script option page
([http://gambitscheme.org/wiki/index.php/Configure_script_opti...](http://gambitscheme.org/wiki/index.php/Configure_script_options))
provides some handy options to feed to the configure script to speed up the
interpreter, notes the recommended use of GCC (although GCC is not
necessary!), and also specifies how to use Sun's compiler to generate 64-bit
code on SPARC architectures, if you want to do that. If you're using GCC, this
shouldn't matter.

Also, I'd recommend reading both gambit's manual and the R5RS standard.
Knowing the standard and your implementation's deviations and extensions to it
is important. Be sure to read the wiki as well, as gambit's manual is not as
complete as the wiki.

schemers.org and the scheme wiki have a lot of aids for learning scheme,
especially some of the stranger parts, like call/cc.

The Scheme Programming Language _Third edition_ (not the 4th edition, which
covers the much-maligned R6RS), and The Little/Seasoned/Reasoned Schemer
series can help you _think_ like a Schemer. Scheme is thoroughly multi-
paradigm, so this is important.

Finally, there's a lot of gambit libraries all around. The Black Hole and
SchemeSpheres module systems contain most of them, although some of
SchemeSpheres is linux-specific. You may also take advantage of SLIB, one of
the few portable scheme libraries, which contains a wide variety of handy
utilities, and SXML, for processing and outputting HTML/XML without the hell.

Whatever you do, you'll probably have to write a lot of FFI code, because the
library you want probably isn't wrapped. It's a little better in other
schemes, mostly Chicken, but not really that much. It's just the price you pay
for using a less popular language.

------
ktRolster
The article says that to have quality, someone needs to be responsible for it.

Of course, the "Bazaar" included the project named "Linux," which does have a
single person in charge. So "single person in charge (or not in charge)" isn't
really what the bazaar is about.

More generally, the article laments the lack of quality in modern software. I
don't think that's a problem of Cathedral vs Bazaar, though, since software
designed top down with authority telling everyone what to do can be low
quality (indeed, you can see this sort of thing happening in many scrum style
companies).

Rather, low-quality software is a reflection of the skill of the people who
built it.

------
davidw
> compiling even my Spartan work environment from source code takes a full day

Maybe compiling everything on every computer is not the best approach...

~~~
phkamp
For everybody? No.

But some minority has to take the open SOURCE seriously, if we want it to
actually work.

I'm part of that minority.

~~~
emmelaich
I don't understand that point of view at all. If you control the dependencies,
you get a predictable result.*

Why do it again and again? To be sure to be sure?

* Debian's work on reproducible builds will help here.

------
duncan_bayne
ESR agrees with the criticism of Autotools:
[http://esr.ibiblio.org/?p=1877](http://esr.ibiblio.org/?p=1877)

"Autotools must die"

------
Jtsummers
> This is probably also why libtool's configure probes no fewer than 26
> different names for the Fortran compiler my system does not have, and then
> spends another 26 tests to find out if each of these nonexistent Fortran
> compilers supports the -g option.

It's been a while since I've dealt with autoconf and company, but I remember
seeing things like this. This is bizarre to me. It _should_ be a simple
process: make one pass to detect Fortran compilers, then a second pass to
detect support for -g. (This is, of course, assuming that we even want to use
this at all.)

I remember cleaning up a coworker's project that did something similarly
inane. A pass over the data revealed something _wasn't_ present. But his test
scripts continued processing, including rereading the original data, as if it
were present. It wasted 20 minutes each time it was executed, until I rewrote
it to break early.

    
    
      If X depends on Y and Y is absent, then don't do X.
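    

A minimal sketch of that rule, in Scheme purely for illustration (all names
here are hypothetical): probe for the dependency once, and skip the dependent
work entirely when it is absent instead of re-running every follow-up test.

    
    
      ;; Hypothetical sketch of "if X depends on Y and Y is absent, skip X".
      (define (find-fortran-compilers) '())  ; placeholder probe: none found
      (define (supports-g? compiler) #t)     ; placeholder per-compiler test
      (define compilers (find-fortran-compilers))
      (define compilers-with-g
        (if (null? compilers)
            '()                              ; Y absent: skip the -g tests
            (let loop ((cs compilers) (ok '()))
              (cond ((null? cs) (reverse ok))
                    ((supports-g? (car cs))
                     (loop (cdr cs) (cons (car cs) ok)))
                    (else (loop (cdr cs) ok))))))
      (display compilers-with-g)             ; prints ()
      (newline)
    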

------
futuremeats
What are some examples of cathedrals? Based on my reading of the original
article, these would need to meet two requirements...

1. Software has been in service for a long time (let's say more than 10
years, though some will debate the timeframe).

2. Software is recognized as being of enduring, well-conceived design.

The second point is an important one. I have read that there are some
government systems (IRS tax calculation) and industry systems (airline
reservations) which have been in place for far longer than 10 years, but their
longevity is more a function of the risk-aversion of the organizations that
needed them, and less a testament to their "beauty."

------
aeorgnoieang
_rolls-eyes_

This is a terrible, whiny perspective and the author could use a heaping dose
of economic thinking. Sure, there's lots of 'terrible' software and it would
be great if it were better or there were less of it! Except there are real
costs to improving software, or to preventing 'terrible' software from being
released, or used, or disseminated.

I am so happy the author is not in a position to make decisions about this
stuff for everyone else.

 _All software_ is to some degree a hack or kludge.

------
jmspring
When it comes to ecosystems, the definition of the Cathedral becomes
interesting. Say what you will about business practices in the 80s, 90s, early
2000s...

In a sense, Windows was a cathedral. It bent over backwards while accepting
the new, welcoming many under the umbrella.

Sure, it is a stretch, but the Windows platform (for years) required backward
compatibility. That was the world Microsoft worked in.

Today, the OS still maintains cathedral aspects, but the company is embracing
the bazaar way more than I ever thought I would see in my lifetime.

------
hyperpallium
Successful bazaars have a flat architecture, meaning you can easily add
another component to handle a new case.

It's not always the ideal technical architecture for the problem.

------
cbsmith
I still hate this essay as much as I did when it was first published. It
misunderstands the problems.

Unix was fragmented _long_ before 1991, when autoconf was created. Prior to
autoconf, Perl's Metaconfig did this dance, and a lot of other, lower-quality
systems like imake were used to work around the problem. The forces behind
Unix's fragmentation were actually not the Bazaar (open source software), but
attempts by multiple parties to independently build their own Cathedrals (in
attempts to differentiate/provide value with proprietary systems). Of course,
autoconf, as a GNU system, was, in Cathedral & Bazaar terms, a Cathedral, not
a Bazaar.

Autoconf is ugly/copy-paste code because it is comparatively rarely run, so as
long as it produces a passable result, people use it and focus their efforts
on other, bigger itches.

There's also a long history of open source products which have enjoyed
higher-quality implementations than their proprietary equivalents. The Apache
project itself stood as a higher-quality web server than a lot of other
choices (not as true today, but it was true for quite some time).

One could go on...

~~~
phkamp
I think you misunderstand my essay.

Yes, autoconf was clearly a response to the incompetence of the UNIX industry,
where everybody thought they could become Da Man over everybody else.

But that doesn't mean that it was the right solution then, and it certainly is
not the right solution now.

And autoconf is absolutely nothing like a cathedral: there is no architectural
vision, no economy of style, no consistency and absolutely no symmetry or
simplicity.

Your argument that "autoconf [...] is [...] rarely run" is akin to saying "I
only pee in the shower in the morning": That is of course better than doing it
all the time, but it is still going to stink.

~~~
emmelaich
But really there are only two ways to deal with this: 1. make a better
autoconf, or 2. avoid it as much as possible.

The first has been tried many times and always found wanting.

The second - well, it's FreeBSD's fault for using software 'ports' rather than
prebuilt binaries. (Although I understand that you do that now.)

~~~
cbsmith
It's not entirely true that a replacement for autoconf would just have to be a
better autoconf. You don't need a solution that works at all like autoconf
(doesn't even have to generate code at all). You can avoid MUCH of what
autoconf deals with by writing portable code and following some sane
strategies for constructing include paths and selecting library files. It's
just that when you do that, you have to solve all of these problems yourself,
while the autoconf library has already been battle tested on a wide variety of
platforms... a level of effort that would be hard to replicate.

~~~
emmelaich
Agreed, well said. I didn't mean a literal better autoconf; I meant that the
attempts to replace it have reduced the problem to a lesser, simpler one.

------
sbarski
One of my favourite ACM articles. I think I read it twice the first time I
came across it. Also bought "The Design of Design" thanks to it.

------
kuharich
Previous discussions:
[http://news.ycombinator.com/item?id=4407188](http://news.ycombinator.com/item?id=4407188)
[https://news.ycombinator.com/item?id=8812724](https://news.ycombinator.com/item?id=8812724)

------
haberman
There are way too many dichotomies that are getting muddled in this
discussion:

    
    
      - hacky vs. beautiful
      - professional vs. amateur
      - waterfall-designed vs. evolving
      - centralized control vs. consensus control
      - crufty vs. well-maintained
      - open-source vs. closed-source
    

With so many axes to argue about, we can project whatever we are feeling about
software today onto this "cathedral/bazaar" debate. We may think we are having
one discussion, but we're actually having six discussions (or more) at the
same time.

~~~
qwertyuiop924
Hacky vs. beautiful is different from The Right Thing vs. Worse is Better.

Worse is Better is a very specific philosophy (at least, as it is described in
_Lisp: The Good News, The Bad News, And How to Win Big_ )

The Right Thing philosophy is, in essence, that simplicity of _interface_ is
the most important thing: every interface is an abstraction, and abstractions
shouldn't leak. Have a simple interface, be correct, be consistent, be
complete. If the implementation is complex, so be it.

By contrast, the Worse is Better philosophy prioritizes simplicity of
_implementation_ : be simple, and be as complete, correct, and consistent as
possible; but when the choice is between a simple design that is slightly
wrong, a little inconsistent, and not quite complete, and a complex design
that gets all of those right, take simplicity: 90% now is better than 100%
never.

Over time, Unix has adopted some Right Thing ideas (Plan 9 took this to 11),
and Lisp has adopted some New Jerseyisms (Scheme's dynamic-wind is definitely
one).

~~~
haberman
You are right. I wrote that too quickly before thinking about it. I removed
that part of the post.

~~~
qwertyuiop924
You are far from the first person to misunderstand/misrepresent Worse is
Better. The best part is when people recommend reading the essay, and then
demonstrate that they don't know what it meant: it's not about the battle
between simplicity and complexity, it's about the battle between simplicity of
_interface_ and simplicity of _implementation_.

~~~
haberman
I definitely understood the distinction when I previously read the article. I
just didn't reload enough of it into mental cache to remember.

~~~
qwertyuiop924
I get it.

------
sgt101
Petty point: the author says "growth by 2 orders of magnitude or 10000%". Is
it just me, or is that 4 orders of magnitude?

~~~
mdpopescu
10000% = 10000 / 100 = 100× = 2 orders of magnitude

------
gavinpc
Mods, please add 2012. Most recently discussed here at
[https://news.ycombinator.com/item?id=9403124](https://news.ycombinator.com/item?id=9403124)

------
k__
I like the Bazaar.

It's a bit like "normal" people sticking it to the elite. The rest of the
world getting a piece of the cake too...

~~~
droopyEyelids
In your analogy the people who get stuck trying to fix crazy shit with
makefiles are the elite. Which will probably be you one day. Which means
you're sticking it to yourself.

~~~
pjc50
In Free Software, everyone gets a turn at being the Man.

