Sure, the cathedral structures look elegant and eternal. That's because time favors the stable ones. The other ones fell down long ago. It's the same reason that the orbits of planets are seemingly so well-ordered: everything that wasn't in such an orbit has crashed into something else and disappeared. (Look at the surface of the moon for evidence of that.)
Music and literature work like that. We think the age of Mozart was particularly wonderful, because we have Mozart's music. But there was plenty of other music in that era, most of it bellowed by bad musicians in taverns. That's not so different from today's music scene.
Why shouldn't software work like that? Why isn't Dr. Kamp's lament morally the same as the lament of the guy with the cabin in the woods? I'm talking about the guy who finds himself with more and more neighbors, and grouses that "nobody respects the wilderness any more."
We all respect elegant lasting structures (fork / exec / stdin / stdout is one such structure). A creative field with huge output will generate more elegant lasting structures. We just can't tell which ones will survive from where we sit in the middle of it all.
That's easily disproven-- even musicologists tasked with cataloging Mozart's output erroneously attributed symphonies and other pieces to him that were written by lesser-known composers.
The reason attribution is such a hard problem is that it's non-trivial to separate the stylistic characteristics of late 18th century courtly and sacred music from the characteristics of Mozart's music we wish to claim were inspired by his genius. (There's a great paper, written I believe by Rifkin, on Josquin scholarship having been circular in just this way.)
What your royal "we" finds "particularly wonderful" about the "age of Mozart" is the style, not the particular composer. And there is plenty of well-written, beautiful symphonic, choral, and chamber music written by all kinds of composers of that period.
The modern world does not strive to have a composer in each and every town who can competently write compelling tonal music with the constraints of late 18th century form, texture, and counterpoint. As misguided as it may be, the longing of your royal "we" for the age of Mozart is a valid longing, regardless of what drinking songs bad musicians were playing in biergartens.
The conclusion is that for your remark to be a proof, during the latter half of the 18th century no more than 30 new works of music were made. Importantly, by the structure of the original claim, this number necessarily includes all popular music played in taverns and the like, of which there were certainly at least 3000 across the whole of Europe.
I'm refuting OP's claim that people today are generally hearing something "particularly wonderful" in Mozart's output and applying that level of extraordinary output to music from the "age of Mozart." To do that, all I really need to get started is a single case of a scholar tasked with cataloging all of Mozart's "particularly wonderful" output mis-attributing the work of a "lesser" composer to Mozart. There are in fact many cases of that.

So if scholars can't always tell the difference between "piece written during the time of Mozart" and bona fide "output of Mozart," it calls into question OP's initial assumption that "we"-- i.e., non-expert listeners-- are generally hearing something "particularly wonderful" in Mozart's output. It's much more likely we're hearing something "particularly 18th century"-- in terms of lightness of texture, balance of phrasing, catchy melody-and-accompaniment textures, etc.-- finding that wonderful, and ascribing that wonder to the genius of Mozart because we generally don't seek out the work of other composers from the period unless we're experts.
The point about bad musicians in bars is irrelevant. If we're to be generous, we understand when people pine for the "age of Mozart" they're speaking about traveling back in time to the late 18th-century (probably to Vienna) as a rich person, probably of nobility, and seeking out all the venues where the type of music Mozart became famous for was performed. That would be concert halls, churches, various subscription concert venues, and opera halls. The music of other composers played in those places would have had the same characteristics which "we" today find wonderful in his output. It would have also been performed quite well. If anything it would convince such a time-traveler beautiful music in that style does not require a one-in-a-civilization genius to produce.
Also-- even the narrow assertion that late 18th-century Austrian bar music is "not so different" from music today is almost certainly wrong. Just comparing bars to bars-- in nearly every facet of music I can think of-- melody, rhythm, harmony, tempo, genre, lyrical content, timbre, form, allusions to other music, spatialization, improvisation, price, accessibility-- we have vastly more variety in terms of what you might hear when you walk into a bar. And that variety ends up providing way more quality music today than was available or even conceivable to your average Austrian bargoer in the late 18th century. Just consider it-- if you got transported to the past and came back, you'd say you heard some repetitive, out-of-tune music. If one of them got transported here and back, they'd spend the rest of their lives trying to explain how some guy's cheap Walmart keyboard "can produce all the sounds of our modern orchestra." It's not even close.
You are simply reading something that isn't really there. OP's original claim is clearly that the music production of the time was much larger than what survives (in popular culture) to this day, and that the surviving body of works has been selected for quality.
Your point amounts to little more than insisting the time period shouldn't be called "the age of Mozart", which is fine for what it is: a discussion of nomenclature.
It's possible to go too far with this kind of thinking. Yes, the pyramid still in pristine condition is the best-built one, with all the shoddy pyramids long collapsed. That does encourage a bias. This does not mean any "golden era" thinking is necessarily fallacious, just that we need to watch out for a particular fallacy.
Between about 1967-73 a lot of good rock albums were recorded. A lot of shite too, mostly forgotten about. The filter of time applies equally to the next 7 years, but most people's record collections feature more "golden age albums."
Yep, it's the good old nostalgia filter in action. You see the good stuff that survives the passage of time, but ignore the crap that gets rapidly forgotten.
It presupposes either a linear progression towards better things, or a constant quality.
The reality is that periods have ups and downs: the Middle Ages were not as good in many things as Ancient Greek and Roman times, the period after the Renaissance wasn't as good as the Renaissance itself, the WWI and WWII eras weren't as great as the periods that followed, etc.
This can happen across every industry, art, field, including politics and science.
There are always regressions, ups and downs, stagnation periods, local maxima we waste time on, etc.
It's good to recognise them, as opposed to knee jerking "it's just nostalgia speaking".
Even back then CreateProcess() would have worked just fine.
Pseudo-threading is really the only reason you should ever have to make a direct call to fork().
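For readers who haven't written it directly, the fork/exec/stdin/stdout structure praised elsewhere in this thread can be sketched in a few lines. This is a minimal POSIX-only illustration in Python (my own sketch, not from the essay; error handling is largely omitted):

```python
import os

def run_filter(argv, data):
    """Run `argv` as a child process, feed `data` to its stdin,
    and return whatever it writes to stdout."""
    r_in, w_in = os.pipe()    # parent writes -> child stdin
    r_out, w_out = os.pipe()  # child stdout -> parent reads
    pid = os.fork()
    if pid == 0:
        # Child: wire the pipe ends onto stdin/stdout, then exec.
        os.dup2(r_in, 0)
        os.dup2(w_out, 1)
        for fd in (r_in, w_in, r_out, w_out):
            os.close(fd)
        try:
            os.execvp(argv[0], argv)
        finally:
            os._exit(127)  # only reached if exec fails
    # Parent: close unused ends, send input, collect output.
    os.close(r_in)
    os.close(w_out)
    os.write(w_in, data.encode())
    os.close(w_in)
    chunks = []
    while True:
        chunk = os.read(r_out, 4096)
        if not chunk:
            break
        chunks.append(chunk)
    os.close(r_out)
    os.waitpid(pid, 0)
    return b"".join(chunks).decode()

print(run_filter(["tr", "a-z", "A-Z"], "hello bazaar"))  # -> HELLO BAZAAR
```

The elegance people keep pointing to is that the child never knows it's talking to a pipe rather than a terminal; composition falls out of the file-descriptor model for free.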
Having worked with plenty of software developed by the priesthood in the era of the cathedral, might I observe that it was mostly fractally terrible. For every UNIX there were so many terrible pieces of software, and UNIX itself is pretty fractally horrible. (See "The UNIX-Haters Handbook" for thorough criticism of UNIX as it was when Raymond published his book.)
Bits don't fall apart.
If you can have a perfect copy of something you can easily keep it in existence forever.
If not, your artifact will sooner or later decay. It will prevail only through conscious action, and so be subject to the incremental evolution of thought -- which copies are not.
The problem is often that the economics does not favor the perfect. I'm working on a project for a client now where I know there are lots of ugly warts in the backend because we've been building abstractions in parallel with building the rest of the system and have not always had the time to go back and revise the old code.
I would love to go back and sort that out. But to the client it is an extra expense now, both in money and, more importantly, in time spent now, vs. paying for it later in increased maintenance costs for people to - hopefully - clean it up over time.
But there's also often the other cost of learning and determining if a shared piece of code is appropriate.
E.g. there's about half a dozen Ruby gems for handling ANSI escape codes, but for a recent project I ended up writing my own, because none of them seemed to cover what I needed, and it's not clear they should. A lot of code duplication happens because the cost of trying to avoid it often far outweighs the cost of maintaining code you know is doing exactly what you want it to.
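For what it's worth, the core of that kind of library is often quite small, which is part of why writing your own can beat evaluating half a dozen gems. A minimal sketch of stripping ANSI CSI escape sequences (the regex and function name are my own illustration, not from any of the gems mentioned; a full implementation would also handle OSC and other escape families):

```python
import re

# Matches ESC [ <params> <intermediates> <final byte>, the common CSI form
# used for colors and cursor movement.
CSI_RE = re.compile(r"\x1b\[[0-9;?]*[ -/]*[@-~]")

def strip_ansi(text):
    """Remove CSI escape sequences, leaving the printable text."""
    return CSI_RE.sub("", text)

print(strip_ansi("\x1b[31mred\x1b[0m plain"))  # -> red plain
```

Whether a shared gem "should" cover your case is exactly the judgment call being described: once your needs diverge from the common subset, twenty lines you fully understand are cheaper than someone else's abstraction.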
I do agree with his hate for the autotools family, though.
One of the reasons I suggested that folks take a look at previous threads is that phkamp clarifies his definitions of "cathedral != closed source / proprietary / commercial" and "bazaar != open source".
It's not obvious in the acmqueue essay but phkamp wants "cathedral" to be synonymous with "coherent architectural design". As a result, his example of having a bunch of open-source hackers add duplicate dependencies such as a TIFF library when the final build of Firefox doesn't even read TIFF files is not "cathedral" and therefore not "quality".
The acmqueue article is not written very clearly because he uses metaphors "cathedral / bazaar" which makes people associate "cathedral" with entities like Microsoft which he didn't intend and not associate "cathedral" with pre-1980s commercial UNIX which he did intend. Obviously, he intentionally reused "cathedral/bazaar" because it was a refutation of Eric S Raymond's metaphors but nevertheless, recycling the same analogy sent multiple people down a road of imprecise arguments. Indeed, one of the comments in the 5-year old thread is an ex-Apple employee (jballanc) categorizing iOS development as "bazaar" but phkamp classifies it "cathedral". Talk about torturing metaphors until people talk right past each other!
No need to read all ~300 comments; just Ctrl+F search for "phkamp": https://news.ycombinator.com/item?id=4407188
It may not be the quality he wants, but architecture is not about maximizing quality, but maximizing quality within a budget.
Yes, that was Eric S Raymond's particular examples. However, phkamp extended it with phkamp's particular examples (Bell Labs UNIX (cathedral), Apple iOS (bazaar), Firefox (bazaar), Microsoft (neither bazaar nor cathedral?), etc).
I should emphasize that it doesn't matter whether phkamp's classifications are "correct". (Indeed, the previous thread showed many disagreed with his taxonomy and 2 ex-Apple programmers contradict each other on iOS.)
Whatever idiosyncratic definition phkamp wants to use for "cathedral/bazaar", the point is readers have to be aware of it to understand his acmqueue essay. (As I said previously, the freestanding article does not make his thesis clear, so it led to subsequent discussions that misunderstood what he wrote.)
My observation has been that getting the first 90% of features/quality is pretty cheap, and then, as you start to close in on 100% for whatever your domain is, the cost of improving the software shoots up, with an asymptote somewhere short of 100% features/quality.
What people will pay for software seems to mostly depend on the costs of failure. 90% working software is almost always a better value prospect than 100% working software, unless your software failing means someone dies, or you place a trade that is off by $100 million.
In industries where the costs of failure are low (consumer, most internet, etc) the quality of software that the market will pay for is very low, and so the quality of the software being produced is very low.
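One crude way to make the asymptote concrete (my own toy model, not from the comment): if each marginal point of quality costs proportionally more as you approach perfection, total cost behaves like q/(1-q) and blows up near q = 1.

```python
def cost(q, k=1.0):
    """Illustrative cost of reaching quality fraction q (0 <= q < 1).

    The k * q / (1 - q) shape is a stand-in for 'each remaining bug is
    harder to find than the last'; the constant k is arbitrary.
    """
    assert 0 <= q < 1
    return k * q / (1.0 - q)

for q in (0.5, 0.9, 0.99):
    print(f"{q:.0%} quality costs {cost(q):.1f} units")
```

Under this model, going from 90% to 99% costs ten times as much as everything that came before it, which is roughly the intuition the comment is gesturing at.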
At that point, work (in the Physics sense) is required to lift it back to the outer energy level -- that of both working and being understood.
In many cases, a reasonable option will be to leave it at the degraded level and continue to utilize it (after all, it still "works").
This all seems obvious to me for the first time, as I look at it from this perspective. But it has never dawned on me before, probably because every inch of my soul longs for the beauty of the highest level of the hierarchy -- "works and is understood".
First they sell you a REST API, then it is more of an RPC, so you have to restructure.
Later many endpoints return crap, even if they worked, so you start ignoring the responses.
I think this is mostly because
> most [...] code I have to work with has an objectively worse quality than most [...] code I choose to work with.
So popular open source code will either be relatively clean and/or have other important qualities that lets people overlook how messy it is.
For proprietary code, on the other hand, we often don't get to pick and choose.
And deadlines are still a part of open source software development. Most of the really widely used projects are heavily supported by corporate patronage, and you'd better believe that those companies are creating deadlines for the people they pay to contribute to those projects, based on their own business needs.
For my part, it seems to me that "enterprise vs non-enterprise" is a better predictor of code quality than "closed vs open source".
Deadlines are not the only reason for crappy code, and projects without them are not necessarily shining examples of beautiful engineering. Whether closed or open, it often takes multiple attempts till you get the architecture right.
I don't think the contrast he's drawing is bazaar/open source vs. architected/closed source. It's versus strongly opinionated and/or clearly led open source, e.g. Varnish, qmail, Rails, SQLite, etc.
Windows is even worse. It's amazing it works at all. Linux is generally cleaner in the auth department but has many other abominations.
The only clean software is either software nobody uses or software whose scope is rigorously limited. The latter is very very hard to defend when people are using it. Everyone wants it to do some new thing or be compatible with some other system.
In a large org it is never clear who is responsible for what... ;P
And the result is sometimes even more code duplication, workarounds and bad examples of Conway's law in operation, than in free software.
Reading phkamp's comments in those threads adds more context to the ACM essay.
Too many comments to list here in:
Three comments in https://news.ycombinator.com/item?id=12251323 :
Two comments in https://news.ycombinator.com/item?id=8812724 :
> ...there is no escaping that the entire dot-com era was a disaster for IT/CS in general and for software quality and Unix in particular.
The dotcom era was almost 20 years ago. The Netscape IPO (1995) is now closer to the IBM PC (1981) than it is to 2017. Yes, a lot of terrible engineering happened during the dotcom boom, and most of it was proprietary code running on old-school Unix servers.
If anything, Linux was a pretty solid alternative in those days. Sure, the GUI was a bad joke, and driver support was hit or miss. But if you chose your server hardware well, then Linux would give you years of uptime. Windows fell over several times a week. (Microsoft didn't get truly serious about quality and security until sometime around XP SP2, when they finally got sick of malware and buggy drivers.)
So in 2017, what does a "bazaar"-style project look like?
One modern example of a "bazaar" is Rust. (If you dislike Rust, you can insert plenty of other examples here.) All major design decisions are made through RFCs, after community feedback. There's no benevolent dictator, but rather a core team. Several key libraries like "serde" are actually maintained by separate teams. The compiler is released every 6 weeks. And yet, I have absolutely zero complaints about Rust's QA. There are plenty of test suites, and no code gets merged to master until those tests pass. To prevent regressions, somebody downloads all public Rust packages from crates.io, compiles them, runs their test suites, and checks for regressions. And so at work, we can upgrade our Rust compilers without fear.
Now, to be fair, Rust is not for everybody. Not everybody wants "a better C++ minus the foot guns," or enjoys references and generic types. But if you do want those things, Rust is a good example of a community-driven project which delivers reliable, well-designed software.
And I could point out 20 more modern open source projects with solid design and solid QA. We know how to do this now.
And those inexperienced dotcom-era graduates now have houses in the suburbs. They're planning how to pay for their kids' college educations, and more than a few of them have some grey hair.
Then you should point them out, if they are of a standard you think people should learn from.
Rust also doesn't require anyone proposing RFCs to be at all familiar with the compiler internals; all that RFCs require is a loose approximation of how a given feature will change the documentation and the theoretical language reference.
And while the Rust compiler was certainly a mess before 1.0 (due to being written in an ever-changing version of itself), nowadays the compiler is much cleaner. It would also be mistaken to suggest that only a few people ever touched the compiler, as the Rust repo has, AFAICT, one of the highest contributor counts on all of Github (apparently higher than Ruby, Node, Swift, Go, Clang... so far the only ones that I've found that exceed it are rails/rails and torvalds/linux (listed, cheekily, as having "∞ contributors")).
RFCs would still get rejected as "postponed" if it wasn't possible to implement them (i.e., dependent on MIR). Implementability was the deciding factor from what I could see, though compiler improvements would usually render them un-postponed/rejected later.
Overall I think the lack of an actual specification hamstrung it real bad, since everything past one step ahead only existed in people's heads. And, at least for me, it can't replace C++ (in every situation), much as I'd prefer not to have to use that abomination of a language.
Indeed, but postponed emphatically does not mean "rejected". :) It means "we think we still might want this, but we don't yet have enough information to ensure that it will work nicely with the rest of the language". Sometimes this is because of prerequisite implementation work, indeed, but other times it's simply because designing a feature takes time and energy (gated on the language/library teams, in which sense Rust isn't really a pure bazaar at all) and not all features are equally prioritized.
> Pre-MIR, I'm fairly sure the people who would touch the type/lifetime checker could be counted on one hand, and it was borderline obfuscated due to all the pointless type theory thrown around.
Firstly, one of the places where type theory definitely isn't pointless is when implementing a type checker (of which borrowck counts as well). :P Secondly, the difficulty of contributing to type checkers in general has little to do with implementation and lots to do with the aforementioned type theory. Thirdly, even pre-MIR, the librustc_typeck directory had 105 contributors, and the librustc_borrowck dir had 60 (and that's only counting back to Nov 2014).
This is getting off topic, but that hasn't really been my experience as a user of Rust. Yes, the compiler is full of weird ancient code, but that doesn't seem to constrain the design of RFCs. As far as I can tell:
1. The ancient code in the compiler means that some highly desirable features have taken a couple of years to implement. Granted, some of this is being fixed by the MIR work. But things like stable async/await are probably going to take a while.
2. Some desirable features are constrained by backwards compatibility with existing Rust source. This has affected the module system work, for example.
3. Rust has moved into a particular ecological niche: Precise control over memory allocation and copying, generic types that are monomorphized at compile-time, etc. The best-known language in this space is C++. And so Rust needs to wrestle with many of the same issues that C++ has, such as whether to allow partial specialization.
It's certainly not a perfect process, and not everybody will be interested in the particular tradeoffs Rust chose. But it's still one of the best "bazaar"-style projects I've ever seen.
@downvoter: care to explain?
Even Ebay used Solaris for their back end for a time. See https://www.linuxtoday.com/high_performance/2001011200106psh...
SunOS - which well-known dotcom-era startups ran their web servers on SunOS (besides Sun)? There was also SGI with Irix, etc., but beyond some graphical niche markets, how is this relevant to dotcom?
Pretty much everybody. Remember those absurd Sun Microsystems ads that claimed "We're the dot in dotcom"? They had $15bn in revenue in 2000; even mighty Microsoft only had $22bn that year.
Linux was so far behind Solaris and the other commercial UNIXes in features and security in 1998-2000 that it wasn't even a blip on the radar.
I just thought I'd clarify this for those among us fortunate enough to not have been acquainted with z/OS: the author does not mean that z/OS is a Unix. Rather, he is referring to the fact that there is a unix "mode" included in z/OS, Unix System Services, that can be started from inside the z/OS command line, TSO, to drop you to a Unix shell.
It's a full Unix OS, too, complete with, well, for example, ed. So you can experience the supernatural horror of getting stuck inside ed, inside the z/OS command line, on a mainframe. Which is a bit like being in hell and having a nightmare about being in hell and having nightmares.
There's a hierarchy of systems:
* non-working, random, chaotic
* working, but not understood/understandable
* working, and clearly understood by at least 1 human brain
Clearly, each step in the hierarchy is less primitive than its predecessors. Also, since human brains are the greatest intelligence we know of, the 3rd bullet point seems like the pinnacle.
not really. selected is the wrong word - survived is more appropriate. evolution isn't a set of preordained occurrences.
it's a random mutation which happened to survive - this can be due to many reasons (only one of which is "it worked")
It’s literally in the name “natural selection”. Survival is selection.
I can say that, for the last seven years, I ran our little company as a Cathedral and not as a Bazaar.
We didn't import many packages in PHP. We wrote our own code and made sure it integrated well with everything else.
We didn't run package managers and didn't freeze versions. We wrote all our own stuff in the app layer.
I can tell you, it's been... interesting. We know at the end of the day we are responsible for fixing stuff if it breaks. Not waiting for someone else to approve our pull request.
Since Raymond's book, something major happened: DISTRIBUTED VERSION CONTROL SYSTEMS such as git became mainstream.
So now you can build cathedrals IN THE BAZAAR!
Anyone who wants to maintain package X can fork it and maintain it. Maintainers gossip their improvements in a sort of blockchain consensus protocol. If a maintainer falls behind, the package continues to be maintained by others.
What I would like to see is MORE emphasis on content-addressable streams of data with history, instead of on the domains / urls from which they are fetched.
In other words, more IPFS and less HTTP.
That is the best of both worlds, and I describe it here: https://github.com/Qbix/architecture/wiki/Internet-2.0
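The content-addressing idea is simple to illustrate: name data by a hash of its bytes, so any copy can verify itself regardless of which server or URL it was fetched from. A toy sketch (my own, nothing like the actual IPFS wire format or chunking scheme):

```python
import hashlib

# Toy content-addressed store: keys are SHA-256 digests of the content,
# so the "address" depends only on the bytes, not on where they live.
store = {}

def put(data: bytes) -> str:
    addr = hashlib.sha256(data).hexdigest()
    store[addr] = data
    return addr

def get(addr: str) -> bytes:
    data = store[addr]
    # Any copy, fetched from anywhere, can be verified against its own name.
    assert hashlib.sha256(data).hexdigest() == addr
    return data

addr = put(b"cathedrals in the bazaar")
print(addr[:12], get(addr))
```

Because the name is the hash, mirroring and caching become trust-free: a peer can hand you the bytes and you check them yourself, which is the property HTTP's location-based URLs can't give you.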
The industry has grown much larger and more quickly than the talent base. If the NBA were to suddenly add 200 teams, the average quality of each team would go down.
How many “pretty good” programmers do you know now sitting in Director level or CTO positions? How many programmers with good aptitude find themselves being promoted prematurely as an organization struggles to keep that talent on staff?
The quality of the average software team has been greatly diluted, the industry shows too little regard for experience, and hackers end up running the show on so many software teams. Naturally, the software looks like it was coded by hackers as a result.
In the open source world, bug fixes often come much more quickly. If fixes don't appear, we at least have the source and can try to fix it -- though sometimes that's a high bar.
If there is an obvious need for a tool or library, someone implements it and throws it up on GitHub. Sometimes the implementation is great, sometimes it is little more than a starting point for our own solutions. But at least we have the source to act as a starting point.
Quality is important, and open source often encourages good quality, but as the writer points out it does not always do so. Sometimes it acts more like a neural net AI algorithm that keeps failing and failing until one day it succeeds.
What open source does encourage is rapid iteration as developers all over the world look for solutions to known problems. It's a messy solution, but it works surprisingly well. It also helps spread knowledge of what the nuts and bolts of good software look like.
If you're quoting numbers in percentage terms, and that term is more than 100%, there's probably a better way to express it, in this case, "100 times" (or 100X if you prefer).
OK, BSD ports have complex dependencies. That is optional, OpenBSD ports have relatively few dependencies. Is this a good thing or does it lead to more duplication? Packaging is a hard problem. What does the development model have to do with the result? The Bazaar idea is about developing the software in the first place, not what happens to it afterwards.
In the other cases, however, it is a project to build a new car with a new kind of square wheels, because the committee is a band of people: some members who have never seen a car, some who think it should run on the train tracks of Japan, some who just like coloring spreadsheets, some who wandered in directly from a bar after happy hour. Oh, and don't forget that guy who did something in the security field for the US Army, who keeps insisting that the wheels should be designed to withstand an IED.
Most bazaar projects are the second, not the first.
Caveat: I haven't read Raymond's book myself.
Because there are 1200 libraries that basically do the same thing, just with slight variations, it means that if something is good in 500 of them, a few of them will survive even a catastrophe like a bubble burst.
And really funny is that it is also more scalable. Without architecture and oversight many more of the integrations between these little, inexperienced silos fail. But because there are so many more who could start, again a few will make it because they were lucky enough to do the right experiment at the right time.
Surviving in such a jungle is of course harder, which is why we engineers, looking for a rather calm life, hate it so much. We want to focus on the code, not on survival. That's why we twist the facts and act like the Bazaar is less successful, when everything actually shows that it really is more successful.
If you are like me, and really want to become successful, even if it means leaving your assumptions behind and accepting that you might've been wrong, then look out for people who survive quite well in this jungle, and learn from them. It's learning by doing though, since what you write into a book today might already be irrelevant tomorrow. And there are books with the more abstract concepts out there, but without feeling and doing it, you can't even understand them (but will think you do anyways).
Real life feels more like learning Vim than learning Microsoft Word, but in a similar fashion it also offers rewards that are unexpected and come in sizes one wouldn't even dream about.
libmpv has a very well-documented C interface, although I haven't had any occasion to use it. I haven't dug through the project source either, so I'm uncertain of its quality level.
Fuchsia looks very promising. Not everything is documented yet, but many of the parts which have docs are great. It's a bit of a rabbit hole though; when I first encountered it I spent a long time reading through their docs and poking around.
If I'm just starting on a program, and I realize that I zigged where I should have zagged, I just go through a handful of small files and make them zag. I believe Knuth did the same thing and rewrote TeX after his first attempt.
Once the system gets larger, updating every boundary and interface to something sane falls somewhere between costly and insane. Don't get me wrong, I'm all for a re-do, it's just that nobody in the private sector is going to foot that bill until we're in crisis mode, and the wizards who remember what the mistakes were aren't getting any younger...
It has an incredible number of contributors, sometimes with differing agendas, and with local knowledge.
And it shows. The API design of Linux is atrocious. It works well and supports a fantastic number of use cases, but it has no global coherence.
cgroups is a good example: https://lwn.net/Articles/574317/
Because they had no precedent to work with for cgroups, the initial design was botched. Linux still relies on the "bones" of Unix, and whenever they have to do something new, they struggle with the API design.
>This is probably also why libtool's configure probes no fewer than 26 different names for the Fortran compiler my system does not have, and then spends another 26 tests to find out if each of these nonexistent Fortran compilers supports the -g option.
>That is the sorry reality of the bazaar Raymond praised in his book: a pile of old festering hacks, endlessly copied and pasted by a clueless generation of IT "professionals" who wouldn't recognize sound IT architecture if you hit them over the head with it. It is hard to believe today, but under this embarrassing mess lies the ruins of the beautiful cathedral of Unix, deservedly famous for its simplicity of design, its economy of features, and its elegance of execution. (Sic transit gloria mundi, etc.)
There is some criticism that states that Autoconf uses dated technologies, has a lot of legacy restrictions, and complicates simple scenarios unnecessarily for the author of configure.ac scripts. In particular, often cited weak points of Autoconf are:
* General complexity of the architecture used; most projects require multiple repetitions.
* The generated 'configure' is written in Bourne shell, and thus Makefile generation is slow.
* Some people think that 'configure' scripts generated by autoconf provide only a manually driven command-line interface without any standardization. While it is true that some developers do not respect common conventions, such conventions do exist and are widely used.
* M4 is unusual and unknown to many developers. Developers need to learn it to extend autoconf with non-standard checks.
* Weak backward and forward compatibility requires a wrapper script.
* Autoconf-generated scripts are usually large and rather complex. Although they produce extensive logging, debugging them can still be difficult.
Due to these limitations, several projects that used the GNU Build System switched to different build systems, such as CMake and SCons.
The biggest irony here is that in the Multics days the Unix design was considered an ugly heretical hack. It's all in the eye of the beholder.
It's particularly hard for programmers and engineers to know when to buck convention, because the combination of breadth and depth of domain knowledge makes it really hard to ever attain "expert" level in both.
To save you some time, the answer is 14 times.
And each of those 14 occurrences is exclusively as a prefix to 'BSD'.
Complaints about how Eric failed to understand the key element of free software, because he eschewed the word "free," and about how subsequent reviewers missed this, should be easy to algorithmically flag.
Why is this generation of IT "professionals" clueless? Why can't they recognize a sound architecture? How to get a clue?
He also humorously quotes Brooks, who has always been right, is often quoted, and is basically never followed outside of maybe avionics and nuclear control rooms, because of "ship it now, fix it later, maybe." That adds to the irony: the article claims the bazaar ruined software, as if dogfooding and scratching one's own itch somehow miraculously produce worse software quality than the "ship it now, fix it later" mentality, which surely has pragmatically resulted in low-quality trash in the ecosystem.
You have to keep in mind that it's from more than half a decade ago, which is a generation (or two) in this industry. That makes the "lost generation" talk even weirder, and turns it into an interesting alternate history from the past -- a very enjoyable sci-fi read. Being an alternate-reality prediction that didn't pan out doesn't entirely ruin it. The Edwardian era not having turned out like "The War of the Worlds" postulated doesn't mean it's a bad story.
Alternative histories are cool. I like it.
Yeah, this is hyperbole for dramatic effect. But a bit much no? It suggests the author has been living in one hell of a bubble.
I honestly have no idea what is meant by that. What background info do I need for that to make sense?
I suspect he's referring to the change that happened in the late 90s in the level of quality required to get into an OS distribution. Linux distributions had a way of bundling every hobbyist project on the planet, with practically no editorial oversight.