I find myself growing tired of our trend of redefining words to make our favorite clichés apply to our favorite companies.
Some examples:
"The customer is always right." -> "You're the product, not the customer."
"Software is made for users." -> "The publisher is the user, not you."
Sure, these things sound clever at first, but they only serve to increase our acceptance of these encroachments upon the rights of consumers and responsibilities of corporations by altering our vocabulary to fit these practices. Let's just stick with the traditional definitions (e.g. user: "the one sitting at the computer") and stop twisting words to accommodate things like DRM and privacy-invasive tracking.
This isn't a redefinition of words. It's a clarification of the true situation. Again: if you're not paying for the product, and somebody else is, then you're the product, and your use of a system or viewing content, etc., is an intentional objective of the payer. You'd be well advised to be aware of this, because you're being manipulated.
And the fact that someone else is paying for the system doesn't make it right. Just because I'm not paying for a DRM'd product (and very often you do pay) doesn't make it "right". In this case there are usually two goods in question: the information good, for which you are the customer, and the DRM "service" that covers it, for which the publisher is the customer.
There are also goods for which there is no intrinsic monetary market, or for which the market is at best diffuse. Language is one such good (there are very few people whose paycheck is based on maintaining, debugging, and extending the English language, for example). Free Software is another, though there are both paid and unpaid contributors. And you might well ask what the objectives of those who are actively contributing are (propagandists and marketing types influence English, device manufacturers and standards promoters write significant amounts of Free Software).
I see "You're the product, not the customer." and "The publisher is the user, not you." as labels of warnings.
For me that isn't twisting words to accommodate things like DRM and privacy-invasive tracking. If you tried to mask "You're the product, not the customer." as "The customer is always right.", or "The publisher is the user, not you." as "Software is made for users.", you'd be twisting words.
If I'm the product, or the publisher is the user, I'm very careful with what I share or do on the service. That's one of the many reasons why it wouldn't affect me much at all if Google disappeared tomorrow.
Personally I like the trend, though like others I don't call them "redefinitions"; it's more of a context switch, applying the phrase in a way its author didn't foresee. I like it because it takes seemingly clever, deep, or insightful sentiments like "Programs exist for their users" and reveals their content-free, tautological nature. The more tautologies we find, the more we can focus on eliminating them from the conversation and talking about things that matter instead. Of course, sometimes the context switch doesn't reveal tautologies but actually harmful sentiments. You don't want a sado-masochist believing in the Golden Rule as a moral guideline, for instance.
I have to disagree with you there, clearly the user of DRM is the copyright holder. They are using the tool to perform their activity, that of "safely" leasing their content to consumers.
In the same way, the user of DirectX is the game publisher, not the consumer who buys the game.
I am my DVD player's user. The copyright holder does not come to my house to operate my DVD player. It will not allow me to play discs not encoded for my region. This program is not written for my preference or use, but in opposition to me. It also does not allow me to bypass certain segments of videos. Again - it opposes me.
If it were written for me - the user - it would allow me to bypass all content and jump directly to the first title of every disc I inserted. The producers of this software have explicitly made it hostile to the user. I'm not saying this is outside their rights - I'm just saying the software opposes me.
But this is a rat hole. My point was that this is not what Linus was saying. He was merely describing the purpose of one software component. A kernel is a hardware abstraction - a platform for building applications without concern for the details of components - including the version of the kernel itself. He was arguing against breaking that contract by introducing a change which fit the whims of the contributor but would break an unknown number of binaries compiled against that contract.
I don't understand why Mark Mail doesn't get more love. I wish they had better SEO or something; it's so much easier to read a long thread on their site than on most mailing list archives.
I agree, fairness over politeness. I would say honesty over politeness as well. That isn't to say you shouldn't be polite; just realize that sometimes your decisions are going to hurt people no matter how nicely you put it.
Granted, Jobs was not always a nice person behind closed doors. However, at least during his Apple 2.0 days, he did NOT trash the people working for him in public the way Linus habitually does.
Another difference is that a lot of people working for Jobs (again, at least during the Apple 2.0 days) made f* you money for their efforts. There are engineers making a living working on Linux, but not on that scale, I think.
He also didn't praise people working for him in public. Or mention people working for him in public. At all. Are you sure you aren't mistaking silence for civility?
Are any of these people working for Linus? Is he paying them?
Are they not free to fork off their own kernel tree if they don't like the way he maintains his?
Jobs was running a company, and paying people directly for results..... (all other judgement aside)
...which explains why Linux has taken over the desktop market?
Seriously, Linux has had MAJOR advantages, people HATE M$ and the Mac has shown ZERO interest in taking over anything other than the elite market segment.
I tried using Linux for a while, and it is a major pain in the ass. I've been using a Mac for everything since 2009, and it is so much easier. I really miss some Windows apps, so I'm going to have to get another machine just to run those apps, and of course I will install Windows. But there are no killer apps for Linux on the desktop.
Running a server farm? Of course I'll use Linux. No point in using anything else. But on the desktop, what's the point?
If you use Linux, it's like tithing. It's free, but you have to give up 10% of your life just to get by. If you're a sysadmin, that IS your life, and you can kernel hack all day and night. This is why Linux controls the server market.
But Linus has no conception of what the average user wants and needs in an OS. For example: if I'm using Linux, I'd like to be able to seamlessly run lots of Windows software. Linux should have an executive suite for Wine.
The charitable interpretation of his antics is "Lilliputian victim". He sounds like a 13 year old.
On top of that, the vitriol is a waste of breath. The Mac broke binary compatibility twice - Motorola 68k to PPC and PPC to Intel. There were emulators and fat-binary options available for both transitions, and Mac users and developers just rolled over.
Linus does some things very well, but I would not describe him as a model leader.
Your comment is a bizarre and rambling non-sequitur. What on earth gave you the idea that winning the desktop battle is Linus' measure of success or leadership? Why does Apple breaking binary compatibility imply that there is no cost in doing so? How can you say Apple is only targeting elite segments when you look at products like the iPad that are selling in incredible numbers and causing low-end market leaders like HP to drop out of the market?
It seems like you have some bone to pick with Linus, because there isn't any real criticism here. Binary compatibility is a very nice feature for Linux users, you haven't written a word about why that is not so.
Wouldn't it be possible to still run them with some LD_xxx magic, where the C++ shared libs are put in some folder and loaded from there? (It's really a userspace problem.)
Yes - I've done that to produce more portable Linux binary installers where static binaries would have caused other problems - I tested it for backwards compatibility, but hopefully it will also improve forward compatibility.
One problem that you encounter if you do that is that glibc has an option to disable support for older kernels in exchange for better performance, so you lose backwards compatibility unless you are very careful about how you compile glibc (it fails with an error that the kernel is too old).
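For reference, a rough sketch of the glibc build step I mean is below; the version numbers and paths are only examples, not a recipe. The relevant knob is --enable-kernel, which sets the oldest kernel the resulting glibc will accept (anything older fails with that "kernel too old" error):

    # Build a compatibility glibc into its own prefix; versions/paths are examples.
    tar xf glibc-2.15.tar.xz && mkdir glibc-build && cd glibc-build
    ../glibc-2.15/configure --prefix=/opt/compat-glibc \
        --enable-kernel=2.6.16   # the *oldest* kernel this glibc will still run on
    make && make install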
In my experience, binary compatibility on Linux is a train wreck. Ever tried to get a binary compiled 5 years ago to run on a fresh install of Linux?
Good luck.
If you still have the specific version of every shared library that it loaded you might be able to get it to work.
This isn't the fault of the kernel team though.
It is probably to be expected on platforms in which distributing source code is the default option, but it makes the platform very hostile to closed source applications, particularly games.
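If you do want to resurrect an old binary, a quick first step is to ask the dynamic loader what it actually needs (binary name illustrative):

    ldd ./oldapp                      # lists every shared library the binary was linked against
    ldd ./oldapp | grep 'not found'   # these are the ones you would have to supply yourself

Anything reported as "not found" has to be placed somewhere the loader can see it (or installed from a compat package) before the binary will start.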
You're fumbling over the term "Linux". What you're describing is platform library incompatibility. What Linus is talking about is the kernel ABI. Not the same thing. The latter is a subset of the former, obviously, but Linus can't fix the fact that library authors don't care as much about the problem.
The kernel is doing its job. Userspace, not so much. Though it's not nearly as bad as you think. In general, any desktop application compiled in the last 5 years will run unmodified on any modern distro. Really, it will, and I'd challenge you to find a counterexample.
But I suspect you're talking about the dependency issue. Installing something with a bunch of dependencies (because modern software has a dependency graph that looks a lot like seaweed) requires finding and installing all that stuff on your modern distro. And package names have changed, and some have been dropped from the core distro, etc... And yes, this is a mess.
But seriously: if you have a binary that works alone on, say, Ubuntu Dapper, it almost certainly will run on Fedora 16 or RHEL 6.2.
Even if you were to statically compile a software build from 5 years ago, it may require newer kernel features such as inotify (instead of dnotify). At some point of time the application authors (correctly) made a decision to throw out the old and go with the new. This is not a kernel issue (because old systems such as dnotify are still supported). But from a user perspective, it looks like the kernel is to blame, even if this assumption is incorrect.
Obviously it's true that dnotify is the old junk and inotify the new hotness. But it hasn't been abandoned, by either the kernel or the distros. Again, they take stuff like this very seriously. Old junk runs. Really, it does.
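If you want to check what a given binary actually relies on, tracing its syscalls is a blunt but quick way to find out (binary name illustrative):

    strace -f -o trace.log ./someapp   # record the syscalls the app makes
    grep inotify trace.log             # inotify_init/inotify_add_watch show up here if it uses them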
It is the result of having no unified platform and no coordinated library releases, no long-term plans, nothing - just chaos. Everybody releases whenever they are in the mood for it. And since nothing is complete when released, devs further down the chain always go for the latest and greatest to get additional functionality. And to upgrade app1, you have to upgrade lib1, which triggers updates of app2, app3, app4, lib3, etc. Sometimes you can't update a simple app without updating the whole desktop. The Linux dependency net appears nightmarish to everyone coming from Windows, where you have a reliable, stable base system which doesn't change for a decade and every app targets the same base system.
Linux, the kernel, only runs that well because it has a dictator. But Linus' dictatorship ends at the kernel's borders; he has little influence outside it. The Linux desktop also needs a dictator to massively slow down the rate of uncoordinated changes and force-stabilize the ecosystem. I hoped that Mark Shuttleworth could be that man, but then he introduced Unity... Even so, Ubuntu is the Linux desktop's only chance of getting a single, defined set of libs attractive and influential enough that app devs treat it as their primary target and can safely build against the library versions in Ubuntu, instead of constantly chasing the latest and greatest versions from upstream.
You are using this term incorrectly: the cause of DLL Hell is when one DLL (often a newer one) should be used, but another is used instead (even if it is older). The causes of this were varied.
On 16-bit Windows there was a single address space and only a single version of a DLL could be loaded by all processes: the first one to load would "win", and the others would get screwed with the old copy. This was due to 16-bit Windows not having memory protection: it was more of a GUI over DOS, and thereby had cooperative multi-tasking.
Even with that fixed, many developers would require a slightly newer version of a library, and rather than ask the user to upgrade their system (an irritating consequence of not having packages or dependencies; APT FTW ;P) would just include the DLL in their installer and unconditionally overwrite any existing copy.
After the dynamic loader started supporting "local" versions of libraries (installed to the same folder as the application), a similar problem happened with COM objects, which are centrally registered: someone would install their own version of a shared GUI component and register it under the shared name, overwriting possibly newer copies that were already installed.
Both of these problems were actually solved, but way too many developers simply gave up on Microsoft and Windows beforehand, and then refused to spend the time to learn about the improvements. In essence, Windows now has reasonable-ish package management, with dependencies: Windows Installer.
In addition to dependencies and versioning of packages (which can then be correctly tracked by Windows, much like APT on a Debian/Ubuntu box), Windows Installer supports the notion of "unified installer program" with "merge modules": in essence, you can include someone else's package inside of your package; that way the dependencies can correctly be checked, and old versions won't get overwritten.
There are still a few cases that are quite difficult to manage (involving libraries whose ABI changes over time but which still receive updates), and Microsoft's solution to those is WinSxS. Honestly, while I'm much less familiar with it than with the Unix equivalents, it only seems a constant multiple more crazy than .la files, which I believe solve a similar problem.
These technologies and improvements were all introduced at or before Windows XP, an operating system that was released just over a decade ago. Of course, these are all solutions that developers sometimes ignore, but if you download software for Linux that comes with a .sh installer that scribbles into /lib, you are in for similar "hell".
I've never had to upgrade Windows (including all installed apps) to be able to install some other random app.
On Linux, having to upgrade the distro (including getting a new desktop environment force-installed) to get a new version of any random app is established practice. Example: http://esr.ibiblio.org/?p=3822
If you claim that you "never had to upgrade Windows to be able to install some other random app", you probably haven't used Windows much. There are games that need a specific version of DirectX or newer. With the latest Visual Studio (11) you can't even produce, from C or C++ sources, a binary that runs on any Windows XP. Any C/C++ application built with Visual Studio 2010 won't run on Windows 2000 or on XP prior to SP3. We developers try to build applications that run on as many targets as possible, but even MSFT doesn't support us enough for that, seeing the older versions as competition to their newest "shiny thing" - which is not funny considering the millions and millions of users still running Windows XP.
See the various opinions on MSFT intentionally removing the binary compatibility which already existed in their libraries here:
Do you happen to know how many users are running Windows < XP/SP3? For our install base (games company) it's less than 1%. Is there a compelling reason not to upgrade to SP3 if you're on XP?
Dependencies are bundled on Windows. It means installing App B doesn't affect your App A. It also means that security bugs must be fixed at the app level, not at the library or OS level.
Then you're lucky, because plenty of applications required you to install Service Pack 2 on XP before they would install; .NET applications often required newer versions, same with games and DirectX, etc.
Not entirely true. While Microsoft won't upgrade your Windows for free (creating the demand for support for ancient versions of the OS - things that will run on XP) they will make software dependent on service packs and fixes. It's not a new version with new functionality, but it's an upgrade nonetheless.
Not sure why this was downvoted so much; there is an element of truth to it.
You are dealing with different distributions that provide different versions of core libraries, different sound architectures and package managers, and that put things like executables and config files into different parts of the filesystem.
I thought that the Linux Standard Base would be the way to solve this.
Probably because, though there is an element of truth in it, it just sounds plain wrong.
After two sentences I thought "he probably works for Microsoft or another big corporation" - not because it's flamebait, but because of the attitude of regarding this as pure chaos (and the absence of a big plan as bad). I even looked up the profile.
There is a release plan of sorts - not for the whole ecosystem, but that's what stable distros are for. So the remark about Ubuntu is right. But it's wrong to mix library stability with perceived frontend issues like Unity; Ubuntu still fulfils that role for some apps. And besides, it isn't necessarily wrong to write new programs against new libraries. They have new features and new bugfixes.
I wouldn't want the ecosystem to stagnate right now. Or ever.
> not for the whole eco-system, but that's what stable distros are for
The problem is that no app dev targets stable distros; they always go for the latest and greatest from upstream, so distros are constantly forced to update libs and change the base system.
So with a stable distro, you can't get a new version of an app, because your distro's libs are too old, and you need to upgrade the whole distro just to get that new app you want.
The Linux ecosystem needs one distro (say Ubuntu) to become so influential that app devs start to primarily target it instead of upstream. Only then will the library space stop being a moving target for end users, and only then will it be possible to upgrade app1 without triggering an automatic update of app2 when both happen to depend on the same lib. Only then will these useless practices of "packaging" and "backporting" finally stop, and devs will simply make packages themselves, like they do on Windows or OS X. Only then will users be able to install a distro once and then install new apps for 5-10 years without having to upgrade the whole distro every 6 months.
> I wouldn't want the ecosystem to stagnate right now. Or ever.
But with an ecosystem as unstable as the current one you won't get more than 1% of the market right now. Or ever.
Normal users, and especially businesses, simply don't want to constantly update their systems. Force them to do that, and they simply walk away.
> The Linux ecosystem needs one distro (say Ubuntu) to become so influential that app devs start to primarily target it instead of upstream. Only then will the library space stop being a moving target for end users, and only then will it be possible to upgrade app1 without triggering an automatic update of app2 when both happen to depend on the same lib. Only then will these useless practices of "packaging" and "backporting" finally stop, and devs will simply make packages themselves, like they do on Windows or OS X. Only then will users be able to install a distro once and then install new apps for 5-10 years without having to upgrade the whole distro every 6 months.
What happens in Windows is not what you're describing; developers are simply forced to distribute their own copies of the libraries (as DLLs or statically compiled) since there is no package manager. What then happens is that there are dozens of copies of the same libraries, most of them lacking bugfixes and even security patches.
The dependency system used by Linux distros may have its problems, but it surely beats ad-hoc dependency management, even if it requires backporting.
> Normal users, and especially businesses, simply don't want to constantly update their systems. Force them to do that, and they simply walk away.
Right. If you use Windows, have you tried counting the number of update managers running in the background, the number of applications that ask you on launch to "verify updates", the number of times Windows Update alerts you, etc?
Windows machines are constantly updating. Unlike Ubuntu, they just do it incrementally instead of once every six months (except for security patches).
But that's better fixed by moving to a rolling release scheme.
I would rather ship an extra copy of all of the libraries than tell the user that he has to update libc6 (and therefore almost all of the other software he uses) to run my software.
Nothing in Linux prevents you from doing so. Just ship your application with them and use a one-line shell script to run it with the appropriate LD_LIBRARY_PATH.
It's just not commonly done (with exceptions like http://sta.li/), and as a user, I'm thankful for that.
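Something like the following launcher (names illustrative) is usually all it takes; the system copies are still used for anything you don't bundle:

    #!/bin/sh
    # Launch the bundled binary against the libraries shipped next to it.
    here="$(dirname "$(readlink -f "$0")")"
    export LD_LIBRARY_PATH="$here/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
    exec "$here/bin/myapp" "$@"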
The bigger development teams don't focus on a single distro, as their development base is probably diverse enough to demand a certain degree of platform compatibility.
However, many of the smaller development teams will focus almost exclusively on Ubuntu, as it is the distro they will most likely have.
I know, from my own limited experience, that I have only ever attempted to make my systems work on Ubuntu and have just allowed others to push their changes into the main development if there is a specific platform that they prefer to use.
I wouldn't say that's true. LedgerSMB 1.3, when it shipped last year, supported PostgreSQL 8.3 and higher, Perl 5.8, etc. 1.4 will probably require Perl 5.10 and PostgreSQL 8.4. We specifically target older versions to make adoption easy.
In fact, usually when we run into problems, it's a new version, not an old one.
> The Linux ecosystem needs one distro (say Ubuntu) to become so influential, that app devs start to primarily target it instead of the upstream.
This used to be RedHat: software for Linux would often come as either 1) source code (if open), 2) a crazy .sh install script, or 3) an RPM package for RedHat 5.
There is binary compatibility for the kernel. Typically it's just the user-space dynamic libraries (in Windows speak: the DLLs not belonging to the kernel) that you're missing if you have a 5-year-old binary and it won't run. Linus doesn't control user-space libs, just the kernel.
See how jwz shows you how to run a binary of Netscape from 1995 (17 years old) on the latest Linuxen:
Your major problem if you want to make a closed-source application is that the user-space dynamic libraries are mostly (L)GPL licensed, so you have to be careful about (L)GPL libraries to keep a closed-source application legally in the clear. The LGPL allows closed-source linking only if you link dynamically to the LGPL library (meaning your application must depend on the externally present library, which can be replaced at any time). Which is really fair, IMHO.
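In practice the distinction comes down to how you invoke the linker; the sketch below uses libgmp as a stand-in for an LGPL library, and the exact obligations come from the licence text, not from these flags:

    # Dynamic linking (the default): the LGPL library remains an external,
    # replaceable .so alongside the closed-source binary.
    gcc app.c -o app -lgmp

    # Static linking copies the library's code into the binary, which brings
    # extra LGPL obligations (e.g. providing relinkable object files).
    gcc app.c -o app -Wl,-Bstatic -lgmp -Wl,-Bdynamic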
If we used static compilation for everything, this wouldn't be a problem. Our problem would then be higher memory consumption, but I can fix that for $50--at least until the next Ubuntu release comes out.
Speaking of that, I wonder if there's some point at which dynamic libraries don't really help you that much. These days, even shared libraries don't seem to stop a GNOME desktop running a browser from sucking down a gig of memory; would it really kill them to have a couple more megs of static library in each program? With copy-on-write, I think things like Chrome which spawn many children would come out pretty well.
One advantage of shared libs/DLLs is that a security fix gets applied to all applications that depend on the library. So if everyone statically links against an insecure libA.a, then every app that depends on it needs to ship a patch; whereas if libA is shipped as a shared library, only one update needs to be applied. That's one advantage, though maybe not the best one.
What if we could have opportunistic dynamic linking, where an application uses a shared library if one is present and compatible, and otherwise falls back to a bundled copy?
Seems to get the best of both worlds, no?
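Proper opportunistic linking would need loader or dlopen-level support, but a crude approximation at the launcher level looks something like this (library and path names are illustrative, and only presence - not compatibility - is checked):

    #!/bin/sh
    # Prefer the system's copy of the library; fall back to the bundled one.
    here="$(dirname "$(readlink -f "$0")")"
    if ! ldconfig -p | grep -q 'libfoo\.so\.2'; then
        # No system copy found: point the loader at our bundled directory.
        export LD_LIBRARY_PATH="$here/bundled-libs${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
    fi
    exec "$here/bin/app" "$@"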
There's not even that much to TL;DR. Someone proposed a patch that would break binary compatibility. There was some argument about having to keep "30+ years of backwards compatibility."
Linus said the whole thing was dumb, the patch wasn't worth it, and that programs exist for users, so we can't just break things for the hell of it and expect people to use Linux.
In contrast, here is a thread about Debian, after much deliberation, unanimously choosing to break kernel ABI compatibility with VMware. They didn't want to increase the ABI number during a "freeze", even though they broke the ABI.
That's the difference between the user-space interface, and the kernel ABI. The Linux kernel ABI has no compatibility guarantees. If you are linking to the kernel, you are considered to be part of the kernel, and thus you are expected to keep abreast of any ABI changes (preferably by getting your module into the kernel tree, so that anyone changing the ABI can fix your code too).
The Debian kernel team tries to do a reasonable job of tracking kernel ABI compatibility changes, and updating the number when the ABI does change, to avoid having to recompile and reinstall everything for every patch to the stable tree. In this case, they decided that the ABI change was only intended for a single, in-tree module (KVM), and that they didn't have to increment the ABI number for a change that shouldn't affect anything else.
This is really just an example of the Linux kernel developers' two approaches to compatibility. For the kernel ABI, they make no guarantees whatsoever about compatibility. For the user space interface, they are supposed to never, ever change the interface in ways that will break existing programs, though there are sometimes disagreements about what precisely constitutes this interface and what is outside the bounds of it.
While I disagree, in the end, about breaking user applications, I can appreciate their situation. They have an upstream kernel from which they get bugfixes and patches, but which has no guarantees of stability. Any guarantees that the Debian team decides to make are their own responsibility, and could result in them maintaining multiple patches just to fix ever-more-diverging upstream releases.
Slightly off-topic, but often when reading these mailing lists I find some developers to be rather unhelpful, if not downright nasty.
Take this example. Now I realise this isn't a company-client relationship we're looking at, but am I alone in thinking a bit more diplomacy wouldn't have gone amiss?
I always thought Linus gave better than he got in that argument, myself. He was pretty brutal to Minix in a couple of places. BTW, love the reference to "BSD detox" somewhere in that thread; it always gives me a chuckle.
Yeah, GP seems to imply Linus lost that debate. Quite the contrary: Tanenbaum titled his post "LINUX is obsolete", so the argument was fundamentally about whether that title was right or wrong - and the last two decades of history have pretty clearly proven him wrong.
[Regarding Linux's x86 base]
What is going to happen is that they will gradually take over from the 80x86 line. ... I think it is a gross error to design an OS for any specific architecture, since that is not going to be around all that long.
That was when AT&T first entered into anti-trust restrictions as a result of its anticomptitive practices in the long-distance telephone market, resulting among other things a 1958 DoJ consent decree preventing the company from marketing computer systems, which meant that in 1969, when Ritchie and Thompson wrote UNIX while at AT&T the company couldn't sell the software, and gave it away (with love, from Ken), resulting in most effective development moving outside the company by the mid/late 1970s (notably to UC Berkeley and MIT), a fact ultimately recognized by AT&T when it sold UNIX to Novell, who transferred the official UNIX trade mark to The Open Group in 1994.
Well, the first guy is submitting a patch to the Linux kernel source that would make program binaries (executable files, sort of) compiled for an older version of the kernel break. This same guy looks like he is about to argue that it is not worth maintaining compatibility with 30-year-old binaries.
And then Linus (the guy behind Linux) goes on to explain that the OS should serve its users, and that keeping compatibility with existing programs, no matter how old they are, is of the utmost importance for users to be able to use the system.
I don't know if I helped, or if I addressed your doubts. I hope so :)
The full context of the quote provoking Linus's rant is extremely relevant here:
The current counting that we do gives the wrong numbers, in the
edge cases. To my knowledge a deleted sysfs directory has never
returned nlink == 0.
Keeping compatibility is easy enough that it looks like it is worth
doing, but maintaining 30+ years of backwards compatibility is what
nlink >1 in unix filesystem directories is. I don't see any practical
sense in keeping . and .. directories on disk or upping the unix
nlink directory count because of them. To me it looks like just one
of those things you do. Like hash directory entries so you can
have a big directory and still be able to have a 32bit offset you
can pass to lseek that is stable across renames and deletes.
To use PG's terms, Linus is arguing against this at DH0 or maybe DH1 here[1]. Maybe the above argument sucks, but Linus didn't refute it at all.
I disagree. He refuted the premise that breaking backward compatibility is OK under certain conditions, because "the only reason for an OS kernel existing in the first place is to serve user-space." That alone refutes the whole argument.
Pretty stupid stab, to be honest. Every major desktop/workstation operating system uses a microkernel or hybrid architecture (even NT, yeah, that's true; I know that Linus thinks hybrid is another word for macro-). And it is like this for a reason.
To the best of my knowledge, being a macrokernel is what makes Linux "huge and bloated", as someone once said.
NT puts graphics drivers written by third parties into Ring 0. That's definitely not a microkernel. As for whether or not hybrid is another word for macro, whatever. If that's a face-saving way for microkernel advocates to avoid admitting that their original idea was insane, I'm fine with that.
Linus's quote was "drug induced microkernel", however. It wasn't "drug induced microkernel or hybrid architecture" --- although if you run Windows or are forced by a family member to be a Windows support desk, you have my pity....
NT and OS X use hybrid kernels (as does Plan 9). Linus believes that hybrid is just another term for monolithic, but the rest of the world does not.
AFAIR microkernels are used by QNX and Minix. Monolithic kernels are used by Linux, *BSD (with the exception of DragonFly, which uses a hybrid kernel), Solaris, AIX(?) and more SysV descendants.
> NT and OS X use hybrid kernels (as does Plan 9). Linus believes that hybrid is just another term for monolithic, but the rest of the world does not.
Count me out of 'the rest of the world' then. Perhaps you can point me to what part of NT would make it a hybrid kernel as opposed to Linux. I've never seen any explanation of this.
No, it was an obvious (and in my opinion childish) stab at Minix/Tanenbaum (although Tanenbaum sure is just as childish), given that Tanenbaum is still saying that Linux and monolithic kernels are bad. Stallman, on the other hand, openly admitted his mistake in going with the Mach microkernel. The Linus/Tanenbaum rivalry is as alive as ever, judging by comments like this and the recent-ish interview with Tanenbaum: http://linuxfr.org/nodes/88229/comments/1291183 , where his comments on Linux and its success are very telling.
It sounded to me more like a simple reference to reinforce that this is not about theoretical stuff; it's about a piece of software actually being used all over the place.
Sometimes programs don't exist for their users. Take DRM for example - it is explicitly against the user.