Linus Torvalds: Programs exist for their users (lkml.org)
261 points by CJefferson on March 9, 2012 | 127 comments



  > Programs exist for their users
While this may be inferred from this discussion, this is not what Linus said in this thread. He actually said:

  > The *only* reason for an OS kernel existing in the
  > first place is to serve user-space.
This is a much narrower definition without the philosophical implications of the title.

Sometimes programs don't exist for their users. Take DRM for example - it is explicitly against the user.


The DRM user is the publisher, not the user of the program itself.


I find myself growing tired of our trend of redefining words to make our favorite clichés apply to our favorite companies.

Some examples:

"The customer is always right." -> "You're the product, not the customer."

"Software is made for users." -> "The publisher is the user, not you."

Sure, these things sound clever at first, but they only serve to increase our acceptance of these encroachments upon the rights of consumers and responsibilities of corporations by altering our vocabulary to fit these practices. Let's just stick with the traditional definitions (e.g. user: "the one sitting at the computer") and stop twisting words to accommodate things like DRM and privacy-invasive tracking.


I disagree with both your points.

This isn't a redefinition of words. It's a clarification of the true situation. Again: if you're not paying for the product, and somebody else is, then you're the product, and your use of a system or viewing content, etc., is an intentional objective of the payer. You'd be well advised to be aware of this, because you're being manipulated.

And the fact that someone else is paying for the system doesn't make it right. Just because I'm not paying for a DRM'd product (and very often you do pay) doesn't make it "right". In this case, usually, there are two goods in question: the information good, for which you are the customer, and the DRM "service" covering it, for which the publisher is the customer.

There are also goods for which there is no intrinsic monetary market, or for which the market is at best diffuse. Language is one such good (there are very few people whose paycheck is based on maintaining, debugging, and extending the English language, for example). Free Software is another, though there are both paid and unpaid contributors. And you might well ask what the objectives of those who are actively contributing are (propagandists and marketing types influence English, device manufacturers and standards promoters write significant amounts of Free Software).


I see "You're the product, not the customer." and "The publisher is the user, not you." as labels of warnings.

For me that isn't twisting words to accommodate things like DRM and privacy-invasive tracking. If you tried to mask "You're the product, not the customer." as "The customer is always right." and "The publisher is the user, not you." as "Software is made for users." you'd be twisting words.

If I'm the product or the publisher is the user, I'm very careful with what I share or do on the service. One of the many reasons why it wouldn't affect me much at all if Google disappeared tomorrow.


Personally I like the trend, though like others I don't call them "redefinitions", it's more of a context switch and applying the phrase in a way its author didn't see. I like it because it takes seemingly clever or seemingly deep or seemingly insightful sentiments like "Programs exist for their users" and reveals their content-free tautological nature. The more tautologies we find the more we can focus on eliminating those from the conversation and talking about things that matter instead. Of course, sometimes the context switch doesn't reveal tautologies but actually harmful sentiments. You don't want a sado-masochist believing in the Golden Rule as a moral guideline for instance.


I disagree. The Publisher in this case is the programmer - or at least distributor - of the program. Content consumers are the intended user.


I have to disagree with you there, clearly the user of DRM is the copyright holder. They are using the tool to perform their activity, that of "safely" leasing their content to consumers.

In the same way, the user of DirectX is the game publisher, not the consumer who buys the game.


I am my DVD player's user. The copyright holder does not come to my house to operate my DVD player. It will not allow me to play discs not encoded for my region. This program is not written for my preference or use, but in opposition to me. It also does not allow me to bypass certain segments of videos. Again - it opposes me.

If it was written for me - the user - it would allow me to bypass all content and jump directly to the first title of every disc I inserted. The producers of this software have explicitly made it hostile to the user. I'm not saying this is outside their rights - I'm just saying it is hostile.

But this is a rat hole. My point was that this is not what Linus was saying. He was merely describing the purpose of one software component. A kernel is a hardware abstraction - a platform for building applications without concern for the details of components - including the version of the kernel itself. He was arguing against breaking that contract by introducing a change which fit the whims of the contributor but would break an unknown number of binaries compiled against that contract.

tl;dr - "user" is not the same as "user-space"


DRM covers an information or entertainment product.

In many or most cases, the end-user is paying for this product.

The publisher is paying for, and benefitting from, the DRM capabilities or service, offered by the DRM system/software.

There are two markets and products at play here.


While I partly agree with your argument, I disagree with your conclusion.

Linus then goes on to say:

  Even when Linux was young, the whole and
  only point was to make a *usable* system.
Which I think is accurately summed up as:

  Linux exists to be used
Which is still suitably philosophical and is easily and acceptably expanded to:

  Programs exist to be used


An easy way to read the whole thread:

http://markmail.org/thread/wwi2aynfiliqanil

I don't understand why Mark Mail doesn't get more love. I wish they had better SEO or something; it's so much easier to read a long thread on their site than on most mailing list archives.


I don't like it because it seems to have no threaded view. I really like gmane much more for mailing list discussions - http://thread.gmane.org/gmane.linux.kernel/1245999/focus=126...


Yeah, that is nice. Either one is a vast improvement over old-style list archives.


mbox and your own damned client.


If you happen to already be subscribed to the mailing list in question, sure.


There are list archives which provide mbox format archives. Mailman for example.


Here's a link to the actual message referenced by the submission http://markmail.org/message/d4dfw53uqwhdj24i


They do not have bugtraq or full-disclosure?


Leadership: communicate a clear mission and inspire others to want to do it.

Linus has it.


I would add politeness. Not sure about Linus, though...


Leadership doesn't require politeness. Fairness is probably what you really want.


I think this is a good point. I don't want people to be needlessly polite, I want them to be truthful in a manner that is also respectful.


I agree, fairness over politeness. I would say honesty over politeness as well. That isn't to say you shouldn't be polite; just realize that sometimes your decisions are going to hurt people no matter how nicely you put it.


linus has charisma, which trumps politeness. people love him for his rudeness.


Linus has to his credit the fact that he's less of an a-hole than, say, Theo de Raadt.

Though I'm not so sure I want Nice Guys(tm) writing my software...


But, comparing anyone to de Raadt is easy :D


He's nicer than Jobs, at least.


Granted, Jobs was not always a nice person behind closed doors. However, at least during his Apple 2.0 days, he did NOT trash the people working for him in public the way Linus habitually does.

Another difference is that a lot of people working for Jobs (again, at least during the Apple 2.0 days) made f* you money for their efforts. There are engineers making a living working on Linux, but not on that scale, I think.


He also didn't praise people working for him in public. Or mention people working for him in public. At all. Are you sure you aren't mistaking silence for civility?


Steve Jobs regularly praised the people working for Apple, sometimes as a group, sometimes by name. Watch any keynote to see that.


Are any of these people working for Linus? Is he paying them? Are they not free to fork off their own kernel tree if they don't like the way he maintains his?

Jobs was running a company, and paying people directly for results..... (all other judgement aside)


Love his swagger!


...which explains why Linux has taken over the desktop market?

Seriously, Linux has had MAJOR advantages, people HATE M$ and the Mac has shown ZERO interest in taking over anything other than the elite market segment.

I tried using Linux for a while, and it is a major pain in the ass. I've been using a Mac for everything since 2009, and it is so much easier. I really miss some Windows apps, so I'm going to have to get another machine just to run those apps, and of course I will install Windows. But there are no killer apps for Linux on the desktop.

Running a server farm? Of course I'll use Linux. No point in using anything else. But on the desktop, what's the point?

If you use Linux, it's like tithing. It's free, but you have to give up 10% of your life just to get by. If you're a sysadmin, that IS your life, and you can kernel hack all day and night. This is why Linux controls the server market.

But Linus has no conception of what the average user wants and needs in an OS. For example: if I'm using Linux, I'd like to be able to seamlessly run lots of Windows software. Linux should have an executive suite for Wine.

The charitable interpretation of his antics is "Lilliputian victim". He sounds like a 13 year old.

On top of that, the vitriol is a waste of breath. The Mac broke binary compatibility twice - Motorola to PPC and PPC to Intel. There were VMs and fat-binary alternatives available for both changes, and Mac users and developers just rolled over.

Linus does some things very well, but I would not describe him as a model leader.


Linux is the kernel. Linus has pretty much nothing to do with the desktop or individual applications (save the occasional rant).

What you're saying is like judging the guy who built the road when it's the car that sucks.


Your comment is a bizarre and rambling non-sequitur. What on earth gave you the idea that winning the desktop battle is Linus' measure of success or leadership? Why does Apple breaking binary compatibility imply that there is no cost in doing so? How can you say Apple is only targeting elite segments when you look at products like the iPad that are selling in incredible numbers and causing low-end market leaders like HP to drop out of the market?

It seems like you have some bone to pick with Linus, because there isn't any real criticism here. Binary compatibility is a very nice feature for Linux users, you haven't written a word about why that is not so.


It's great that Linus is committed to supporting old binaries, too bad all that effort is in vain because of the glibc disaster.

Linux can run statically compiled binaries from 1993, but not the Firefox binaries from 2006.


Yeah, the gnu libc and libstdc++ projects have been particularly bad at maintaining ABI compatibility.


Wouldn't it be possible to still run them with some LD_xxx magic, where the C++ shared libs are put in some folder and loaded from there? (It's really a userspace problem.)


Yes - I've done that to produce more portable Linux binary installers where static binaries would have caused other problems - I tested it for backwards compatibility, but hopefully it will also improve forward compatibility.

One problem that you encounter if you do that is that glibc has an option to disable support for older kernels in exchange for better performance, so you lose backwards compatibility unless you are very careful about how you compile glibc (it fails with an error that the kernel is too old).
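
For reference, the option in question is glibc's --enable-kernel configure switch; a rough sketch of a build that keeps the old-kernel compatibility paths (the version numbers and paths here are just examples, not a recipe):

  # Hypothetical out-of-tree glibc build. --enable-kernel sets the oldest
  # kernel the resulting libc will still run on; a low value keeps the
  # compatibility code paths, a high one trades them away for speed.
  mkdir glibc-build && cd glibc-build
  ../glibc-2.15/configure --prefix=/opt/compat-glibc --enable-kernel=2.6.9
  make && make install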



Wait, it's back up. But yeah, this is a mirror.


In my experience binary compatibility on linux is a train wreck. Ever tried to get a binary compiled 5 years ago to run on a fresh install of linux?

Good luck.

If you still have the specific version of every shared library that it loaded you might be able to get it to work.

This isn't the fault of the kernel team though.

It is probably to be expected on platforms in which distributing source code is the default option, but it makes the platform very hostile to closed source applications, particularly games.


You're fumbling over the term "Linux". What you're describing is platform library incompatibility. What Linus is talking about is the kernel ABI. Not the same thing. The latter is a subset of the former, obviously, but Linus can't fix the fact that library authors don't care as much about the problem.

The kernel is doing its job. Userspace, not so much. Though it's not nearly as bad as you think. In general, any desktop application compiled in the last 5 years will run unmodified on any modern distro. Really, it will, and I'd challenge you to find a counterexample.

But I suspect you're talking about the dependency issue. Installing something with a bunch of dependencies (because modern software has a dependency graph that looks a lot like seaweed) requires finding and installing all that stuff on your modern distro. And package names have changed, and some have been dropped from the core distro, etc... And yes, this is a mess.

But seriously: if you have a binary that works alone on, say, Ubuntu Dapper, it almost certainly will run on Fedora 16 or RHEL 6.2.


Even if you were to statically compile a software build from 5 years ago, it may require newer kernel features such as inotify (instead of dnotify). At some point the application authors (correctly) made a decision to throw out the old and go with the new. This is not a kernel issue (because old systems such as dnotify are still supported). But from a user perspective, it looks like the kernel is to blame, even if this assumption is incorrect.


From my Fedora 16 laptop:

  $ fgrep DNOTIFY /boot/config-3.2.9-1.fc16.x86_64 
  CONFIG_DNOTIFY=y
Obviously it's true that dnotify is the old junk and inotify the new hotness. But it hasn't been abandoned, by either the kernel or the distros. Again, they take stuff like this very seriously. Old junk runs. Really, it does.


> This isn't the fault of the kernel team though.

It is the result of having no unified platform and library release coordination, no long term plans, nothing, just chaos. Everybody just releases whenever they are in the mood. And since nothing is complete when released, devs further down the chain always go for the latest and greatest to get additional functionality. And to upgrade app1, you have to upgrade lib1, which triggers updates of app2, app3, app4, lib3, etc. Sometimes you can't update a simple app without updating the whole desktop. The Linux dependency net appears nightmarish to everyone coming from Windows, where you have a reliable, stable base system which doesn't change for a decade and every app targets the same base system.

Linux, the kernel, is only running that great because it has a dictator. But Linus' dictatorship ends at the kernel borders; he has little influence outside. The Linux desktop also needs a dictator to massively slow down the rate of uncoordinated changes and force-stabilize the ecosystem. I hoped that Mark Shuttleworth could be that man, but then he introduced Unity... Still, even with Unity, Ubuntu is the Linux desktop's only chance of getting a single, well-defined set of libs attractive and influential enough that app devs can treat it as their primary target and safely build against the library versions in Ubuntu, instead of constantly chasing the latest and greatest versions from upstream.


> everyone coming from Windows, where you have a reliable, stable base system which doesn't change for a decade

Now you are being plain funny. Haven't you heard of "DLL Hell" on the Windows platform?

You never had 'windows update' break your software for no reason?


You are using this term incorrectly: the cause of DLL Hell is when one DLL (often a newer one) should be used, but another is used instead (even if it is older). The causes of this were varied.

On 16-bit Windows there was a single address space and only a single version of a DLL could be loaded by all processes: the first one to load would "win", and the others would get screwed with the old copy. This was due to 16-bit Windows not having memory protection: it was more of a GUI over DOS, and thereby had cooperative multi-tasking.

Even with that fixed, many developers would require a slightly newer version of a library, and rather than ask the user to upgrade their system (an irritating consequence of not having packages or dependencies; APT FTW ;P) would just include the DLL in their installer and unconditionally overwrite any existing copy.

After the dynamic loader started supporting "local" versions of libraries (installed to the same folder as the application), a similar problem happened with COM objects, which are centrally registered: someone would install their own version of a shared GUI component, register it with the shared name, overwriting possibly-installed newer copies.

Both of these problems were actually solved, but way too many developers simply gave up on Microsoft and Windows beforehand, and then refused to spend the time to learn about the improvements. In essence, Windows now has reasonable-ish package management, with dependencies: Windows Installer.

In addition to dependencies and versioning of packages (which can then be correctly tracked by Windows, much like APT on a Debian/Ubuntu box), Windows Installer supports the notion of "unified installer program" with "merge modules": in essence, you can include someone else's package inside of your package; that way the dependencies can correctly be checked, and old versions won't get overwritten.

There are still a few cases that are quite difficult to manage (involving libraries that require a modified ABI over time, but still receive updates), and Microsoft's solution to those is WinSxS. Honestly, while I'm much less familiar with it than on Unix, it only seems a constant multiple more crazy than .la files, which I believe solve a similar problem.

These technologies and improvements were all introduced at or before Windows XP, an operating system that was released just over a decade ago. Of course, these are all solutions that developers sometimes ignore, but if you download software for Linux that comes with a .sh installer that scribbles into /lib, you are in for similar "hell".


I've never had to upgrade Windows (including all installed apps) to be able to install some other random app.

On Linux, having to upgrade the distro (including getting a new desktop environment force-installed) to get a new version of any random app is established practice. Example: http://esr.ibiblio.org/?p=3822


If you claim that you "never had to upgrade Windows to be able to install some other random app" you probably haven't used Windows much. There are games that need a specific version of DirectX or newer. With the latest Visual Studio (11) you can't even produce, from C or C++ sources, a binary application that runs on any Windows XP. Any C/C++ application built with Visual Studio 2010 won't run on Windows 2000 or XP prior to SP3. We developers try to build applications that run on as many targets as possible, but even MSFT doesn't support us enough for that, seeing the older versions as competition to their newest "shiny thing." Which is not funny, considering the millions and millions of users still running Windows XP.

See the various opinions on MSFT intentionally removing the binary compatibility which already existed in their libraries here:

http://news.ycombinator.com/item?id=3648209


Do you happen to know how many users are running Windows < XP/SP3? For our install base (games company) it's less than 1%. Is there a compelling reason not to upgrade to SP3 if you're on XP?


> Any C/C++ application built with Visual Studio 2010 won't run on Windows 2000 or XP prior to SP3.

By default you can't, but you can adjust the build settings and it works fine.


Dependencies are bundled on Windows. It means installing App B doesn't affect your App A. It also means that security bugs must be fixed at the app level, not at the library or OS level.


Then you're lucky, because plenty of applications required you to install Service Pack 2 on XP before they would install; .NET applications often required newer versions, same with games and DirectX, etc.


Not entirely true. While Microsoft won't upgrade your Windows for free (creating the demand for support for ancient versions of the OS - things that will run on XP) they will make software dependent on service packs and fixes. It's not a new version with new functionality, but it's an upgrade nonetheless.


Not sure why this was downvoted so much; there is an element of truth to it.

You are dealing with different distributions that provide different versions of core libraries, use different sound architectures and package managers, and put things like executables and config files into different parts of the filesystem.

I thought that the Linux standard base would be the way to solve this.


Probably because though there is an element of truth in it, it just sounds plain wrong.

After two sentences I thought "he probably works for Microsoft or another big corporation" - not because it's a flame, but because of the attitude of regarding this as pure chaos (and regarding the lack of a big plan as bad). I even looked up the profile.

There is a kind of release plan - not for the whole ecosystem, but that's what stable distros are for. So the remark about Ubuntu is right. But it's wrong to mix library stability with perceived frontend issues with Unity; Ubuntu still fulfills that role for some apps. And besides, it isn't necessarily wrong to write new programs against new libraries. They have new features and new bugfixes.

I wouldn't want the ecosystem to stagnate right now. Or ever.


> not for the whole ecosystem, but that's what stable distros are for

The problem is that app devs don't target stable distros; they always go for the latest and greatest from upstream, so distros are constantly forced to update libs and change the base system.

So with a stable distro, you can't get a new version of an app, because the libs of your distro are too old, and you need to upgrade the whole distro just to be able to get that new app you want.

The Linux ecosystem needs one distro (say Ubuntu) to become so influential that app devs start to primarily target it instead of upstream. Only then will the library space stop being a moving target for end users, and only then will it be possible to upgrade app1 without triggering an automatic update of app2 when both happen to depend on the same lib. Only then will these useless practices of "packaging" and "backporting" finally stop, and devs will simply make packages themselves, like they do on Windows or OS X. Only then will users be able to install a distro once and then install new apps for 5-10 years without having to upgrade the whole distro every 6 months.

> I wouldn't want the ecosystem to stagnate right now. Or ever.

But with an ecosystem as unstable as the current one you won't get more than 1% of the market right now. Or ever.

Normal users and especially businesses simply don't want to constantly update their systems. Force them to do that, and they simply walk away.


> The Linux ecosystem needs one distro (say Ubuntu) to become so influential that app devs start to primarily target it instead of upstream. Only then will the library space stop being a moving target for end users, and only then will it be possible to upgrade app1 without triggering an automatic update of app2 when both happen to depend on the same lib. Only then will these useless practices of "packaging" and "backporting" finally stop, and devs will simply make packages themselves, like they do on Windows or OS X. Only then will users be able to install a distro once and then install new apps for 5-10 years without having to upgrade the whole distro every 6 months.

What happens in Windows is not what you're describing; developers are simply forced to distribute their own copies of the libraries (as DLLs or statically compiled) since there is no package manager. What then happens is that there are dozens of copies of the same libraries, most of them lacking bugfixes and even security patches.

The dependency system used by Linux distros may have its problems, but it surely beats ad-hoc dependency management, even if it requires backporting.

> Normal users and especially businesses simply don't want to constantly update their systems. Force them to do that, and they simply walk away.

Right. If you use Windows, have you tried counting the number of update managers running in the background, the number of applications that ask you on launch to "verify updates", the number of times Windows Update alerts you, etc?

Windows machines are constantly updating. Unlike Ubuntu, they just do it incrementally instead of once every six months (except for security patches). But that's better fixed by moving to a rolling release scheme.


I would rather ship an extra copy of all of the libraries than tell the user that he has to update libc6 (and therefore almost all of the other software he uses) to run my software.


Nothing in Linux prevents you from doing so. Just ship your application with it and use a one line shell script to run it with the appropriate LD_LIBRARY_PATH.
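
A minimal sketch of such a wrapper, assuming the bundled libraries sit in a lib/ directory next to a hypothetical bin/myapp:

  #!/bin/sh
  # run.sh - point the dynamic linker at the bundled copies of the shared
  # libraries instead of whatever the distro happens to ship.
  HERE=$(dirname "$(readlink -f "$0")")
  LD_LIBRARY_PATH="$HERE/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
  export LD_LIBRARY_PATH
  exec "$HERE/bin/myapp" "$@"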

It's just not commonly done (with exceptions like http://sta.li/), and as a user, I'm thankful for that.


To an extent, they already do.

The bigger development teams don't focus on a single distro, as their development base is probably diverse enough to demand a certain degree of platform compatibility.

However, many of the smaller development teams will focus almost exclusively on Ubuntu, as it is the distro they will most likely have.

I know, from my own limited experience, that I have only ever attempted to make my systems work on Ubuntu and just allowed others to push their changes into the main development if they have a specific platform that they prefer to use.


I wouldn't say that's true. LedgerSMB 1.3, when it shipped last year, supported PostgreSQL 8.3 and higher, Perl 5.8, etc... 1.4 will probably require Perl 5.10 and PostgreSQL 8.4... We specifically target older versions to make adoption easy.

In fact, usually when we run into problems, it's a new version, not an old one.


> The Linux ecosystem needs one distro (say Ubuntu) to become so influential that app devs start to primarily target it instead of upstream.

This used to be RedHat: software for Linux would often come as 1) source code (if open), 2) a crazy .sh install script, or 3) an RPM package for RedHat 5.


It is now Ubuntu. I think damn near all Linux software I see now has an Ubuntu .deb available.


Apart from Oracle Java it seems.


There is binary compatibility for the kernel. Typically it's just the user-space dynamic libraries (in Windows speak: DLLs not belonging to the kernel) that you miss if you have a 5 year old binary and it won't run. Linus doesn't control user-space libs, just the kernel.

See how jwz shows you how to run a Netscape binary from 1995 (17 years old) on the latest Linuxen:

http://www.jwz.org/blog/2008/03/happy-run-some-old-web-brows...

Your major problem if you want to make a closed source application is that the user space dynamic libraries are mostly (L)GPL licensed, so you have to avoid (L)GPL libraries to have a legally fully-library-independent closed source application. The LGPL allows closed source linking only if you link dynamically to the LGPL library (meaning: your application must depend on the externally present library, which can be replaced at any time). Which is really fair, IMHO.

http://en.wikipedia.org/wiki/GNU_Lesser_General_Public_Licen...


If you want to be sure it'll work, you compile it statically, and waste a little bit of disk space and memory.
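
For a plain C program that amounts to something like this (gcc assumed; main.c and myapp are placeholders):

  # Pull every library into the binary at link time; no shared-library
  # lookups happen when it runs.
  gcc -static -O2 -o myapp main.c
  ldd myapp   # should report "not a dynamic executable"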


Trading away security/bug-fix updates in the process.



Well, then, perhaps I am just lucky, but just a few weeks ago I was able to get a fifteen year old binary running under both CentOS 6 and Fedora 15.

I did have to fish around a bit for an old version of curses and create a symlink. It only took a few moments though.

Just a data point to consider.


If we used static compilation for everything, this wouldn't be a problem. Our problem would then be higher memory consumption, but I can fix that for $50 - at least until the next Ubuntu release comes out.

Speaking of that, I wonder if there's some point at which dynamic libraries don't really help you that much. These days, even shared libraries don't seem to stop a GNOME desktop running a browser from sucking down a gig of memory; would it really kill them to have a couple more megs of static library in each program? With copy-on-write, I think things like Chrome which spawn many children would come out pretty well.


One advantage to shared libs/DLLs is that a security fix will be applied to all applications that depend on them. So if everyone statically links against insecure libA.a, then all apps that depend on it need to ship a patch, whereas if libA is shipped as a shared library then one update needs to be applied. That's one advantage, but maybe not the best.


What if we could have opportunistic dynamic linking, where an application uses a shared library if present and compatible, and otherwise uses a bundled copy? Seems to get the best of both worlds, no?
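
Nothing stops a launcher script from approximating that today. A sketch of the idea (library and binary names are made up, and this only checks presence, not ABI compatibility):

  #!/bin/sh
  # Prefer the system copy of libfoo; fall back to a bundled one only if
  # the dynamic linker's cache doesn't know about it.
  HERE=$(dirname "$(readlink -f "$0")")
  if ! ldconfig -p | grep -q 'libfoo\.so\.1'; then
    LD_LIBRARY_PATH="$HERE/bundled-libs${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
    export LD_LIBRARY_PATH
  fi
  exec "$HERE/myapp" "$@"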


Using shared libraries also decreases application startup time, because applications don't need to read as much code from disk.

Even using an SSD I suspect the difference would be noticeable for most applications.


What program from five years ago fails to run on debian/stable?


I apologize for asking for specifics...


> Ever tried to get a binary compiled 5 years ago to run on a fresh install of linux?

I run older binaries on Linux all the time.


It makes it hostile to abandonware.


Hostile to people who use their machines for Real Work.


The site is down, the content is (at least for me) not available in google cache, so I suppose it's the same for others as well.

And yet it's the top story on HN right now. When do we start giving tl;dr's for the headlines?

[EDIT] managed to find a mirror. This mail in the thread gives a bit more context: http://lkml.indiana.edu/hypermail/linux/kernel/1203.1/00446....


There's not even that much to TL;DR. Someone proposed a patch that would break binary compatibility. There was some argument about having to keep "30+ years of backwards compatibility."

Linus said the whole thing was dumb, the patch wasn't worth it, and that programs exist for users, so we can't just break things for the hell of it and expect people to use Linux.


You can enable showdead; bravura posted the whole content, but the comment is dead. Not sure why.

edit: also ttt_ and father posted it and got auto killed.


Up here (and since you posted 0 minutes ago, I imagine a refresh or two would fix it for you)


Linus is very consistent when it comes to these things. A similar thread from a while back: http://news.ycombinator.com/item?id=2372096


In contrast, here is a thread about Debian, after much deliberation, unanimously choosing to break kernel ABI compatibility with VMware. They didn't want to increase the ABI number during a "freeze", even though they broke the ABI.

http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=607368


That's the difference between the user-space interface, and the kernel ABI. The Linux kernel ABI has no compatibility guarantees. If you are linking to the kernel, you are considered to be part of the kernel, and thus you are expected to keep abreast of any ABI changes (preferably by getting your module into the kernel tree, so that anyone changing the ABI can fix your code too).

The Debian kernel team tries to do a reasonable job of tracking kernel ABI compatibility changes, and updating the number when the ABI does change, to avoid having to recompile and reinstall everything for every patch to the stable tree. In this case, they decided that the ABI change was only intended for a single, in-tree module (KVM), and that they didn't have to increment the ABI number for a change that shouldn't affect anything else.

This is really just an example of the Linux kernel developers' two approaches to compatibility. For the kernel ABI, they make no guarantees whatsoever about compatibility. For the user space interface, they are supposed to never, ever change the interface in ways that will break existing programs, though there are sometimes disagreements about what precisely constitutes this interface and what is outside the bounds of it.


While I disagree, in the end, about breaking user applications, I can appreciate their situation. They have an upstream kernel from which they get bugfixes and patches, but which has no guarantees of stability. Any guarantees that the Debian team decides to make are their own responsibility, and could result in them maintaining multiple patches just to fix ever-more-diverging upstream releases.


Slightly off-topic, but often when reading these mailing lists I find some developers to be rather unhelpful, if not downright nasty.

Take this example. Now I realise this isn't a company-client relationship we're looking at, but am I alone in thinking a bit more diplomacy wouldn't have gone amiss?


I got a good laugh from this quote: "[Linux] is not some crazy drug-induced microkernel"


That sentence is a betrayal of UNIX's history:

There are two major products that came out of Berkeley: LSD and UNIX. We don't believe this to be a coincidence. (Jeremy S. Anderson)


I think it rather refers to MINIX.


More specifically, it most likely refers to this discussion:

http://groups.google.com/group/comp.os.minix/browse_frm/thre...

Linus was a kid in 1992, and got scolded by one of the greatest icons of the field. That could still sting 20 years later.


I always thought Linus gave better than he got in that argument, myself. He was pretty brutal to Minix in a couple of places. BTW. love the reference to "BSD detox" somewhere in that thread, it always gives me a chuckle.


Yeah, GP seems to imply Linus lost that debate. Quite the contrary: Tanenbaum titled his post "LINUX is obsolete", so the argument was fundamentally about whether he was right about his title or whether he was wrong - and the last two decades of history have pretty clearly proven him wrong.


[Regarding Linux's x86 base] What is going to happen is that they will gradually take over from the 80x86 line. ... I think it is a gross error to design an OS for any specific architecture, since that is not going to be around all that long.

Ouch, history hurts.


Yes, it refers to MINIX.

However, it is a betrayal of UNIX's history.


More likely HURD


No, it was Tanenbaum and Minix.


When did Unix stop being a product of Bell Labs?


In 1916.

That was when AT&T first entered into anti-trust restrictions as a result of its anticompetitive practices in the long-distance telephone market, resulting among other things in a 1958 DoJ consent decree preventing the company from marketing computer systems, which meant that in 1969, when Ritchie and Thompson wrote UNIX while at AT&T, the company couldn't sell the software and gave it away (with love, from Ken), resulting in most effective development moving outside the company by the mid/late 1970s (notably to UC Berkeley and MIT), a fact ultimately recognized by AT&T when it sold UNIX to Novell, who transferred the official UNIX trademark to The Open Group in 1994.


Linux is not Unix?


No, Unix is a trademark. You can call your unix-like OS Unix only if you conform to SUS or something like that.

GNU is definitely not Unix.


One wonders why they left out Calvinism and the transistor...


All these years later, the flame goes on :p


It's like reading a Plato book where Socrates is bullying the guests :-)



Can anybody explain what this is approximately about?


Well, the first guy is submitting a patch to the Linux kernel source that would break program binaries (executable files, sort of) compiled against an older version of the kernel. This same guy looks like he is about to argue that it is not worth maintaining compatibility with 30 year old binaries.

And then Linus (the guy behind Linux) goes on to explain that the OS should serve its users, and that keeping compatibility with existing programs, no matter how old they are, is of utmost importance for users to be able to use that system.

I don't know if I helped, or if I addressed your doubts. I hope so :)


The full context of the quote provoking Linus's rant is extremely relevant here:

  The current counting that we do gives the wrong numbers, in the
  edge cases.  To my knowledge a deleted sysfs directory has never
  returned nlink == 0.

  Keeping compatibility is easy enough that it looks like it is worth
  doing, but maintaining 30+ years of backwards compatibility is what
  nlink >1 in unix filesystem directories is.  I don't see any practical
  sense in keeping . and .. directories on disk or upping the unix
  nlink directory count because of them.  To me it looks like just one
  of those things you do.  Like hash directory entries so you can
  have a big directory and still be able to have a 32bit offset you
  can pass to lseek that is stable across renames and deletes.
To use PG's terms, Linus is arguing against this at DH0 or maybe DH1 here[1]. Maybe the above argument sucks, but Linus didn't refute it at all.

[1] http://www.paulgraham.com/disagree.html


I disagree. He refuted the premise that breaking backward compatibility is OK under certain conditions, because "the only reason for an OS kernel existing in the first place is to serve user-space." That alone refutes the whole argument.


Come on people the future is node.js and MongoDB. WE DON'T NEED KERNELS!!! Async programming IS JUST SO MUCH FASTER!!!


This is straight out of the original Tron!


I reckon that "drug induced microkernel" was a stab at Tanenbaum :-)


Pretty stupid stab, to be honest. Every major desktop/workstation operating system uses a microkernel or hybrid architecture (even NT, yeah, that's true; I know that Linus thinks hybrid is another word for macro-). And it is like this for a reason. To my best knowledge, being a macrokernel is what makes Linux "huge and bloated", as someone once said.


NT puts third-party written graphics drivers into Ring 0. That's definitely not a microkernel. As for whether or not hybrid is another word for Macro, whatever. If that's a face-saving way for microkernel advocates to avoid admitting that their original idea was insane, I'm fine with that.

Linus's quote was "drug induced microkernel", however. It wasn't "drug induced microkernel or hybrid architecture" --- although if you run Windows or are forced by a family member to be a Windows support desk, you have my pity....


> forced by a family member to be a Windows support desk, you have my pity....

Thank you - it's nice to know there are people out there thinking about us :(


http://en.wikipedia.org/wiki/Microkernel search Windows or NT. OS X, Linux and NT don't use microkernels.


NT and OS X use hybrid kernels (as does Plan9). Linus believes that hybrid is another term for monolithic, but the rest of the world does not.

AFAIR microkernels are used by QNX and Minix. Monolithic kernels are used by Linux, *BSD (with an exception for Dragonfly, which uses a hybrid kernel), Solaris, AIX(?) and more SysV descendants.


> NT and OS X use hybrid kernels (as does Plan9). Linus believes that hybrid is another term for monolithic, but the rest of the world does not.

Count me out of "the rest of the world" then. Perhaps you can point me to what part of NT would make it a hybrid kernel as opposed to Linux. I've never seen any explanation of this.


http://en.wikipedia.org/wiki/Hybrid_kernel No, Windows and OS X use hybrid kernels.


It could be the Hurd project. He does have that semi-famous "just say NO TO DRUGS, and maybe you won't end up like the Hurd people" quote.


No, it was an obvious (and in my opinion childish) stab at Minix/Tanenbaum (although Tanenbaum sure is just as childish), given that Tanenbaum is still saying that Linux and monolithic kernels are bad. Stallman, on the other hand, openly admitted his mistake in going with the Mach microkernel. The Linus/Tanenbaum animosity is as alive as ever, judging by comments like this and the recent-ish interview with Tanenbaum: http://linuxfr.org/nodes/88229/comments/1291183 where his comments on Linux and its success are very telling.


It sounded to me more like a simple reference to re-enforce that this is not about theoretical stuff, it's about a piece of software actually being used all over the place.


that guy is such a turd


<3



