Linux is Obsolete (1992) (groups.google.com)
117 points by tim_sw on June 12, 2018 | 168 comments



Though history seems to prove him wrong, Tanenbaum wasn't incorrect.

Linux to me is an example of the wrong thing at the right time. Linux embracing POSIX and fully supporting Intel gave programmers exactly what they wanted right at the nascency of the dotcom boom. Apache, MySQL, and PHP needed a home that wasn't Windows ... and Linux provided that home on the best dollar-for-dollar processors on the planet.

If Motorola or IBM had challenged Intel with cheap, better processors, then history might have looked different, because Linux wouldn't have been a solution on those platforms.

Linux works well enough I suppose, but the fact that we have this gigantic mess, where we run an OS, and then run a hypervisor on it, and then run more copies of an OS, and then finally get to run our application code, tells me something is quite wrong.

The fact that unikernels are even a thing tells me that people DO feel the pain of monokernels and yearn for something simpler. Securing a monokernel like Linux, even one so open and scrutinized, is really, really hard. There is just so much surface area in the kernel to attack. Everything is so heavyweight.

I think Tanenbaum was right, but he just wasn't enough of a hacker to get things done in time to catch the wave that the dotcom boom created.

To me the dream would be something like Google's new Fuchsia OS, where everything is capability-based, so security is built into the OS concept from the very beginning.


What is more interesting is that while Linux might be a mess, so is everything else. Neither Apple, Microsoft, nor IBM managed to put something significantly better out there at a price that was worth it.

One could consider OpenSolaris as something that could have filled that space in an alternative future.

Check out Redox OS; it has many cool concepts and is written in Rust.


In many ways Windows is better designed than Linux - although it has its own problems to weigh it down.


Windows has a lot of different designs. You can tell when certain APIs were introduced, for instance, by how they are structured. The design of the internals is not as clear. It's not open source.


> so is everything else.

Amen to that. Nature's biological organisms grow slowly, as does real knowledge. Poor nutritional choices lead to poor health. Natural ecology has adapted over 4 billion years. Hasty applications are wasty.


Many of the most successful organisms are bacteria, which grow at incredibly high rates and mutate and evolve quickly.


"Worse is better" meets "If you could have invented Facebook, you would have invented Facebook."

Linus was better off being "wrong" for 25 years than Tanenbaum and GNU Hurd were being "right" but never shipping anything of non-toy importance.


I always liked the aphorism "Better is the enemy of good".


"Perfect is the enemy of good (enough)"

https://en.wikipedia.org/wiki/Perfect_is_the_enemy_of_good


My favorite take on the old saw is "If my auntie had three axles, she'd be a semi-truck tractor".


I don't know. This seems more like post-rationalization than anything else. Note that nobody is calling Tanenbaum dumb or unintelligent; I think even Linus still highly respects his work. However, the claim that microkernels are the future has been near-omnipresent for a long time now.

It is interesting to think that they would be easier to secure. I'm not convinced we have any evidence of that. The best evidence we have is that if you are willing to cut off your past support, you can secure your future more easily. Supporting the past, though, has been a constant requirement of Linux the kernel.

So, I wish the Fuchsia folks well, but it is hard not to be doubtful on that front.


> and Linux provided that home on the best dollar-for-dollar processors on the planet.

FreeBSD was very competitive. Both were launched around the same time: the first version of Linux was released in 1991, 386BSD in 1992. They share a lot of features: an i386 target, POSIX compatibility. BSD pioneered some very important technologies: kqueue (epoll came a couple of years later in Linux), jails (Docker came a decade later), etc.

I still don’t understand why everyone uses Linux now, but BSD only stayed in very niche markets.


I don't remember where I first heard it, but the best formulation that I've heard over the years is that the BSDs are for people that love Unix; Linux is for people that hate Windows.

I honestly think there's a latent explosion of interest waiting for the BSDs in the near future. The sprint to bring a "modern UX" to the Linux ecosystem in the last decade has made it extremely easy for newcomers to adopt, but it has also made it nigh impossible to figure out what's actually going on in the background if you weren't along for the transitional periods. Whereas with the BSDs, the immediate learning curve is much higher, but at the end of that curve you have knowledge of a coherent system that obeys legible, documented rules, and the feeling, if not the reality, that you, the user, are in control of your computer.


The explanation I've always heard is that the AT&T lawsuit [1] kneecapped BSD just as the x86 BSD variants were getting started.

[1] https://en.m.wikipedia.org/wiki/UNIX_System_Laboratories,_In....


> I still don’t understand why everyone uses Linux now, but BSD only stayed in very niche markets.

Myself and others would argue it's because of the GPL. If you can't distribute a binary without releasing the source, more contributions are going to flow upstream.


> I still don’t understand why everyone uses Linux now, but BSD only stayed in very niche markets.

The AT&T and BSD lawsuit shut down BSD development for a year or so and is cited as a major reason why Linux became the dominant POSIX open-source operating system[1]. Marshall Kirk McKusick[2] says at the end of a talk that he asked Linus why he didn't use BSD, and his response was the uncertainty of the lawsuit[3].

[1]: https://en.wikipedia.org/wiki/UNIX_System_Laboratories,_Inc..... [2]: https://en.wikipedia.org/wiki/Marshall_Kirk_McKusick [3]: 47:30 https://www.youtube.com/watch?v=bVSXXeiFLgk


BSD was subject to legal uncertainty at the time due to its relationship with Unix. By the time those uncertainties were resolved, Linux already had momentum.


There was a lot of FUD in the BSD space back in the 90s. People were worried they might get sued out of existence. And the original version of FreeBSD was missing some critical things, because they had to get rid of a lot of proprietary code to avoid the wrath of AT&T and its successors.

Additionally, the FSF was in need of a kernel, and they chose Linux. There was a lot of evangelism involved. While BSD was recovering from its troubled past, Richard Stallman and Co. were pushing Linux to the rest of the world.

Even Apple considered using Linux as a base for what would become Mac OS X — it was called MkLinux.

By the time BSDs managed to get their act together, it was already too late — Linux had gained critical mass.


I started using Linux in '93. Why not BSD? Linux was more user-friendly at the time. Really. I installed BSD as well, but out of the box my up arrow did not even give me my previous commands. Also, Slackware was easier to install and gave more guidance. And BSD was for top experts only; info was harder to find.


What timeframe are we talking about here? Maybe I'm late to the party, but even as a "special interest" topic, Linux in the late 90s was already kind of well-known compared to the BSDs. (I was in school back then, no internet, and only learned about Linux from some folks already studying computer science.) And I'm just using that data point because when I came in contact with Linux there wasn't really much about FreeBSD, so either that was my bubble or it was technically competitive but had already lost the popularity contest.


> it was technically competitive but had already lost the popularity contest.

I think that's the case. It never gained traction outside of niche markets. Most ISPs in the late 90s were running a lot of BSD-based servers, and companies like Apple, Juniper, Sony, and Nintendo are still using the BSD kernel and/or other components in their products, but it was and still is a relatively little-known OS.

I can't say I'm happy about that. I'm currently working on embedded Linux-based firmware; I'd gladly switch to BSD. Maybe it's just my irrational fears, but I feel BSD is more reliable. However, that would require too much work to accomplish. I need stable and reliable GPU and WiFi drivers; the hardware vendors (ARM and Realtek, respectively) only ship drivers for Linux, and I don't think I'm competent enough in that area to attempt to port them; I never did substantial work on either Linux or BSD device drivers.


> I still don’t understand why everyone uses Linux now, but BSD only stayed in very niche markets.

People don't really use Linux; they use GNU. GNU utilities tend to be much more feature-rich than their Unix counterparts. GNU and Linux are so intertwined that it's easy to forget.


I could be wrong, but I think that GNU could run on top of BSD just as well as on top of Linux. If that's correct, then GNU doesn't explain why Linux beat BSD at all.


I don't do web stuff much at all. But when I think about it from an embedded viewpoint, what people do seems all wrong. You almost want the ability to fire off a thread in a language that does only immutable stack allocation. It's a hard constraint, but it'd be fast and easy to debug. Get rid of mutable data, get rid of persistent state and heap allocation.


"Get rid of mutable data" doesn't play well in embedded (at least for many applications). I have worked on an embedded system where the essence of the problem itself was one giant shared mutable state. (It was a video router for television stations.)


I spend a lot of time trying to contain state in event-driven software with callbacks; it's a big hassle.

My thought was that there are other cases where you are really just doing batch processing on blocks of data and then performing some action, without holding state internally between runs. That's the use case.

The thought was: if you have no mutable data and don't reuse stack frames, you can see exactly what happened if something goes wrong. And avoiding the heap means it's fast and deterministic.
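
A minimal sketch of that style in Rust (all names here are hypothetical; it assumes fixed-size input blocks and only shows the shape of the idea: immutable data, stack-only allocation, nothing retained between runs):

    #[derive(Clone, Copy)]
    struct Sample {
        raw: u16,
    }

    // A pure function over an immutable, fixed-size block. Everything
    // lives on the stack, so a crash dump shows the values that were
    // derived rather than a buffer mutated in place.
    fn process_block(input: [Sample; 64]) -> u32 {
        let scaled = input.map(|s| (s.raw as u32) * 3 / 2); // derive, don't mutate
        scaled.iter().sum()
    }

    fn main() {
        // A fresh block per run; no state survives between runs.
        let block = [Sample { raw: 100 }; 64];
        println!("checksum: {}", process_block(block));
    }

Rust doesn't enforce "no heap" by itself here, but sticking to Copy types and fixed-size arrays keeps everything on the stack, which is the property being argued for.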


You're leaving out the other big new trend of today, which is containers like Docker, which people now like to run applications from within.

You're right, something is quite wrong, but Minix isn't the answer either, as it doesn't provide the isolation that containers and VMs do.


I think the point of containers is to make the exact needs of your environment configurable and runnable in many different contexts.

That concept is very valuable, and in the future this configuration might be compiled to a unikernel that runs directly on the hypervisor. Or maybe into a process on a microkernel OS that runs isolated processes.


Containers that just do resource allocation and minimal isolation (masquerading as configuration management, somehow) neglect the power of the strong network and storage layers that set them apart from BSD jails (although perhaps only recently up to par with Solaris zones?) and that made them indispensable for systems administration at high velocity and scale. The fact that we can use pluggable SDN stacks or different filesystem drivers like ZFS and LVM instead of bare-bones overlayfs is important for an ecosystem.

In contrast, VMware, with its multi-year cycles to release pluggable storage and networking that were only available to a few already-entrenched vendors, stifled progress in the vSphere ecosystem. I knew it was over for VMware when most of the large-scale customers were simply running containers in ESXi, or, if they were on OpenStack, doing the same thing with an additional layer. In productive organizations, the needs of application developers always supersede the needs of those managing the applications in production, barring something non-negotiable like security / regulations.


For that, chroots (schroot) and tar.gz are almost sufficient.


Agreed. In addition to the factors you outlined, another reason behind the success of Linux in the enterprise is Java. Java essentially enables people to ignore the OS altogether. So in essence, Linux becomes a dumb platform which can, in theory, be replaced by anything else as long as it supports Java. Android is the same story. It may have Linux underpinnings, but that is out of convenience, not necessity. It has no bearing on the actual applications. Google could replace it with something else (Fuchsia if it succeeds, or anything else like BSD, or technically even Windows) any day with almost no effect.


Security is indeed hard. However, people building microkernels at that time were not doing it for security reasons, and it is not clear that the types of microkernels they built (e.g., Mach, with a huge Unix "personality") had any better security properties than typical kernels. So I think it's a bit generous to look back and say "yeah, Tanenbaum wasn't incorrect".

The hypervisor part of your argument is more reasonable: the amount of virtualization on a machine these days seems excessive, with each level solving a different problem and a cleaner solution waiting to burst forth. I suspect, however, with new containerization and other isolation technologies, that we'll come to an answer soon enough, and Linux will still be there underneath, monolithic and all.


> Linux wouldn't have been a solution on those platforms

And why is that? It takes a while for the most popular hardware to change, and Linux could easily track it.


> Apologies to ast, and thanks to John Nall for a friendy "that's not how it's done"-letter. I over-reacted, and am now composing a (much less acerbic) personal letter to ast. Hope nobody was turned away from linux due to it being (a) possibly obsolete (I still think that's not the case, although some of the criticisms are valid) and (b) written by a hothead :-)

> Linus "my first, and hopefully last flamefest" Torvalds

Comparing this to his more recent emails, it doesn't seem he has mellowed with age!


Yeah, I came here to post that "last flamefest" part. Oh Linus, it certainly won't be your last...


> "Be thankful you are not my student. You would not get a high grade for such a design :-)"

It's really amazing to read and see all sorts of good points from both sides, and know that if it weren't for the benefit of hindsight, I'm not really sure which side I would agree with.

But I really strongly get the feeling that Andy had a great opportunity thrown into his lap, and that by looking at MINIX as a "non-serious" hobby (as Linus implied) that he only did for fun and to unwind, and by not really listening to his user base (make it free, fix filesystem race conditions, add POSIX compatibility, etc.), he threw away the opportunity for it to grow. Which is fine if growth was never the goal.

And sure, a project owner doesn't have to say yes to every feature, and they actually probably shouldn't. But you have to let what you're creating evolve into something that meets needs, and turn into something different. Look at the first page of commits on Semantic-UI's repo. It was called Origami (then Shape) and was meant for doing 3D stuff in JS, before he renamed it to Semantic and turned it into a JS-based version of Bootstrap. Terrible example and doesn't really apply here but I remembered coming across that fascinating bunch of commits the other day and wanted to share it with someone.


I've always thought, however, that Torvalds was in fact Tanenbaum's student at the University of Helsinki... Well, if ast stated that Linus was not, then obviously he was not at all.


Tanenbaum has never taught at the University of Helsinki, so it would have been difficult for Torvalds to have been his student there...


Good point! "Andrew Stuart Tanenbaum (born March 16, 1944), sometimes referred to by the handle ast, is an American-Dutch computer scientist and professor emeritus of computer science at the Vrije Universiteit Amsterdam in the Netherlands."



In a way Tanenbaum won.

Microkernels are everywhere in high-integrity computing and run on the radios of mobile phones.

User-space drivers, containers, hypervisors, and Project Treble are all examples of trying to bend Linux into the same design.


They were both right, it's just dependent on the timeframe.

Monolithic kernels "won" because they made pragmatic design decisions based on the hardware and software available. Linux in particular worked well partly because it built on a 20-year-old OS design.

Microkernels have had another ~30 years to develop and they are now performance competitive with monolithic kernels [1]. There is still a large legacy investment in monolithic kernels, but as security becomes increasingly important ... it's hard to see monolithic kernels staying the dominant form of OS design in the future.

Tanenbaum is a computer scientist, Linus is a computer engineer.

[1]: https://os.inf.tu-dresden.de/pubs/sosp97/


The article you linked to is 21 years old. Microkernels are the fusion power of software: always 50 years in the future. They're great in theory; it's just that nobody has gotten them to work outside the lab.

As for security, yes, there are still vulnerabilities in the Linux kernel, and a microkernel might be helpful here, but it seems to me that the high-impact vulnerabilities we've seen in the past few years have been outside the kernel (Spectre/Meltdown and Rowhammer in hardware, and Heartbleed in userspace). It's not obvious that microkernels will give a big enough security benefit for the investment needed to catch the technology up to the state of the art.


Funny, because apparently they run on my phone's broadband radio chip, in many industrial automation control systems, medical devices, avionics, and cars.

It sounds pretty much outside the lab to me.


You (like Linus) are right in that microkernels don't provide much of a security benefit in-and-of themselves. But taking full advantage of other advances in information security research requires a new OS architecture.

The financial consequences of poor security are becoming more severe and the cost of developing high-assurance code is dropping rapidly [1]. If we haven't crossed the cost/benefit ratio yet, we are very close.

Microkernels are already the preferred architecture in niches that don't require legacy software. Monolithic kernels (and the Linux development process) can't catch up to the state-of-the-art.

[1]: https://microkerneldude.wordpress.com/2016/06/16/verified-so...


*software engineer, not computer engineer. Computer engineering focuses more on hardware and is a specialized field within electrical engineering.


Derp!


Although, I would be curious what Linus would do with an FPGA.


Tanenbaum made more predictions than "microkernels will win":

> 1. Microkernels are the future

> 2. x86 will die out and RISC architectures will dominate the market

> 3. (5 years from then) everyone will be running a free GNU OS

> The usual defense of the first prediction is to point to the growing use of virtualization hypervisors and claim that this counts. It doesn't; hypervisors are not microkernels. Neither are kernels with loadable device drivers and other sorts of modularity; well structured kernels are not anywhere near the same thing as microkernels. When Tanenbaum was talking about microkernels here, he really meant microkernels.

https://utcc.utoronto.ca/~cks/space/blog/tech/TanenbaumWrong

Trying to make him "Right" does great violence to his arguments, and we're better served by looking at history honestly, not through distorting lenses.

Another interesting post, about hypervisors versus microkernels:

https://utcc.utoronto.ca/~cks/space/blog/tech/HypervisorVsMi...

(Hypervisors are older than microkernels, and they're more practical, and they are, therefore, not the same thing.)


A question I'm not equipped to answer but that keeps coming up for me every time this conversation, or user-space drivers, comes up:

If we had microkernels, would containers be a thing right now? Or would we just have orchestration tools (e.g., Kubernetes)?

I'm old enough to recall when protected memory and pre-emptive multitasking were the new kernel services, and all of the literature for those greatly overlaps with what we use to sell containers now. It's like we never achieved the promise, and we are trying again instead of fixing it.


Containers would still be a thing, but their implementation would be much easier. What jails (a.k.a. containers) actually do is try to build a fence around a process so it looks as if it's isolated. With a microkernel, this would be much easier, since every component of the kernel is already isolated. The actual kernel would remain the same and wouldn't need any fences (it just routes between components), and the container would just launch new instances of the kernel modules it needs.


If we had better languages and tools for integrating the various parts of the tech stack, then perhaps we wouldn't need such monolithic constructions. See the Mirage unikernel project for an alternative to containers and virtual machines running full-blown operating systems.


> In a way Tanenbaum won.

Winning "in a way" should be taken as a signpost for a tremendous opportunity lost through bad outreach or PR.


Based on the most widely used computing device today — the smartphone — he was also right on the RISC architecture thing (seeing as ARM is a RISC-style arch), and that trend is starting to expand to the datacenter too.


Tanenbaum is criticising Linux for not being designed for portability, yet it was ported to the DEC Alpha and Sun SPARC (a RISC processor) as early as 1995. And (almost) all these smartphone RISC CPUs are indeed running a Linux kernel today.

As someone else remarked, this is a pretty clean-cut showcase of the friction between computer science and computer engineering. Tanenbaum is theoretically right, but Linus built a platform that powered a revolution.


With the help of Compaq, SGI, Cray, Oracle, IBM and Intel, among others.


For those that don't know: Tanenbaum wrote the book on operating systems[1]... well, a book. I think his OS MINIX[2] is in there. It's on every Intel CPU, but I think that was news to Tanenbaum.

This argument was at its root a "theory" vs. "get it done now" argument. I think that's why it resonates.

[1]https://www.amazon.com/Modern-Operating-Systems-Andrew-Tanen... [2]https://en.wikipedia.org/wiki/MINIX

This is a debate about how OSes should be structured. There is some merit to Tanenbaum's microkernel approach, but ultimately it's slower, and for an OS that matters; thus Linux being ascendant. Also, everywhere GCC was ported could compile and run Linux.

Microkernels "lost" but there was a microkernel called "Mach" worked on by Avie Tevanian who went to Next then to Apple. I think OS-X is kinda microkernel based (if they're still using XNU which is loosely based on the mach microkernel.).

https://en.wikipedia.org/wiki/XNU

I still miss HP-UX RTE (Real-Time Extensions) for PA-RISC: sending jobs to specific processors or processor sets, disabling interrupts...

If you like OSes, I like the case studies in the dinosaur book [4] (stupid textbook prices, though...).

[4] https://www.amazon.com/Operating-System-Concepts-Abraham-Sil...


AFAIK, macOS and Windows (and probably all modern kernels) use something halfway between microkernels and monoliths; I think they're called hybrid kernels. The reasoning being that there are benefits to both approaches, so why not both?


Because you might not like the associated drawbacks.

Linux (and the BSDs) have shown that a monolithic kernel can be modular and well-organized.

Mach’s descendants, L4, etc. have shown that if you want decent performance you either have to redefine “microkernel” or you have to put in an incredible amount of work to implement your simple design.


> This argument was at its root a "theory" vs. "get it done now" argument.

Because worse is better. https://www.jwz.org/doc/worse-is-better.html


Copy&paste the link. Do not click.


Link to a better source, not to someone who trolls HN.

https://en.wikipedia.org/wiki/Worse_is_better


I've reread parts of it (2nd edition, from 2001) on my vacation in the last few weeks, and at least in the first half of the book there's no mention of microkernels vs. Linux. On the other hand, several examples are given as "this is how Windows does it, this is how Linux does it", so there definitely don't seem to be any underhanded side-stabs in there.


So funny. Linus would probably be kicked out of Tanenbaum's OS class, and most of Andy's predictions were incorrect (still waiting for my GNU/Hurd on MIPS). The lesson, I guess, is not to get discouraged by the prevailing winds of academia; they have different objectives, and their processing of reality is on a more abstract level, meaning ideas might look better in theory but not so great in practice.


It is running on the radio chip of your mobile phone.

https://gdmissionsystems.com/products/secure-mobile/hypervis...


Which is mildly interesting, but a long way from "microkernels are where OSes are going in the next decade or two" (from 1992).

Microkernels were not a completely worthless idea, and they have appeared here and there. But in terms of OSes, they lost for the three decades after Tanenbaum's statement.


Netiquette[1] mentioned. I just re-read it after so many years. :) This makes me realize how old I've become (they made us read some pre-RFC versions before getting accounts on Sun workstations in college). Oh, how civilized online discussions used to be. And how the community self-policed, until it didn't[2].

Maybe FB, Reddit, and others should put their AI researchers to good use and make people and bots alike take a hard-to-cheat test on Netiquette before being allowed to post stuff that will be seen by others. And implement a one-month 'observation-only' period, as suggested by Netiquette[1]... ;-)

[1] https://tools.ietf.org/html/rfc1855 [2] https://en.wikipedia.org/wiki/Eternal_September


Tanenbaum reminds me of those "smart people" Art Williams was talking about in his 1987 "Just Do It" speech. Good to learn of another example of "more smart" not being "more successful".


> Tanenbaum reminds me of those "smart people" Art Williams was talking about in his 1987 "Just Do It" speech. Good to learn of another example of "more smart" not being "more successful".

There is essentially no definition of "unsuccessful" that applies to Andrew Tanenbaum's career. He spent the last few decades doing the sorts of things you'd expect a CS professor to do, and he did them really well. He conducted research projects, graduated PhDs, and the CS textbooks he authored are almost all extremely good.


He did by creating MINIX. His loss to Linux was not from an unwillingness to get his hands dirty and build something.


MINIX was an educational demo. He slapped a restrictive license on it even though he didn't make money from it, because his ego didn't want other people touching his baby. Then he never made time to build anything useful with it. Being right about something you aren't going to build is worse than being a little wrong about something you do build.


https://en.wikipedia.org/wiki/MINIX_3

> Its use in the Intel ME makes it the most widely used OS on Intel processors starting as of 2015, with more installations than Microsoft Windows, GNU/Linux, or macOS.

That's... not too shabby, if you ask me.


Did everybody else reading this just disregard other people's comments on that thread and skip straight to reading ast and Torvalds debating?


I really thought I was going to, but the other replies provide so much context, plus it's kind of like time traveling and being a fly on the wall in a conversation that took place over 25 years ago. I'm just imagining them all looking like the cast of Friends while sitting at their 15-inch 90s PC screens, typing these electronic mails in software I wouldn't recognize. And you have people chiming in, calling out argument flaws, and offering suggestions on new words for "make POSIX", like posixiate. Fun stuff! It's like the comment section of an HN post. Which is funny, because HN evolved out of mailing lists like that, and the culture did too.


Same feeling here. I usually skim these threads for the big names, but in this case there is so much historical context that it makes for a great read. It would have been wonderful to be a subscriber to this mailing list back in the 90s.


It was a newsgroup; Usenet was still a thing, and not just about binaries, back then.


Back then, nobody realized how bloated the Linux kernel was going to become, with way too much in kernel space. The kernel is somewhere above 15 million lines of code now.


It should be noted that in the nineties the Linux kernel was very far from being as modular as it is today. I don't know when dynamic module loading was introduced, but I recall that in the very early 2000s one had to recompile the kernel for nearly every new device added to the machine. Even when dynamic modules were available, not all devices could be managed as such, and sometimes the driver code had to be statically linked into the kernel anyway. Today things have changed: pretty much every new device install/insertion will have the kernel automatically load the relevant device driver, without any need to either insert a driver CD or go online to download it. Of all those millions of lines of code, only a very small part is actually executed.


According to the LKM Howto, support was added around 1.2 (1995).

http://tldp.org/HOWTO/Module-HOWTO/x73.html#AEN90

Around 1.2.8, H.J. Lu added ELF kernel module support:

http://tech-insider.org/linux/research/1995/0526.html

I remember a lot of this and feel old now.


Me too; my first Linux distribution was Slackware 2.0 with kernel 1.0.9, with initial support for IDE CD-ROM drives.


"Bloat" is a loaded term. It's meaningless to say "Look at all this code! It's bloated!" when you don't know the reason those lines are there, and comparing two "solutions" is meaningless when they don't solve all the same problems.

Mainly, I'm just getting tired of people trying to cut things out of the solution by cutting them out of the problem.

Saying that it's possible to solve a problem in a simpler fashion is fine. Saying a problem shouldn't be solved, that nobody should have that problem, so therefore we won't solve it, is not fine if you then turn around and compare your cut-down solution to the full solution.


I think AST did. IMHO, drivers belong in user space. There's an exponential number of devices out there; jamming them into the kernel has been a messy chore at best.


As someone completely alien to that side of computing, except for the rudimentary stuff most people know, I never understood why drivers have to be maintained in the kernel tree. Maybe include the most common stuff, but why not have a second, distinct linux-drivers project that maintains the rest on its own schedule?

Also, why don't we have a common interface for drivers, at least between Linux and BSD, by now? I don't like that there's animosity between the two ecosystems. That way we could have, if the technicalities permit, a way for many diverse OS models to enter the competition: if drivers are written against a certain standardised interface, one can code up an OS to support that interface, and, just as with POSIX, the OS would have a whole suite of drivers available from day one. Better for the driver authors too.


Linux kernel maintainers are actively hostile to the concept of a stable driver ABI. They believe that drivers are best maintained with the kernel so that they can be kept up and fixed.

Given that Linux has the broadest hardware support of any open source OS, and arguably any OS period, it's difficult to say they aren't right.


Which is exactly what Project Treble introduces: it converts Linux into a microkernel-style design, where each driver runs in its own process with a tiny kernel presence, communicating over Android RPC mechanisms.

https://source.android.com/devices/architecture/hidl/


Are Treble drivers compatible with Fuchsia? That'd be cool.


No idea, but I don't think so as they are completely different architectures.


Genode has an ABI for userspace (which includes OS components and drivers) that works across multiple kernels, which are mostly microkernels, but also Linux.

Considering Google designed both Treble and Fuchsia, it's not such a crazy idea.


I'm not saying they are explicitly wrong, but I don't think that's a useful metric. Adoption drives the drivers, not design. That's close to saying Linus was right because Linux is more used than Minix. As a fellow HNer has pointed out, companies are figuring out that depending on hardware vendors to do the right thing is a bad idea, hence Project Treble. Then again, I have a feeling it won't be too long until they scrap Linux altogether.


> That's close to saying Linus was right because Linux is more used than Minix.

Well, Linus chose usefulness as a higher priority than theoretically optimal, so by Linus's standards, yes, Linus was right.

> Then again, I have a feeling it won't be too long until they scrap Linux altogether.

Um... don't hold your breath. I think you're going to be waiting a long time for that one.


Sorry if my last point wasn't clear; I was speaking strictly of Google/Treble. Fuchsia seems to be coming right along; I can see them going live with it within a few years.


It sounds very interesting... I rarely go deeper than connected systems or application development myself. But I would love to see some better underpinnings with a flexible UI/UX on top. I think there's lots of room for both.


Nit: AIUI, the kernel has no stable API. It has no ABI period.


There has to be an ABI in order for loadable kernel modules to exist. That's the part that's infamously unstable.

API stability is not guaranteed, though it's stable enough that the likes of Nvidia can write installers for their proprietary drivers that recompile a bit of interface glue between kernel and blob.


> There has to be an ABI in order for loadable kernel modules to exist

Given a particular install, yes, but it will change independent of kernel version based on e.g. build flags and even conceivably compiler chosen, etc.

> it's stable enough that the likes of Nvidia can write installers for their proprietary drivers that recompile a bit of interface glue between kernel and blob

Well...they certainly can. It also can infuriate me beyond all reason...


The way I've understood it is that proprietary kernel modules contain an open-source part that is included in the kernel, which exposes a small stable API that is then used by the proprietary kernel modules.

Making a stable API that covers the entire kernel, on the other hand, would be a monumental task.


They are maintained in the kernel because the kernel interfaces are not stable, by choice. Here is the explanation from the kernel docs:

https://github.com/torvalds/linux/blob/master/Documentation/...


That's exactly the kind of dismissive, arrogant nonsense that I won't bother reading, even if it is the formula to an infinite life:

> You think you want a stable kernel interface, but you really do not, and you don't even know it.

...

> It's only the odd person who wants to write a kernel driver that ...

And:

> stable-api-nonsense.rst


Yeah, right: read the couple of lines that are intended as a one-line TL;DR and dismiss the whole thing as arrogant nonsense; you probably won't be writing drivers anyway.


The document itself seems to be a blunt dismissal anyway. And yes, I won't write any drivers if I can avoid it.


Because a kernel is the interface between applications and hardware resources. Modern microkernels are ~50% architecture-dependent code.

Creating a common interface would require all OS vendors to collaborate on some standard and maintain a leaky abstraction over their preferred abstraction. You can do that, but there are feature, performance, and maintenance compromises that must be made.


Keeping a common interface is hard.


Isn't it harder to keep rewriting stuff?


Apparently not ;)


I didn't search the web, but is there a way to have a kernel running, list what drivers it uses, and afterwards generate a configuration to recompile it with just those drivers (hence removing the bloat)? Or maybe that's not the way it goes, and Linux just loads the drivers it needs...


Yes, it's called modprobed-db. But it only tracks what modules are loaded, not what's used in the kernel itself. Although maybe you could compile everything you can as modules first, then check what's loaded, and then compile with all those modules built in. (The mainline kernel also has a make localmodconfig target, which generates a config enabling only the currently loaded modules.)


The full source is 15 million lines of code, and that includes support for like 10 different hardware architectures and device drivers for every piece of hardware that was ever made!

When you compile the Linux kernel you will compile only a small fraction of those lines of code, and distributions typically compile drivers and a lot of other things as modules, so you load them only when you need them. You can compile the kernel to fit in as little as a couple of MB of space, which is very useful for embedded systems (my OpenWrt router has 8 MB of internal flash memory and runs Linux 4.4!).
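
For illustration, a few of the real config switches involved in a build like that (a fragment only, nowhere near a complete configuration):

    CONFIG_MODULES=y        # let drivers be built as loadable modules
    CONFIG_EMBEDDED=y       # expose extra options for trimming the kernel
    # CONFIG_DEBUG_INFO is not set   (skip debug info to keep build artifacts small)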

And if we compare Linux with, for example, Windows, it's pretty small: the Windows kernel source is said to weigh a couple of gigabytes, and that's for a kernel that supports only one computer architecture (OK, now two with Windows for ARM) and doesn't include device drivers (besides the basic ones).


Over 20 million lines of code even. But you only run 1.5 to 3 million.


> Of course 5 years from now that will be different, but 5 years from now everyone will be running free GNU on their 200 MIPS, 64M SPARCstation-5.

This will always be my favourite quote from this exchange. Just imagine what desktop computing would look like today had that come to pass.


For those readers who are curious who Tannenbaum is: he wrote the (best, IMHO) operating systems book [0]. This book contained the complete source of an operating system, 'Minix'. At the same time, Torvalds wanted to build an OS. Did Torvalds use/steal or extend the Minix source? It is extremely controversial [1] among CS types.

https://www.amazon.com/Modern-Operating-Systems-Andrew-Tanen... [0]

http://wiki.c2.com/?MinixOperatingSystem [1]


Unless they are self-taught, I doubt there is anyone here who does not know who Tannenbaum is. Try going through a computer science curriculum without hearing of him.

Which reminded me again what a marvellous period we are still in with our discipline. There are still quite a few of the guys out there who "wrote the book".


Because you misspelled it twice, Tanenbaum's OS is called Minix.


Linus did read the book, which is how he got interested in OS development.

He also ran Minix to work on Linux until Linux got solid enough to dogfood.


Network effects don't only depend on quality. That is basically the summation of this whole debate.

If you had two teams, and both could start from zero and spend five years developing an operating system, most people would probably pick some version of a microkernel design.


Most people would pick the OS that was of a microkernel design? No way. Most people would pick whichever one was most solid and/or had the most features.


I mean for development.


Oh, I see. I will not argue against your position, now that I understand it.


The most insightful thing about this repost is that Google Groups is still around. I tried to find it recently and couldn't. Woohoo!


One thing that jumps out at me is how childish and rude Linus is in his response. I think at that time he was 23 years old, so an adult.

As a piece of advice to young people: try not to be rude, and accept criticism with grace, as the Internet has a long memory and you probably won't be respected enough for your achievements for rudeness to be tolerated the way it is with Linus.


I expect more from Tanenbaum. The initial email subject is hard to take as anything other than an insult, and statements like "you would get an F in my class" really only say "I have some authority, and if I can't make a coherent argument, I'll still lord it over you." I wonder how many students saw that exchange and thought "I really need to get into his class."

I think the discussion would have gone much smoother if the professor made some effort to get his message across without the unnecessary sniping. He could have simply started with “I understand Torvalds used my book, so I find it interesting—and maybe a little perplexing—that Linux has such a wildly different design from what I presented.”


Linus is still rude, or rather, blunt. To be honest, though, I prefer blunt over fluff or the politically correct way.

Way more informative.


Bluntness doesn't increase the information content of a message; it's just a way to force one's message through. Everything that can be said bluntly can be said politely, and usually to greater effect.

I'm always confused by people who defend Linus's style, because it isn't just blunt but often plain insulting for no apparent reason at all.

There are many problems with this. Apart from antagonizing people, as already mentioned, someone who is constantly screaming loses the ability to modulate how serious they actually are. You can rarely tell whether Linus is legitimately upset, because he seems almost always upset. It's better to default to politeness, if only because then, if you're actually upset, at least I know I need to pay attention.


Some people really appreciate the blunt communications style. Some people don't. I personally believe one should be able to appreciate all communications styles, because it provides you access to more information. Ignoring something smart because one got bad feelings from how it was said strikes me as rather puerile.


> I personally believe one should be able to appreciate all communications styles

Wut? Some communication styles are just "harassment", including Linus'. One can put up with it, but appreciation? BS. Linus is worse than Kalanick.


It is about culture. He comes from a background where people say what they think without trying to make it sound nice.

Many times I think to myself: this is just stupid. He says it out loud. That is the difference.

If one can see past the rude attitude, there is plenty to learn.


I like people like Linus who say directly and without filters what they think: if some work is crap, they say it; if someone needs to be insulted, they do it; and sometimes it's useful (look at NVIDIA, for example; since Torvalds' insult they have improved Linux support).

And this kind of person tends to be more appreciated than those who are politically correct; for example, just look at Trump.


Linus is rarely blunt (i.e., to-the-point and without sugar-coating) and almost always resorts to irrelevant personal attacks that add little to nothing to the discussion. It's rather unfortunate that so many people look up to him, despite his serious personality flaws. Perhaps it's because we all have this fantasy of being Dr. House, the brilliant jerk who's always right. Linus's rants typically contain only a sentence or two of actual information, with the majority of his email devoted to a level of emotional immaturity that would result in a fist-fight in the real world.

Edit: To cut him some slack, maybe there was some charm, years ago, in a community project being run like a typical vitriolic Usenet board (think an early-90s version of 4chan or Reddit). It's quite anachronistic today, in the context of what Linux now is, with professional developers.


> It's rather unfortunate that so many people look up to him, despite his serious personality flaws.

It's as unfortunate as you decide it to be. A lot of engineers and scientists love reading Linus (and watching House) because it reassures them that an argument's worth can transcend its delivery. The world is full of successful, socially competent salesmen. It's great that we have role models that aren't that.


This is a common sentiment, but I disagree.

In technical discussions, rudeness and informativeness are orthogonal. There is some information that is not polite to convey (such as information about someone else's personal/physical characteristics) but that information isn't relevant to technical discussions. Sometimes there is a choice as to how rudely one wants to convey a particular fact, but that is indeed a choice that reflects more upon one's desire for politeness or rudeness than it does any actual constraint on the transmission of factual information.


Rudeness is inferred or "felt" by the other party, and worrying about people's feelings or emotions is like worrying about which direction the wind is going to blow today, because emotions can be extremely irrational. Perceiving something as "rude" ultimately comes down to the individual being told the information interpreting it as rude. In this light, anything can be viewed as "rude" under the right microscope; in fact, Linus has brought up this very point multiple times, in a sort of "individual liberty" light. If you take feelings out of it and focus on the information -- Linus is either right or wrong. Something either sucks or it does not suck, for example. We can talk about how he might be wrong: maybe it doesn't suck, and here are points x, y, and z refuting his a, b, c on why foo system sucks.

You can certainly choose not to converse with Linus, just as I choose not to converse with overtly politically correct people who may take offense at, or report, every little thing. I personally don't have time to worry about whether what I said "might" come off a certain way, and really, I wouldn't want to be in that situation anyway; it is exhausting to always have to editorialize what you are going to say in today's hypersensitive, filter-bubble, one-worldview monocultures.


> Rudeness is inferred or "felt" by the other party

Not really. When you attack/insult someone instead of attacking their ideas, it's pretty much universally accepted that you're being an asshole. It's not a matter of debate, interpretation, feelings, or political correctness. In his first reply Linus is being an asshole, and there is no justification for that. Those who defend him should note that even he realized that and apologized.


> If you take feelings out of it and focus on the information -- Linus is either right or wrong

That's absolutely not true; in fact, the main issue here is that Linus is very much driven by feelings and not focused on information. I don't think anyone can seriously argue that adding this to a conversation adds any value:

“Who the f*ck does idiotic things like that? How did they noty die as babies, considering that they were likely too stupid to find a tit to suck on?“

“If you work in security, and think you have some morals, I think you might want to add the tag-line 'No, really, I'm not a whore. Pinky promise'“

And on and on


Eh, it's not that hard to be polite enough. If you focus on the facts without puffing them up with extraneous value judgments, there's no occasion for "really worrying".

Instead of saying "X sucks" or "Whoever thought of X deserves a time-traveller to retroactively induce their abortion", the discussion is better served by something like "X has problem Y; Z is better because of these benchmarks/tests/metrics", because then supporters of X have a concrete argument to be convinced by (rather than simply cowed/shamed by) or to argue against.


Without agreeing or disagreeing, I find it amusing that so many people are talking about this in the thread without any concrete examples. You could have one thing Linus said in mind, and others will have other things he's said.

It looks like you're all talking past one another.


> Rudeness is inferred or "felt" by the other party

This just isn't true at all. People can most definitely be intentionally rude to others.


Communication has always been a two-way street.


I definitely agree, but I don't think communication is complete on the internet. True communication only happens live (in my opinion, of course). When you communicate using only text, a lot is left out; therefore, what many people have done on the internet since the early days is discard any inference about mannerisms or "rudeness", avoid reading malice into sentences, and just focus on the content. I guess I learned to do this at an early age on the internet and maybe have "thicker skin", but I am just really surprised at how easily offended people can get at Linus (and in online conversations generally), when the medium itself is such a shell of actual communication.


It is possible to be informative and succinct without being an asshole - these are orthogonal to each other.


funfunfunfunction puts this particularly well: https://www.youtube.com/watch?v=YYzt71o2IvQ


It is quite possible to be blunt while still being polite and professional, Linus is rude as well as blunt, which is unnecessary.


Have you actually read some of his messages? He is mostly very nice and patient with new people, but he can be harsh with experienced people who, in his opinion, should know better. He also almost always attacks the idea, not the person, and explains his rationale. I'll take someone like him any time over a lot of corporate people who regularly go through their politeness rituals without ever making progress.

And I think the history of Linux proves him right. It's phenomenal that he has been able to keep it together and thriving for so long. That's a pretty rare achievement.


Being mostly very nice is irrelevant. Being an asshole a tiny fraction of the time is still unnecessary and counterproductive.


There's not just Blunt or PC, though. One can be polite and respectful and still get their point across very effectively.


I do too. However, it is a false dichotomy that it has to be one or the other.


But Linus is still respected. Maybe not by you and some others but by a lot of people. There's no such thing as unanimous respect.


I'm pretty sure people respect him despite being rude, not because of it.


> I'm pretty sure people respect him despite being rude, not because of it.

What you misrepresent as being "rude", which insinuates that it has no positive effect on the outcome, is actually being direct and to the point regarding technical decisions. People whining about rudeness to try to put aside the context, meaning, and results demonstrate that they either miss the point or intentionally run away from it. The fact is that his communication style is a reflection of his leadership strategy, which resulted in the world's most successful operating system project in history.

Trying to downplay the main causes and consequences because they feel their feelings might not be pampered is a tremendous injustice to the man and his achievements.


No. Some of it is just rude. No "being direct and to the point regarding technical decisions".

"How did they noty die as babies, considering that they were likely too stupid to find a tit to suck on?"


> No. Some of it is just rude.

If you make it your point to intentionally ignore context and cherry-pick the hell out of the man's decades-long archive of public emails... Well, I guess you can find something that twiddles your feelings.

But, again, you need to ignore the context, his work, his achievements, and everything he built, helped build, and continuously works on building.

But hey, let's focus on your feelings.


A lot of people appreciate that Linus is a role model for their own desire to be disrespectful to their peers.


Well... Tanenbaum is pretty brick-headed, too. "This is the one right way, and you aren't doing it that way, so you are by definition wrong" is not a good way to begin a conversation. Add in "I'm the authority", and you have someone who is completely indisposed to listen to anything said by someone holding another position.


Tanenbaum isn't much better; he uses his credentials and authority to argue (https://en.wikipedia.org/wiki/Argument_from_authority). Linus has nothing behind him, just his and the community's work, which is now my OS in production. It's very interesting to read the thread today.

Does anybody fly rockets with Minix at the end of the day?


To add to this: I have watched the industry over the last decade gradually become more conscious of, and less tolerant of, "accomplished" assholes. The behavior is still endemic, but it feels like we are on the verge of a major shift in attitude in the wake of #MeToo. Over the last year, companies, charities, clubs, and other organizations were finally willing (forced) to deal with toxic behavior, and they are finding that they can survive just fine without the supposed "rock stars" that were fired and the toxic environments they fostered.

So even if you are a Linus Torvalds, it may only be a matter of time before you find yourself sidelined. I will be surprised if Linus's career ends on his own terms.


You're leaving out a key point: Linux is not a company and Linus's "career" does not depend on a company. It depends on enough individual people choosing to work on Linux to keep it technically competitive. I have not seen any sign of Linux suffering technically because people who could make valuable contributions are not doing so.


Actually people in general could stop being so sensitive to everything that is a tiny bit different.


I think generally people are only sensitive to people who are not mutually respectful.


A lot of "sensitive" people are not very respectful of others. From my observation it's often a very asymmetric relationship.


I find sugar coating to be highly insulting. In my experience, such people are more often secretly disrespectful and less careful with the truth.


Note that I said nothing about sugar coating. Those were words you chose.

Edit: Interesting. I initially wrote, "Please note..." but then decided to be blunt and leave off the Please. I guess I was, um, too blunt.


Interesting. I made no accusation against you. Is your comment, in fact, completely straightforward?


> As a result of my occupation, I think I know a bit about where operating systems are going in the next decade or so.

> While I could go into a long story here about the relative merits of the two designs, suffice it to say that among the people who actually design operating systems, the debate is essentially over. Microkernels have won.

That's a theoretician confusing theory and practice.

And note that Tanenbaum isn't one of the "people who actually design operating systems". Tanenbaum designed toys to demonstrate concepts, not OSes for real-world use.

And was Tanenbaum right that NT was a microkernel design? I have this vague memory that the drivers were supposed to be in user space, but they moved at least the video drivers into the kernel for performance reasons.


Ah yes, this oldie of a newsgroup post from ast, the guy who wrote a bunch of mandatory CS textbooks and MINIX 2 (before the course moved to FreeBSD), which we had to hack on in ECS150. I'm sure he's crazy rich by now from the passive income.

A hypothetical kernel & userland like OpenBSD built on seL4 (a la DragonFly or MINIX 3), implemented in Rust, OCaml, or Haskell, with thorough unit and integration tests and perhaps formal proofs, seems like the safest, though not necessarily the fastest, kernel approach possible using modern technologies. seL4's communicating by sending messages is really efficient; it builds in IPC as a fundamental, low-latency operation.
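
Purely as a conceptual sketch of that message-passing model, in Rust, with std channels standing in for seL4 endpoints (the real seL4 API is C and capability-based; none of these names come from it):

    use std::sync::mpsc;
    use std::thread;

    // Every request to a "driver" arrives as a message on an endpoint,
    // the way a microkernel routes all interaction through IPC.
    enum Request {
        Read { block: u64, reply: mpsc::Sender<Vec<u8>> },
    }

    fn main() {
        let (endpoint, inbox) = mpsc::channel::<Request>();

        // The driver is just an isolated task running a message loop.
        thread::spawn(move || {
            for msg in inbox {
                match msg {
                    Request::Read { block: _block, reply } => {
                        // A real driver would read _block from hardware.
                        reply.send(vec![0u8; 512]).ok();
                    }
                }
            }
        });

        // A client performs a synchronous "call": send, then block on the reply.
        let (tx, rx) = mpsc::channel();
        endpoint.send(Request::Read { block: 7, reply: tx }).unwrap();
        println!("read {} bytes", rx.recv().unwrap().len());
    }

On seL4 the send-and-wait-for-reply pair is a single kernel operation, which is where the low latency comes from; the channels here only mimic the structure, not the performance.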


> A hypothetical kernel&userland like OpenBSD built on seL4 (a-la DragonFly or MINIX 3), implemented in Rust, OCaml or Haskell, with thorough unit, integration tests and perhaps formal proofs, seems like the safest, not necessarily the fastest, kernel approach possible using modern technologies.

Seems to me like an approach that would take even longer to implement than Hurd did.


If anybody can provide an alternative source for the text, thanks in advance. It requires a Google login to read.


You can open it in a private browsing window. It only requires login if you were previously logged in to a Google account, and that login has expired. Incredibly annoying.



Reading it now without login.




