Linux to me is an example of the wrong thing at the right time. Linux embracing POSIX and fully supporting Intel gave programmers exactly what they wanted right at the nascency of the dotcom boom. Apache, MySQL and PHP needed a home that wasn't Windows ... and Linux provided that home on the best dollar-for-dollar processors on the planet.
If Motorola or IBM had challenged Intel with cheap, better processors then history might have looked different because Linux wouldn't have been a solution on those platforms.
Linux works well enough I suppose, but the fact that we have this gigantic mess, where we run an OS, and then run a hypervisor on it, and then run more copies of an OS, and then finally get to run our application code, tells me something is quite wrong.
The fact that unikernels are even a thing tells me that people DO feel the pain of monokernels and yearn for something simpler. Securing a monokernel like Linux, even one so open and scrutinized, is really, really hard. There is just so much surface area in the kernel to attack. Everything is so heavyweight.
I think Tanenbaum was right, but he just wasn't enough of a hacker to get things done in time to catch the wave that the dotcom boom created.
To me the dream would be something like Google's new Fuchsia OS, where everything is capability-based, so security is built into the OS concept from the very beginning.
One could consider OpenSolaris as something that could have filled that space in an alternative future.
Check out Redox OS; it has many cool concepts and is written in Rust.
Amen to that. Nature's biological organisms grow slowly, as does real knowledge. Poor nutritional choices lead to poor health. Natural ecology has adapted over 4 billion years. Hasty applications are wasty.
Linus was better being "wrong" for 25 years than Tanenbaum and GNU Hurd was being "right" but never shipping anything of non-toy importance.
It is interesting to think that they would be easier to secure. I'm not convinced we have any evidence of that. The best evidence we have is that if you are willing to cut off your past support, you can secure your future more easily. Supporting the past, though, has been a constant requirement of Linux the kernel.
So, I wish the Fuchsia folks well, but it is hard not to be doubtful on that front.
FreeBSD was very competitive. They were both launched around the same time: the first version of Linux was released in 1991, 386BSD in 1992. They share a lot of features: the i386 target, POSIX compatibility. BSD pioneered some very important technologies: kqueue (epoll came a couple of years later in Linux), jail (Docker was a decade later), etc.
I still don’t understand why everyone uses Linux now, but BSD only stayed in very niche markets.
I honestly think there's a latent explosion of interest waiting for the BSDs in the near future. The sprint to bring a "modern UX" to the Linux ecosystem in the last decade has made it extremely easy for newcomers to adopt, but has also made it nigh impossible to figure out what's actually going on in the background if you weren't along for the transitional periods. Whereas with the BSDs, the immediate learning curve is much higher, but at the end of that curve, you have knowledge of a coherent system that obeys legible and documented rules and the feeling, if not the reality, that you, the user, are in control of your computer.
Myself and others would argue it's because of the GPL. If you can't distribute a binary without releasing the source, more contributions are going to flow upstream.
The AT&T and BSD lawsuit shut down BSD development for a year or so and is cited as a major reason why Linux became the dominant open-source POSIX operating system. Marshall Kirk McKusick says at the end of a talk that he asked Linus why he didn't use BSD, and his response was the uncertainty of the lawsuit.
At 47:30: https://www.youtube.com/watch?v=bVSXXeiFLgk
Additionally, the FSF was in need of a kernel, and they chose Linux. There was a lot of evangelism involved. While BSD was recovering from its troubled past, Richard Stallman and Co. were pushing Linux to the rest of the world.
Even Apple considered using Linux as a base for what would become Mac OS X — it was called MkLinux.
By the time BSDs managed to get their act together, it was already too late — Linux had gained critical mass.
People don't really use Linux, they use GNU. GNU utilities tend to be much more feature rich than their Unix counterparts. GNU and Linux are so intertwined it's easy to forget.
I think that’s the case. It never gained traction outside of niche markets. Most ISPs in the late '90s were running a lot of BSD-based servers, and companies like Apple, Juniper, Sony, and Nintendo are still using the BSD kernel and/or other components in their products, but it was and still is a relatively little-known OS.
I can’t say I’m happy about that. I’m currently working on embedded Linux-based firmware, and I’d gladly switch to BSD. Maybe it’s just my irrational fears, but I feel BSD is more reliable. However, that would require too much work to accomplish. I need stable and reliable GPU and WiFi drivers; hardware vendors (ARM and Realtek, respectively) only ship drivers for Linux, and I don’t think I’m competent enough in that area to attempt to port them; I’ve never done substantial work on Linux or BSD device drivers.
My thought was that there are other cases where you are really just doing batch processing blocks of data and then performing some action. And not holding state internally between runs. That's the usage case.
The thought was: if you have no mutable data and don't reuse stack frames, you can see exactly what happened if something goes wrong. And avoiding the heap means it's fast and deterministic.
You're right, something is quite wrong, but Minix isn't the answer either, as it doesn't provide the isolation that containers and VMs do.
That concept is very valuable, and in the future this configuration might be compiled to a unikernel that runs directly on the hypervisor. Or maybe into a process on a microkernel OS that runs isolated processes.
In contrast, VMware, with its multi-year cycles to release pluggable storage and networking that was only available to a few already-entrenched vendors, stifled progress in the vSphere ecosystem. I knew it was over for VMware when most of the large-scale customers were simply running containers in ESXi, or, if they were on OpenStack, they did the same thing with an additional layer. With productive organizations, the needs of application developers always supersede the needs of those managing the applications in production, barring something non-negotiable like security / regulations.
the hypervisor part of your argument is more reasonable - the amount of virtualization on a machine these days seems excessive, with each level solving a different problem, and a cleaner solution waiting to burst forth. I suspect, however, with new containerization and other isolation technologies, that we'll come to an answer soon enough, and Linux will still be there underneath, monolithic and all.
And why is that? It takes a while for the most popular hardware to change and linux could easily track it.
> Linus "my first, and hopefully last flamefest" Torvalds
Comparing this to his more recent emails, it doesn't seem like he has mellowed with age!
It's really amazing to read and see all sorts of good points from both sides, and know that if it weren't for the benefit of hindsight, I'm not really sure which side I would agree with.
But I really strongly get the feeling that Andy had a great opportunity thrown into his lap, but by looking at MINIX as a "non-serious" hobby (as Linus implied) that he only did for fun and to unwind, and by not really listening to his user base (make it free, fix filesystem race conditions, add POSIX compatibility, etc), he really threw the opportunity away for it to grow. Which is fine if growth was never the goal.
And sure, a project owner doesn't have to say yes to every feature, and they actually probably shouldn't. But you have to let what you're creating evolve into something that meets needs, and turn into something different. Look at the first page of commits on Semantic-UI's repo. It was called Origami (then Shape) and was meant for doing 3D stuff in JS, before he renamed it to Semantic and turned it into a JS-based version of Bootstrap. Terrible example and doesn't really apply here but I remembered coming across that fascinating bunch of commits the other day and wanted to share it with someone.
2015: https://news.ycombinator.com/item?id=8942175 and https://news.ycombinator.com/item?id=9739016.
Microkernels are everywhere in high-integrity computing; they run on the radios of mobile devices.
User-space drivers, containers, hypervisors, and Project Treble are all examples of trying to bend Linux into the same design.
Monolithic kernels "won" because they made pragmatic design decisions based on the hardware and software available. Linux in particular worked well partly because it built on a 20-year-old OS design.
Microkernels have had another ~30 years to develop, and they are now performance-competitive with monolithic kernels. There is still a large legacy investment in monolithic kernels, but as security becomes increasingly important ... it's hard to see monolithic kernels staying the dominant form of OS design in the future.
Tanenbaum is a computer scientist, Linus is a computer engineer.
As for security, yes, there are still vulnerabilities in the Linux kernel, and a microkernel might be helpful here, but it seems to me that the high-impact vulnerabilities we've seen in the past few years have been outside the kernel (spectre/meltdown and rowhammer in hardware and heartbleed in userspace). It's not obvious that microkernels will give a big enough security benefit for the investment to catch the technology up to the state of the art.
It sounds pretty much outside the lab to me.
The financial consequences of poor security are becoming more severe, and the cost of developing high-assurance code is dropping rapidly. If we haven't crossed the cost/benefit threshold yet, we are very close.
Microkernels are already the preferred architecture in niches that don't require legacy software. Monolithic kernels (and the Linux development process) can't catch up to the state of the art.
> 1. Microkernels are the future
> 2. x86 will die out and RISC architectures will dominate the market
> 3. (5 years from then) everyone will be running a free GNU OS
> The usual defense of the first prediction is to point to the growing use of virtualization hypervisors and claim that this counts. It doesn't; hypervisors are not microkernels. Neither are kernels with loadable device drivers and other sorts of modularity; well structured kernels are not anywhere near the same thing as microkernels. When Tanenbaum was talking about microkernels here, he really meant microkernels.
Trying to make him "Right" does great violence to his arguments, and we're better served by looking at history honestly, not through distorting lenses.
Another interesting post, about hypervisors versus microkernels:
(Hypervisors are older than microkernels, they're more practical, and they are therefore not the same thing.)
If we had microkernels, would containers be a thing right now? Or would we just have orchestration tools (eg, Kubernetes)?
I'm old enough to recall when protected memory and pre-emptive multitasking were the new kernel services, and all of the literature for those greatly overlaps with what we use to sell containers now. It's as if we never achieved the promise, and we are trying again instead of fixing it.
Winning "in a way" should be taken as a signpost for a tremendous opportunity lost through bad outreach or PR.
As someone else remarked, this is a pretty clean-cut showcase of the friction between computer science and computer engineering. Tanenbaum is theoretically right, but Linus built a platform that powered a revolution.
This argument was at its root a "theory" vs. "getting it done now" argument. I think that's why it resonates.
This is a debate about how OSes should be structured. There is some merit to Tanenbaum's microkernel approach, but ultimately it's slower, and for an OS that matters; thus Linux being ascendant. Also, everywhere gcc was ported could compile and run Linux.
Microkernels "lost" but there was a microkernel called "Mach" worked on by Avie Tevanian who went to Next then to Apple. I think OS-X is kinda microkernel based (if they're still using XNU which is loosely based on the mach microkernel.).
I still miss HP-UX RTE (Real-Time Extensions) for PA-RISC, for sending jobs to specific processors or processor sets, disabling interrupts....
If you like OSes, I like the case studies in the dinosaur book (stupid textbook prices, though...).
Linux (and the BSDs) have shown that a monolithic kernel can be modular and well-organized.
Mach’s descendants, L4, etc. have shown that if you want decent performance you either have to redefine “microkernel” or you have to put in an incredible amount of work to implement your simple design.
Because worse is better. https://www.jwz.org/doc/worse-is-better.html
Microkernels were not a completely worthless idea, and they have appeared here and there. But in terms of OSes, they lost for three decades after Tanenbaum's statement.
Maybe FB, Reddit, and others should put their AI researchers to good use and make people and bots alike take a hard-to-cheat test on Netiquette before being allowed to post stuff that will be seen by others. And implement a one-month 'observation-only' period, as suggested by Netiquette... ;-)
There is essentially no definition of "unsuccessful" that applies to Andrew Tanenbaum's career. He spent the last few decades doing the sorts of things you'd expect a CS professor to do, and he did them really well. He conducted research projects, graduated PhDs, and the CS textbooks he authored are almost all extremely good.
> Its use in the Intel ME makes it the most widely used OS on Intel processors starting as of 2015, with more installations than Microsoft Windows, GNU/Linux, or macOS.
That's... not too shabby, if you ask me.
Around 1.2.8 HJ Lu added the ELF kernel module support:
I remember a lot of this and feel old now.
Mainly, I'm just getting tired of people trying to cut things out of the solution by cutting them out of the problem.
Saying that it's possible to solve a problem in a simpler fashion is fine. Saying a problem shouldn't be solved, that nobody should have that problem, so therefore we won't solve it, is not fine if you then turn around and compare your cut-down solution to the full solution.
Also, why don't we have a common interface for drivers, at least between Linux and BSD, by now? I don't like that there's animosity between the two ecosystems. That way we could have, if technicalities permit, a way for many diverse OS models to enter the competition: if drivers are written against a certain standardised interface, one can code up an OS to support that interface, and just like with POSIX, the OS would have a whole suite of drivers available from day one. Better for the driver authors, too.
> You think you want a stable kernel interface, but you really do not, and you don't even know it.
> It's only the odd person who wants to write a kernel driver that ...
Given that Linux has the broadest hardware support of any open source OS, and arguably any OS period, it's difficult to say they aren't right.
Considering Google designed both Treble and Fuchsia, it's not such a crazy idea.
Well, Linus chose usefulness as a higher priority than theoretically optimal, so by Linus's standards, yes, Linus was right.
> Then again, I have a feeling it won't be too long until they scrap linux altogether.
Um... don't hold your breath. I think you're going to be waiting a long time for that one.
API stability is not guaranteed, though it's stable enough that the likes of Nvidia can write installers for their proprietary drivers that recompile a bit of interface glue between kernel and blob.
Given a particular install, yes, but it will change independently of kernel version based on, e.g., build flags and even, conceivably, the compiler chosen, etc.
> it's stable enough that the likes of Nvidia can write installers for their proprietary drivers that recompile a bit of interface glue between kernel and blob
Well...they certainly can. It also can infuriate me beyond all reason...
Making a stable API that covers the entire kernel would be a monumental task on the other hand.
Creating a common interface would require all OS vendors to collaborate on some standard and maintain a leaky abstraction over their preferred abstraction. You can do that, but there are feature, performance, and maintenance compromises that must be made.
When you compile the Linux kernel you will compile only a small fraction of these lines of code, and distributions typically compile drivers and a lot of other things as modules, so you load them only when you need them. You can compile the kernel to fit in as little as a couple of MB of space, which is very useful for embedded systems (my OpenWrt router has 8 MB of internal flash memory and runs Linux 4.4!).
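To illustrate (a hypothetical fragment, not any particular distribution's config): the kernel build is driven by a .config file in which each feature is built in, built as a loadable module, or left out entirely.

```shell
# Hypothetical excerpt of a minimal embedded .config:
CONFIG_EXT4_FS=y            # '=y': built into the kernel image (needed at boot)
CONFIG_USB_STORAGE=m        # '=m': loadable module, loaded on demand via modprobe
# CONFIG_SOUND is not set   # left out entirely: a headless router has no sound
```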
And if we compare Linux with, for example, Windows, it's pretty small: the Windows kernel source is said to weigh a couple of gigabytes, and that's for a kernel that supports only one (OK, now two, with Windows on ARM) computer architecture and doesn't include device drivers (besides the basic ones).
This will always be my favourite quote from this exchange. Just imagine what desktop computing would look like today had that come to pass.
Which reminded me again in what a marvellous period we still are with our discipline. There are still quite a few of the guys out there who "wrote the book".
He also ran Minix to work on Linux until Linux got solid enough to dogfood.
If you had 2 teams and both could start from 0 and spend 5 years to develop an operating system most people would probably pick some version of microkernel design.
As a piece of advice to young people, try not to be rude and accept criticism with grace, as the Internet has a long memory and you'll probably not be respected enough for your achievements to be tolerated, like Linus is.
I think the discussion would have gone much smoother if the professor made some effort to get his message across without the unnecessary sniping. He could have simply started with “I understand Torvalds used my book, so I find it interesting—and maybe a little perplexing—that Linux has such a wildly different design from what I presented.”
Way more informative.
I'm always confused by people who defend Linus style because it isn't just blunt but often plain insulting for no apparent reason at all.
There are many problems with this. Apart from antagonizing people as already mentioned, if someone is constantly screaming they lose the ability to modulate how serious they actually are. You can rarely tell whether Linus is legitimately upset because he seems almost always upset. It's better to default to politeness if only because if you're actually upset at least I know that I need to pay attention.
Wut? Some communication styles are just "harassment", including Linus'. One can put up with it, but appreciation? BS. Linus is worse than Kalanick.
Many times I think to myself: this is just stupid. He says it out loud. That is the difference.
If one can see behind the rude attitude, there is plenty to learn.
And these kinds of people tend to be more appreciated than those who are politically correct; for example, just look at Trump.
Edit: To give him some slack, maybe there was some charm, years ago, in a community project being run like a typical vitriolic usenet board (think an early 90s version of 4chan or reddit). It's quite anachronistic today and in the context of what Linux now is, with professional developers.
It's as unfortunate as you decide it to be. A lot of engineers and scientists love reading Linus (and watching House) because it reassures them that an argument's worth can transcend its delivery. The world is full of successful, socially competent salesmen. It's great that we have role models that aren't that.
In technical discussions, rudeness and informativeness are orthogonal. There is some information that is not polite to convey (such as information about someone else's personal/physical characteristics) but that information isn't relevant to technical discussions. Sometimes there is a choice as to how rudely one wants to convey a particular fact, but that is indeed a choice that reflects more upon one's desire for politeness or rudeness than it does any actual constraint on the transmission of factual information.
You can certainly choose not to converse with Linus, just like I choose not to converse with overtly politically correct people who may take offense at / report every little thing. I personally don't have time to worry about whether what I said "might" come off a certain way, and really I wouldn't want to be in that situation anyway -- it is exhausting to always have to editorialize what you are going to say in today's hyper-sensitive-filter-bubble-one-worldview monocultures.
Not really. When you attack/insult someone, instead of attacking his ideas, it's pretty much universally accepted that you're being an asshole. It's not a matter of debate, interpretation, feelings or political correctness.
In his first reply Linus is being an asshole and there is no justification about that. Those who defend him should note that even he realized that and apologized.
That's absolutely not true, and in fact the main issue here is that Linus is very much driven by feelings and not focusing on information. I don't think anyone can seriously argue that adding this to a conversation adds any value:
“Who the f*ck does idiotic things like that? How did they not die as babies, considering that they were likely too stupid to find a tit to suck on?“
“If you work in security, and think you have some morals, I think you might want to add the tag-line 'No, really, I'm not a whore. Pinky promise'“
And on and on
Instead of saying "X sucks" or "Whoever thought of X deserves a time-traveller to retroactively induce their abortion", the discussion is better served anyway with something like "X has problem Y; Z is better because of these benchmarks/ tests/ metrics", because then supporters of X have a concrete argument to be convinced by (rather than simply cowed/shamed by) or argue against.
It looks like you're all talking past one another.
This just isn't true at all. People can most definitely be intentionally rude to others.
And I think the history of Linux proves him right. It's phenomenal that he has been able to keep it together and thriving for so long. That's a pretty rare achievement.
What you misrepresent as being "rude", which insinuates that it has no positive effect on the outcome, is actually being direct and to the point regarding technical decisions. People whining about rudeness to try to put aside the context, meaning, and results demonstrate that they either miss the point or intentionally run away from it. The fact is that his communication style is a reflection of his leadership strategy, which resulted in the most successful operating system project in history.
Trying to downplay the main causes and consequences because they feel their feelings might not be pampered is a tremendous injustice to the man and his achievements.
"How did they noty die as babies, considering that they were likely too stupid to find a tit to suck on?"
If you make it your point to intentionally ignore context and cherry-pick the hell out of the man's decades-long archive of public emails... Well, I guess you can find something that twiddles your feelings.
But, again, you need to ignore the context, his work, his achievements, and everything he built, helped build, and continuously work on building.
But hey, let's focus on your feelings.
Does anybody fly rockets with Minix at the end of the day?
So even if you are a Linus Torvalds, it may only be a matter of time before you find yourself sidelined. I will be surprised if Linus's career ends on his own terms.
Edit: Interesting. I initially wrote, "Please note..." but then decided to be blunt and leave off the Please. I guess I was, um, too blunt.
> While I could go into a long story here about the relative merits of the two designs, suffice it to say that among the people who actually design operating systems, the debate is essentially over. Microkernels have won.
That's a theoretician confusing theory and practice.
And note that Tanenbaum isn't one of the "people who actually design operating systems". Tanenbaum designed toys to demonstrate concepts, not OSes for real-world use.
And, was Tanenbaum right that NT was a microkernel design? I have this vague memory that the drivers were supposed to be in user space, but they moved at least the video drivers into the kernel for performance reasons.
A hypothetical kernel and userland like OpenBSD built on seL4 (à la DragonFly or MINIX 3), implemented in Rust, OCaml, or Haskell, with thorough unit and integration tests and perhaps formal proofs, seems like the safest, if not necessarily the fastest, kernel approach possible using modern technologies. seL4's message-passing communication is really efficient and builds in IPC as a fundamental, low-latency operation.
Seems to me like an approach that would take even longer to implement than Hurd did.