
I am 100% on the side of Linus Torvalds when it comes to microkernels.[0]

I will concede that in some instances a microkernel may beat a monolithic kernel in stability, performance, or both. But I am not the least bit excited about any progress made on microkernels: I feel it can only result in much more closed systems, ones that are easier to build in ways that make them harder to modify. This is why I wish for Hurd to continue to fail.

[0]: http://www.oreilly.com/openbook/opensources/book/appa.html




I'm confused. I understand you think the Hurd is making a technical mistake, but why do you want it to fail?

Is it because you know that decisions about which software gets used are not based on technical merit? Or do you not want to be proved wrong? Or something else?


Let's go to an alternate universe where Hurd was successful in the '90s and reached the level of common usage that Linux has today.

You're Western Digital in 2008 and you're making a TV set-top box called the WDTV Live. I own one of these in the real universe. It runs Linux, which is awesome, because that means I can SSH into it. It runs an Apache server in my home. It can download from Usenet or torrents. I can control it via SSH instead of using the remote control.[0]

In this alternate universe, WD is going to use Hurd instead of Linux, because for this small device it will certainly perform better on their underpowered MIPS chip. And they're not going to ship anything besides what they have to, because this is a small embedded computer.

What happens to that homebrew community when they ship a microkernel with proprietary servers for everything, and nothing else? It's going to be profoundly difficult to develop on this. You might already see this if you own a Chromebook or a WDTV: missing kernel modules mean that you simply can't do anything without compiling your own kernel. Couple this with Secure Boot and you're locked in.

I'm no expert on these things; most of this is based on brief research from years ago. If you think I'm wrong, please tell me why -- I'd love to be proven wrong. But for the time being, I believe widespread adoption of microkernels would be very hostile to general-purpose computing.

[0]: http://wdlxtv.com/


The idea of the Hurd is that any user can run whatever servers they want - GNU has been concentrating on microkernels not because they're the new hotness but because they believe it's a good architecture for more openness.

So presumably, in this hypothetical case, you'd be able to upload and run whatever additional servers you needed on the WDTV. You might say "but they might make it impossible to log in and do that" - but they could have done the same under Linux just by not running sshd. However, they didn't.


Your example doesn't make much sense because Hurd is the servers. The microkernel component itself is GNU Mach.

Shipping an embedded appliance with a microkernel and proprietary servers again makes no sense, because it's akin to rewriting userspace from scratch on top of the base VMM, scheduler, and disk I/O. Just for a TV set-top box?
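
To make "the microkernel component itself" concrete, here is a minimal sketch of Mach IPC in C (written against the Mach headers macOS ships; GNU Mach differs in detail). All the microkernel gives you is ports and messages; the servers above it are just tasks speaking this protocol. Here a task sends itself a message and reads it back:

    #include <mach/mach.h>
    #include <stdio.h>

    /* A send-side message and a receive-side buffer; the kernel
     * appends a trailer to every received message. */
    struct msg_send { mach_msg_header_t header; int payload; };
    struct msg_recv { mach_msg_header_t header; int payload;
                      mach_msg_trailer_t trailer; };

    int main(void) {
        mach_port_t port;
        /* Ports are the kernel's only communication object: create a
         * receive right, then give ourselves a send right to it. */
        mach_port_allocate(mach_task_self(), MACH_PORT_RIGHT_RECEIVE, &port);
        mach_port_insert_right(mach_task_self(), port, port,
                               MACH_MSG_TYPE_MAKE_SEND);

        struct msg_send m = {0};
        m.header.msgh_bits = MACH_MSGH_BITS(MACH_MSG_TYPE_COPY_SEND, 0);
        m.header.msgh_size = sizeof m;
        m.header.msgh_remote_port = port;
        m.payload = 42;
        mach_msg(&m.header, MACH_SEND_MSG, sizeof m, 0,
                 MACH_PORT_NULL, MACH_MSG_TIMEOUT_NONE, MACH_PORT_NULL);

        struct msg_recv r = {0};
        mach_msg(&r.header, MACH_RCV_MSG, 0, sizeof r,
                 port, MACH_MSG_TIMEOUT_NONE, MACH_PORT_NULL);
        printf("got %d over IPC\n", r.payload);  /* prints: got 42 over IPC */
        return 0;
    }

On a pure microkernel system, every filesystem read, network packet, and exec() is some variation of that round trip - which is where both the performance worries and the flexibility come from.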


> And they're not going to ship anything besides what they have to, becasue this is a small embedded computer.

It happens already, to keep hardware costs down. The whole point of Linux is that you can pick and choose which userland services to ship... (SSH being a userland service)

> ship a microkernel with proprietary servers for everything, and nothing else?

What's to stop them now? Effort. It costs real money to create proprietary programs from scratch. One of the reasons they would have chosen Linux in the first place is that half the work is done for them (decoding libraries, network stacks, hardware interfaces, communications daemons).


I suppose you've never run into an Android device that didn't run ssh out of the box, or that had a locked bootloader? Or perhaps one not distributed with full sources, so you could easily modify the system?

This is the reason for the AGPL/GPLv3 -- not much of an argument against modular software/kernels.


> What happens to that homebrew community when they ship a microkernel with proprietary servers for everything, and nothing else?

In order to do what you did on your WD TV Live, you flashed the firmware image with a new one. Otherwise you wouldn't be able to. So even in the case of a microkernel you would just flash the pre-installed image with a new one (voiding the warranty).

But to get to the point: do you have any idea how much effort it would take for a corporation to write a reliable httpd that has Apache's capabilities, plugins, testing, and support? Then write their own update system, DHCP client, and so on? It would take a huge amount of $$$ and time. And most of them would probably be buggy. So either way they would have gone with Free Software if they wanted to stay in the current price range.


You'd achieve the same thing by building a proprietary userland, from the C library up, on top of the Linux kernel. As far as we know, no one has bothered (on a large scale) because it would be a massive waste of time and money.

The closest you'll find to this alternate universe is Android.


What would happen is that RMS comes up with yet another GPL variant to prevent that scenario.


Where/how do the proprietary servers come in for a kernel that doesn't want to allow them (and therefore would go through no special effort to make them possible)?

Remember that the reason you can link against glibc is that it's LGPL and not GPL. The LGPL was created for a reason. There's also a reason why, when the decision was made to release Java under the GPL, Sun explicitly added a linking exception: that isn't something you just automatically get for free.


Isn't this dependent upon the microkernel's license? Wouldn't it be possible to use an open-source license that explicitly forces servers which use it to be covered under the same license?

Something akin to the Affero GPL?


But isn't that basically always the tradeoff - if you want security, you play by the rules of the company that built it? Is there even a theoretical way out of this?


Microkernels have been running the world behind the scenes for a while now, but most people don't seem to have gotten the memo and are still stuck treating Mach as representative of u-kernels in general.


There have been wildly successful microkernels. One of Xen's greatest successes was demonstrating that to encourage widespread adoption of a microkernel, you rebrand it as a hypervisor. More recently, some people have started running software directly under Xen without a full OS, including language runtimes, all without ever calling it a microkernel.


Microkernels and hypervisors are not the same thing.


What's the difference between Xen and a microkernel? They both manage memory, do CPU scheduling, provide efficient message-passing, protect "processes" running underneath them from each other, and leave almost everything else to the "processes" running underneath them.


> What's the difference between Xen and a microkernel?

Xen allows full OSes to be guests and run on top of it. Microkernels only allow servers to run on top of them, and those servers have to be purpose-written and cannot meaningfully be ported.

Xen doesn't provide hardware abstraction and is fully invisible (except to the extent it advertises itself); microkernels are neither.

Paravirtualization (what you did before VT-x and similar) was an oddity, and blurs these lines a tiny bit, but the distinction is fairly clear otherwise.


> Xen allows full OSes to be guests and run on top of it. Microkernels only allow servers to run on top of it, and those servers have to be purpose-written and cannot meaningfully be ported.

L4 is an archetypal microkernel, and people often run full OSes or other ported software under it, including Linux.

> Xen doesn't provide hardware abstraction and is fully invisible (except to the extent it advertises itself); microkernels are neither.

Microkernels typically don't abstract any hardware other than CPU and memory; any other drivers would run under the microkernel.

And Xen is only "invisible" if you run full hardware virtualization and no paravirtualized drivers.
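
To make the driver point concrete: under an L4-family kernel, a "driver" is an ordinary user task answering IPC. A hedged sketch using the classic (non-MCS) seL4 calls; DRIVER_EP is a hypothetical endpoint capability assumed to be handed to both tasks at startup:

    #include <sel4/sel4.h>

    #define DRIVER_EP ((seL4_CPtr)0x10)  /* hypothetical endpoint capability */

    /* Client: ask the driver server for a block and wait for the reply. */
    seL4_Word read_block(seL4_Word block) {
        seL4_MessageInfo_t info = seL4_MessageInfo_new(0, 0, 0, 1);
        seL4_SetMR(0, block);            /* message register 0: block number */
        seL4_Call(DRIVER_EP, info);      /* send request, block for reply */
        return seL4_GetMR(0);            /* reply: status or data handle */
    }

    /* Server: the "disk driver" is just another user task in a loop. */
    void driver_loop(void) {
        for (;;) {
            seL4_Word badge;
            seL4_Recv(DRIVER_EP, &badge);
            seL4_Word block = seL4_GetMR(0);
            /* ...touch the device here, via MMIO mapped into this task... */
            seL4_SetMR(0, block);        /* echo back as a stand-in result */
            seL4_Reply(seL4_MessageInfo_new(0, 0, 0, 1));
        }
    }

The kernel only moves messages and maps memory; everything device-specific lives in the task.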

> Paravirtualization (what you did before VT-x and similar)

People still use "paravirtualization" today; see the "virtio" drivers typically used with KVM.
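
And to make the KVM comparison concrete: a minimal sketch, assuming Linux's /dev/kvm ioctl interface, of how a userspace VMM such as QEMU bootstraps a guest. The shape is notably microkernel-like: the kernel exposes bare CPUs and memory, and every device the guest sees - virtio included - is served from the userspace process.

    #include <fcntl.h>
    #include <linux/kvm.h>
    #include <stdio.h>
    #include <sys/ioctl.h>

    int main(void) {
        int kvm = open("/dev/kvm", O_RDWR);
        if (kvm < 0) { perror("open /dev/kvm"); return 1; }

        /* The KVM API is versioned; this prints 12 on any modern kernel. */
        printf("KVM API version: %d\n", ioctl(kvm, KVM_GET_API_VERSION, 0));

        int vm = ioctl(kvm, KVM_CREATE_VM, 0);     /* an empty machine */
        int vcpu = ioctl(vm, KVM_CREATE_VCPU, 0);  /* one virtual CPU */
        if (vm < 0 || vcpu < 0) { perror("ioctl"); return 1; }

        /* From here a real VMM maps guest RAM (KVM_SET_USER_MEMORY_REGION)
         * and runs the vCPU (KVM_RUN), handling every device exit itself. */
        printf("vm fd=%d, vcpu fd=%d\n", vm, vcpu);
        return 0;
    }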


Gernot Heiser states it here:

https://microkerneldude.wordpress.com/2008/04/03/microkernel...

Their highly efficient microkernel has been doing so well that most people using it don't know it's in their smartphones. Do most virtualization solutions have a similar impact on user experience? ;)


Yeah, for example each Symbian phone has one.

Also, Mac OS X and Windows are hybrid in design, not monolithic in the traditional UNIX way.


Microkernels are the wave of the future and always will be.


L4 is used in the Secure Enclave of every modern iPhone. OS X/XNU is based on microkernel designs. Windows is a hybrid.


Are you asserting Mac OS X is "based on microkernel designs" just because some versions[0] of Mach are a microkernel, or something else?

[0] https://en.wikipedia.org/wiki/Mach_%28kernel%29


Hrmmm. Digging deeper (and trawling my memory), I'm getting conflicting information:

* https://en.wikipedia.org/wiki/XNU

* https://youtu.be/8RwlEZ88rKM



