I will concede that in some instances a microkernel may outperform a monolithic kernel in stability or performance or both. But I am not the least bit excited about any progress made in microkernels: I feel it can only result in much more closed systems, ones that are easier to build in ways that make them harder to modify. This is why I wish for Hurd to continue to fail.
Is it because you know that usage decisions of software are not based on technical merits? Or do you not want to be proved wrong? Or something else?
You're Western Digital in 2008 and you're making a TV set-top box called the WDTV-Live. I own one of these in the real-life universe. It runs Linux, which is awesome, because that means I can SSH into it. It runs an Apache server in my home. It can download from Usenet or torrents. I can control it via SSH instead of using the remote control.
In this alternative universe, WD is going to use Hurd instead of Linux, because for this small device it will certainly have better performance on their underpowered MIPS chip. And they're not going to ship anything besides what they have to, because this is a small embedded computer.
What happens to that homebrew community when they ship a microkernel with proprietary servers for everything, and nothing else? It's going to be profoundly difficult to develop on this. You might already see this if you own a Chromebook or a WDTV: missing kernel modules mean that you simply can't do anything without compiling your own kernel. Couple this with Secure Boot and you're locked in.
I'm no expert on these things; most of this is based on brief research from years ago. If you think that I'm wrong, please tell me why; I'd love to be proven wrong. But for the time being, I believe widespread implementations of microkernels would be very anti-general-purpose computing.
So presumably, in this hypothetical case, you'd be able to upload and run whatever additional servers you needed on the WDTV. You might say "but they might make it impossible to log in and do that", but they could have done the same under Linux just by not running sshd; however, they didn't.
Shipping an embedded appliance with a microkernel and proprietary servers again makes no sense, because it's akin to rewriting userspace from scratch on top of the base VMM, schedulers and disk I/O. Just for a TV set-top box?
It happens already, to keep hardware costs down. The whole point of Linux is that you can pick and choose which userland services to ship... (SSH being a userland service)
> ship a microkernel with proprietary servers for everything, and nothing else?
What's to stop them now? Effort. It costs real money to create proprietary programs from scratch. One of the reasons they would have chosen Linux in the first place is that half the work is done for them (decoding libraries, network stacks, hardware interfaces, communications daemons).
This is the reason for AGPL/GPL3 -- not much of an argument against modular software/kernels.
To do what you did on your WDTV-Live, you flashed the image with a new one. Otherwise you wouldn't have been able to. So even in the case of a microkernel, you would just flash the pre-installed image with a new one (voiding the warranty).
But to get to the point: do you have any idea how much effort it would take for a corporation to write a reliable httpd that has Apache's capabilities, plugins, testing and support? Then write their own update system, DHCP client and so on? It would take a huge amount of money and time, and most of the results would probably be buggy. So either way they would have gone with Free Software if they wanted to stay in the current price range.
The closest you'll find to this alternate universe is Android.
Remember that the reason you can link against glibc is because it's LGPL and not GPL. The LGPL was created for a reason. There's also a reason why when the decision was made to release Java under the GPL, Sun explicitly added a linking exception. It's because that isn't something you just automatically get for free.
Something akin to the Affero GPL?
Xen allows full OSes to be guests and run on top of it. Microkernels only allow servers to run on top of them, and those servers have to be purpose-written and cannot meaningfully be ported.
Xen doesn't provide hardware abstraction and is fully invisible (except to the extent it advertises itself); microkernels are neither.
Paravirtualization (what you did before VT-x and similar) was an oddity, and blurs these lines a tiny bit, but the distinction is fairly clear otherwise.
L4 is an archetypal microkernel, and people often run full OSes or other ported software under it, including Linux.
> Xen doesn't provide hardware abstraction and is fully invisible (except to the extent it advertises itself); microkernels are neither.
Microkernels typically don't abstract any hardware other than CPU and memory; any other drivers would run under the microkernel.
And Xen is only "invisible" if you run full hardware virtualization and no paravirtualized drivers.
> Paravirtualization (what you did before VT-x and similar)
People still use "paravirtualization" today; see the "virtio" drivers typically used with KVM.
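To make the virtio point concrete, here is a minimal sketch of booting a KVM guest with paravirtualized devices via QEMU. The disk image name (`disk.img`) and memory size are placeholders, not anything from this thread:

```shell
# Boot a KVM guest with paravirtualized virtio devices instead of
# emulated legacy hardware. The guest kernel must include virtio
# drivers (virtio_blk, virtio_net), as mainline Linux has for years.
qemu-system-x86_64 \
  -enable-kvm \
  -m 2048 \
  -drive file=disk.img,if=virtio,format=raw \
  -netdev user,id=net0 \
  -device virtio-net-pci,netdev=net0
```

Inside such a guest, the disk appears as /dev/vda rather than /dev/sda, so the guest is very much aware of the hypervisor: "invisible" only holds under full hardware emulation.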
Their highly efficient microkernel has been doing so well that most people using it don't know it's in their smartphones. Do most virtualization solutions have a similar impact on user experience? ;)
Also, Mac OS X (Mach-based) and Windows are hybrid in design, not monolithic in the traditional UNIX way.