They don't have to be. All they have to do is not go around smugly suggesting they know better than kernel developers and they're ok in my books.
Incidentally, precisely how many of those research kernels have become widely used, mainstream kernels capable of high throughput?
And do you really think it has turned out that way because the whole industry is full of blind dumbasses? I think it's a far more likely proposition that they understand something you don't.
YMMV on mainstream (they are widely adopted, though), but: OKL4, PikeOS, QNX...
It's quite obvious you have no background on the issues and are using this as an opportunity for provocation.
High throughput, mister, high throughput.
Realtime != high throughput. It just means deterministic throughput. FSVO deterministic.
Show me people running big farms of servers running these operating systems where even single-percentage computational overheads really matter.
(added:) The reason for this is that it costs one hell of a lot to flip your page tables and flush your TLBs every time you have to switch ("pass a message", whatever) to a different subservice of your kernel.
(also added:) Oh and interestingly many (most?) users of OKL4 go on to host Linux inside it because, hey, it turns out that doing all your work in a microkernel ain't always all that great. So 90% of the "kernel" work in these systems is happening in a monolithic kernel.
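(For a rough feel of the cost being argued about: the sketch below, under the assumption of a POSIX system, ping-pongs a byte between two processes over pipes. Each round trip forces at least two context switches, which is the kind of per-message overhead a multi-server microkernel pays on every IPC hop. The function name and iteration count are made up for illustration; this measures generic process-switch cost, not any particular microkernel's IPC path.)

```python
import os
import time

def pingpong_round_trip_ns(n=1000):
    """Time n round trips of a 1-byte message between two processes.

    Each round trip (write -> child wakes -> child writes -> parent wakes)
    forces at least two context switches, so the per-trip time is a crude
    lower bound on message-passing overhead between isolated components.
    """
    p2c_r, p2c_w = os.pipe()  # parent -> child
    c2p_r, c2p_w = os.pipe()  # child -> parent
    pid = os.fork()
    if pid == 0:
        # Child: echo every byte straight back.
        for _ in range(n):
            b = os.read(p2c_r, 1)
            os.write(c2p_w, b)
        os._exit(0)
    start = time.perf_counter_ns()
    for _ in range(n):
        os.write(p2c_w, b"x")
        os.read(c2p_r, 1)
    elapsed = time.perf_counter_ns() - start
    os.waitpid(pid, 0)
    return elapsed // n  # average nanoseconds per round trip

print(pingpong_round_trip_ns())
```

On a typical Linux box this lands in the low microseconds per round trip; a function call within a monolithic kernel is a few nanoseconds, which is the gap the whole argument is about.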
Other contenders include eMCOS and FFMK, though those are obscure.
That said, I don't even understand the logic. HPC clusters where single-percentage overheads really matter are an extremely specialized use case, so of course COTS u-kernels might not cut it. Where's the shocker here?
Response to added: Not necessarily with message passing properly integrated with the CPU scheduler.
Response to added #2: Hosting a single-server is a valid microkernel use case. What's your problem? Isolation and separation kernels are a major research and usage interest.
Ok then, show me the server farms...
I'm not even really talking about HPC, just the massive datacentres that run everyone's lives. All for the most part running monolithic kernels. I doubt the thousands of engineers who work on such systems consider the "huge monolithic kernel" "undebuggable". And I don't see examples of microkernel OSs that are able to cut it in these circumstances.
Even in a mobile device, you don't really want to waste battery doing context switches inside the kernel.
Microkernels have their place, but believing that everyone who chooses not to use them is just clearly a dumbass is bullshit dogma.
(As an aside, I'll grant that even a high-throughput microkernel seems likely, to me, to have lower throughput than a more tightly-coupled monolithic kernel. That's just one of the architectural trade-offs involved here.)
As I see it, there are technical (e.g. hardware drivers, precompiled proprietary binaries) and social (e.g. relative lack of QNX expertise = $$, proprietary licensing) reasons for many people to choose one of the more popular OSes, running monolithic kernels.
I can't say what's technically superior, but even if QNX was, nobody's a dumbass for choosing something else -- and I don't think the fellow you're replying to was saying so. There are, of course, reasons and trade-offs.
An OS's adoption is a social thing, and proves nothing technical about it. If it wasn't for licensing (a social problem), BSD might have taken off, and Linux been comparatively marginalized.
Just sharing my perspective here.
You're getting all red in the face using some really dubious arguments to back you up here.
For some definition of "widely".
And feel free to stop adding personal attacks to your comments. They do not enhance the credibility of your posts.
And I know that you didn't claim they are mainstream, so we may be quibbling about where we draw lines around the word "widely". But...
What's the installed base of systems running QNX, say? (Throw in the others if you wish.) Estimates are acceptable, too, if you don't have hard numbers.
It's worth looking not only at how many, but at what. They're in vehicles, medical devices, industrial automation, military and telecom. Those are all areas where blunders lead to loss of lives, not just annoying downtimes. As for infotainment and telematics, estimates put QNX at around 60% of the 2011 market, so it's likely your car runs QNX.
The design wins, of course, should be obvious to anyone willing to do a modicum of research.
To clarify: Windows is, by any definition, both "mainstream" and "widely used". Yet it has very few "design wins". Therefore, the argument that cars are "only a few design wins" cannot be used to say that QNX, say, is not widely used or mainstream, since Windows is obviously mainstream and widely used.
> It's hard not to be on the offensive when you seem to beg for it.
You need to re-calibrate your sensitivity. You seem eager to take offense at nearly everything. Very little of it is worthy of your outrage.
Not that that changes the core fact that Apple is shipping L4.
My argument is that there's lots of great-in-theory but untested-in-practice stuff in academia, and that you can't discount something altogether just because it's untested. It's hardly fair to compare the output of a few grad students over a few years with all of the effort that goes into a major industrial product.
And anyway, the architecture of Linux originated in academia too.