
Microkernels are slow and Elvis didn't do no drugs - vezzy-fnord
http://blog.darknedgy.net/technology/2016/01/01/0/
======
asuffield
The main issue I have with microkernels is that they don't solve any problem
that I have.

A robust system is designed such that any machine in it can randomly crash.
Crashing machines then only represent a risk that needs to be mitigated by
having enough spare capacity to handle however many machines are expected to
be down at once. Linux is already stable enough that this amount is too small
to care about; making it more stable doesn't help me.

Would I expect meaningful performance improvements from microkernels? I'm
doubtful that you'd get as much as 10%, but I'd be interested in seeing
evidence if anybody has any.

Do microkernels offer any application-visible features that I don't have today
with Linux? To the best of my knowledge they do not.

~~~
geofft
Rapid development. There's probably things you'd get out of the latest Linux
kernel's networking stack or video stack or btrfs driver or something, but
your upgrade option is all-or-nothing, and you certainly don't want to upgrade all
of it. A microkernel could let you upgrade just the component you want to try
a newer version of; it also means that there's a larger base of people like
you testing the latest versions of each component, so the kernel improves more
quickly.

Of course, this presupposes things about API compatibility/stability between
microkernel components that may not be true in all microkernel designs.

~~~
asuffield
It sounds like a really cool idea, but to some extent we already have this
with Linux out-of-tree modules - it doesn't work very well, but it's there,
you can build a new version of the module, unload the old one, and load the
new one. I suspect the microkernel version of this would be hard to implement
for most of the same reasons, and probably fail in most of the same ways,
which boil down to "APIs change a lot so things don't stay compatible for
long".

------
nanolith
One thing that is much easier to do in a microkernel than in a monolithic
kernel is build a machine-checked proof of correctness. This is not to say
that this cannot be done in a monolithic kernel, but the proof would have to
extend to each device driver and service within the kernel, or at the very
least, a large portion of the kernel would need to be disabled to bring the
proof down to something that can be accomplished by a team of developers in a
reasonable period of time.

A complete proof of correctness in a microkernel based operating system can be
done in parts. First, the microkernel itself can be formally verified. Then,
the communication mechanism between components can be formally verified. Then,
each service or driver can be formally verified with regards to the
communication mechanism, and its own contribution to the system as a whole. As
long as it can be proved that the system will still operate correctly with one
or all of these services or drivers in an invalid (crashed) state, then some
level of operation can be guaranteed. In a correct context (fault tolerant
hardware, such as a well built embedded system), such a microkernel
architecture could be designed, and proven, to be robust.

The formal verification of seL4 is one example of this in practice.

Now, could one do the same proof-by-parts with a well designed monolithic
kernel? Probably. However, the stronger the division is between parts, the
more like a microkernel the monolithic kernel would become, until it would
make sense to take advantage of IPC mechanisms between the parts to simplify
the mechanical proof. Hence, we sort of come full circle.

While I appreciate both sides of this debate, I would rather run the risk of a
slower but more correct system than a faster system that is harder to formally
verify. This is not to say that all microkernel based operating systems are
more robust than monolithic kernel based operating systems, but there is a
growing subset of microkernel based operating systems that have been formally
verified, and I'm quite happy about that.

~~~
lobster_johnson
Formal verification is one thing, but being able to write a test suite for a
device driver where you simply mock a small kernel API must be a game-changer.
Not to mention that a formal protocol with a small surface area should make it
easier to build OS-independent drivers.

~~~
nanolith
Certainly, having a well-defined interface makes both empirical testing and
portability easier. Microkernels get this by default. To be fair though,
modern monolithic kernels have pretty decent abstraction mechanisms that do
make it easy to test drivers. Hypervisors and fast emulators have dramatically
changed operating system and device driver development, removing a lot of the
pain in this process.

I remember testing device drivers on bare metal because we had no choice. Now,
I can fire up a GDB server in KVM and step through code. Monolithic or
microkernel, testing and debugging is so much easier now than it was even 15
years ago.

------
ChuckMcM
A friend of mine in this discussion (microkernel vs monolithic kernels)
discussed it this way;

"CPU architects design for monolithic kernels because they are the only type
that have been commercially successful, further in the absence of any barrier
to making a choice (micro/mono) the monolithic kernel choice has always
prevailed. This may be a conspiracy on a monumental scale but it also may be
that microkernels just don't work as well as their proponents would wish, and
no amount of engineering investment has changed that in 20 years."

Ok, so that probably isn't word for word, but he said it in 2010, and here it
is 2016 and it's still true. I love the conceptual simplicity of microkernels,
they are easy to abstract and they are easier to show they have predictable
response in a wide variety of situations. I have also met some really strong
advocates for them, and listened as the QNX folks went on and on about how
much they were investing in making these things better than any alternative,
and yet here we are. I haven't attended a SOSP[1] symposium in a while but I
do still look through their proceedings when I get a chance at the Stanford
Library. And it hasn't felt like microkernels have really made any conceptual
improvements in their design. Instead we are seeing more and more work going
toward multiprocessing with a monolithic kernel on each processor and the
thread separation being an interconnect rather than a memory bus.

[1] [http://www.sosp.org](http://www.sosp.org)

~~~
aidenn0
TFA specifically lists several successful commercial microkernels (disclaimer:
I work for a company that makes one of the kernels listed).

In addition, there are a lot of CPU features that are incredibly useful for
microkernels that have been created both in embedded and non-embedded parts.

~~~
shenberg
What's common to Symbian, QNX, car-infotainment and in-flight entertainment
(systems I've interacted with)? In all those cases (QNX->blackberry) they're
systems with awful input lag, from my personal experience. I also noticed that
performance was mentioned in the article only in terms of throughput.
Coincidence?

~~~
jschwartzi
It's possible to write a UI on QNX that doesn't have awful input lag. Doing so
requires you to write software that prioritizes the UI, and to create a UI
that runs in short, deterministic bursts. I have some experience with this
because that is exactly what I'm doing in my current job, and with QNX to
boot.

~~~
aidenn0
That sounds interesting! A lot of UI libraries want bursts of CPU time that
are longer than acceptable in many real time systems, despite the amortized
cost being very low.

------
ris
Wow, so this is what it's like to be misrepresented. Firstly, there is no
"microkernel hatred" on my part, just hatred of pop-opinions of them being a
100% superior technology for all cases.

And my second comment follows this theme as it was pointing out a
contradiction in the poster's argument of microkernels simply being superior.
I fully understand the rationale behind running Linux in a microkernel, but it
isn't something that reinforces the argument that monolithic kernels are "just
inferior".

In short, I'd be a lot more impressed if this dude spent the effort he used
scouring the internet for quotes building something where he could just _show
us the code_ of a microkernel outperforming a monolithic kernel. Enough of the
dogma.

(Oh, and note that at no point do I claim to be a kernel developer either)

(Oh, and I don't call anyone "fags" either)

~~~
vezzy-fnord
_Firstly, there is no "microkernel hatred" on my part, just hatred of pop-
opinions of them being a 100% superior technology for all cases._

That's not a "pop-opinion". The popular opinion is exactly the opposite:
microkernels are mostly a curiosity at best (if not a travesty) and monolithic
kernels do 99.99% of what you want.

You're blinded by your microkernel hatred.

 _I fully understand the rationale behind running Linux in a microkernel, but
it isn't something that reinforces the argument that monolithic kernels are
"just inferior"._

It isn't meant to reinforce such an argument.

 _In short, I'd be a lot more impressed if this dude spent the effort he used
scouring the internet for quotes building something where he could just show
us the code of a microkernel outperforming a monolithic kernel. Enough of the
dogma._

Damage control. Those _damned_ quotes from those _damned_ academics, they
don't mean a thing! All that matters is my stars on GitHub!

 _(Oh, and I don't call anyone "fags" either)_

It should be obvious I was paraphrasing.

(Anyway, it's unsurprising to see you have no concrete rebuttal whatsoever and
have to resort to deflecting.)

~~~
ris
"That's not a "pop-opinion". The popular opinion is exactly the opposite:
microkernels are mostly a curiosity at best (if not a travesty) and monolithic
kernels do 99.99% of what you want."

Nope, I'd say the pop opinion is against anything that is described as
"monolithic". The word "monolithic" itself is pretty much a pejorative in most
circles these days.

"You're blinded by your microkernel hatred."

You're unbelievable. Clearly you must know me better than myself.

"It isn't meant to reinforce such an argument."

That was the whole point of the thread. That was your original principal
statement (or implication anyway) which I took issue with.

"Damage control."

What damage?!

"Those damned quotes from those damned academics, they don't mean a thing!"

Those _damned_ results from _damned_ reality... If there's one thing I would
be _certain_ of about microkernels, it's that someone would be able to collect
a lot of quotes from academics supporting them.

"(Anyway, it's unsurprising to see you have no concrete rebuttal whatsoever"

No! I am not going to spend hours assembling dogma! I have real things to do!

"have to resort to deflecting"

Look buddy, you're the only one who has made personal attacks and assumed all
kinds of things about me, like this weird "hatred" thing and that I care about
GitHub stars (wtf?).

Though I'm glad I seem to have ruined your xmas period with my few comments to
the point you've spent time assembling a long page of things that have been
said before which completely miss my point.

~~~
hyperpape
Wow, both of you need to stop flaming each other and say something valuable.
No one should care about your line by line refutations of each other (though
I'm sure some will make the mistake of caring).

What matters is what you can teach us about microkernels vs. monolithic
kernels. The original "Microkernels are slow and Elvis didn't do drugs"
somewhat does this, though the presentation as a series of lengthy quotes and
the dismissive presentation of viewpoints (describing opponents as calling
people "fags") could use some real work. But all three messages above this one
are a waste.

------
Animats
Good IPC requires tight coupling between the IPC mechanism and the scheduling
mechanism. In QNX, you call MsgSend, which sends a message and blocks. On the
receive side, a process waiting in a MsgReceive gets the message and unblocks.
The fast path for this, when there's a thread waiting for a message, does not
go through CPU dispatching or CPU switching on a multiprocessor. This is
crucial to performance.

You do pay a penalty for copying. For short messages, it's low. For anything
that fits in cache, that you just created and will use immediately on the
receive side, it's very low. Really big messages may add 10%-20% overhead.
This is the main problem with drivers in user space, because they don't do
anything with the data, they just move it around.

Incidentally, copying under QNX is preemptable by high-priority tasks. There's
a hard upper bound on the longest uninterruptible kernel operation, and it's
measured in microseconds. That's why you can use QNX for hard real time.

Contrast what happens on Linux. You have a few choices (System V IPC, pipes,
sockets), but they all work like I/O operations. You send something, and you
don't block when you do. If there's someone waiting on the queue or pipe,
their thread becomes active, which means a trip through the scheduler to get
them going. Since the sending thread isn't blocked, if there's a free CPU, the
receiving thread starts up on a different CPU, which means lots of cache
misses. The sending thread probably blocks shortly after the send by reading
something or waiting for a message, but the OS doesn't know that's going to
happen. So when it does, there's another trip through the dispatcher for work
to do.

In a compute-bound environment, this may mean that each IPC operation sends
you to the end of the line for CPU time. Trying to use IPC operations like
subroutine calls is painfully slow in such systems.

Write something that does short IPCs and waits for replies, and benchmark it.
Then add some compute-bound tasks, enough to keep all the CPUs busy. If the
IPC task performance drops by orders of magnitude, the IPC mechanism is badly
designed.
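A minimal sketch of such a benchmark (my own illustration, not from the thread: Python over a Unix socketpair instead of native IPC, so it's POSIX-only and only approximates what kernel IPC would cost):

```python
import os
import socket
import time

def recv_all(sock, n):
    """Read exactly n bytes (a stream socket may return short reads)."""
    data = b""
    while len(data) < n:
        chunk = sock.recv(n - len(data))
        if not chunk:
            raise EOFError("peer closed the socket")
        data += chunk
    return data

def pingpong(rounds=1000, payload=b"x" * 64):
    """Time short request/reply IPCs between a parent and a forked child."""
    parent_sock, child_sock = socket.socketpair()
    pid = os.fork()
    if pid == 0:  # child: echo every message straight back
        parent_sock.close()
        for _ in range(rounds):
            child_sock.sendall(recv_all(child_sock, len(payload)))
        child_sock.close()
        os._exit(0)
    child_sock.close()
    start = time.perf_counter()
    for _ in range(rounds):
        parent_sock.sendall(payload)        # "send and wait for the reply"
        recv_all(parent_sock, len(payload))
    elapsed = time.perf_counter() - start
    os.waitpid(pid, 0)
    parent_sock.close()
    return elapsed / rounds                 # seconds per round trip

print(f"{pingpong() * 1e6:.1f} us per round trip")
```

Running the same loop again alongside enough compute-bound load to keep every CPU busy is the comparison suggested above; a well-designed IPC path should not slow down by orders of magnitude.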

L4 is very impressive, but I think they went a little too far by using shared
memory as the main interprocess communication mechanism. This gets copying out
of the kernel, but means that one process can screw up the format of the
memory region it shares with another process. The OS may be robust and secure,
but attacks on the process you're talking to by messing with pointers in the
shared area may be possible.

The big problem with QNX is that it costs money. Linux is free, and Windows is
basically given away with hardware and trialware.

I hope that someone writes a QNX-like microkernel in Rust. We need that.

~~~
aidenn0
I know it makes me heterodox in the microkernel community, but I like the
choice of shared-memory there. There are well known data-structures that work
with shared memory with well-understood behaviors with a malicious writer.
They are tricky to get right, but only need to be implemented once, verified,
and reused many times.

Furthermore, as soon as you are communicating with another process, you have
to deal with receiving garbage if that process misbehaves, so it's not all
sunshine and roses even if you use a more traditional IPC mechanism.

~~~
filereaper
"There are well known data-structures that work with shared memory with well-
understood behaviors with a malicious writer."

I'm very interested in learning about these shared memory data structures;
would you mind providing a few links or mentioning some papers I could read?

We implemented a real-time microkernel in university where QNX has its roots
[0]. It used traditional copy based IPC however. I'd like to see how these
shared-memory data structures could have been used.

Thanks.

[0]
[https://www.student.cs.uwaterloo.ca/~cs452/](https://www.student.cs.uwaterloo.ca/~cs452/)

~~~
aidenn0
I'm actually struggling to find this in the literature, but I've seen it used
more than once.

Here's the most simple version:

Minimum two pages, each readable from both sides, with one writable from each
side. You need this even for a single-directional queue. Keep simple post and
ack counters that wrap modulo something much larger than the size of the
queue (e.g. an unsigned long on most architectures). If (post - ack) is ever
greater than the size of the queue, the queue should be considered no longer
safe to use; both readers and writers should check this anytime they access
these values. Otherwise there are (post - ack) bytes pending in the queue.

To write, write bytes at (post MOD size) and increment post; to read, read
from (ack MOD size) and increment ack.

That is a simple one-to-one byte-stream queue; fixed-size message queues are
an obvious change, non-fixed-size message queues are significantly more
complex but proceed from the same idea. Many-to-one queues are a lot more
complex, and one-to-many is more complex if you want reliable data transfer,
but much simpler if you don't (just eliminate the ack completely).

If all you care about is throughput and you have more than 2 cores, then just
spinning on post/ack is sufficient. Otherwise some synchronization primitive
is needed.

On many real-world architectures, particularly SMP ones, there are a lot of
fiddly details to get right with the MMU etc., but the underlying idea is
simple.
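As a sketch of the scheme just described (a single-process Python model; the bytearray stands in for the shared pages, and all names are invented for illustration):

```python
MOD = 2 ** 64  # counters wrap like an unsigned long on a 64-bit machine

class SpscQueue:
    """Single-producer/single-consumer byte queue with post/ack counters."""

    def __init__(self, size):
        self.size = size
        self.buf = bytearray(size)  # stands in for the shared pages
        self.post = 0               # written only by the producer side
        self.ack = 0                # written only by the consumer side

    def pending(self):
        n = (self.post - self.ack) % MOD
        if n > self.size:
            # The other side violated the protocol: stop trusting the queue.
            raise RuntimeError("queue corrupt: post ran past ack")
        return n

    def write(self, data):
        if self.pending() + len(data) > self.size:
            return 0  # queue full; caller retries, spins, or blocks
        for b in data:
            self.buf[self.post % self.size] = b
            self.post = (self.post + 1) % MOD
        return len(data)

    def read(self, n):
        n = min(n, self.pending())
        out = bytearray()
        for _ in range(n):
            out.append(self.buf[self.ack % self.size])
            self.ack = (self.ack + 1) % MOD
        return bytes(out)

q = SpscQueue(8)
q.write(b"hello")
print(q.read(5))  # b'hello'
```

The `pending()` check is the whole defense against a malicious writer: a peer that scribbles on its counter can only force the queue into an "unsafe, stop using it" state, never trick the reader into reading outside the buffer.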

------
gillianseed
In the most recent talk I saw with Tanenbaum regarding Minix and performance,
he mentioned a 20% decrease compared to monolithic kernel based systems, which
he himself was happy to sacrifice for the greater system stability it offers.

Of course, the big standout for me has always been that if micro-kernel based
systems of today are just as fast, or close enough in terms of performance,
why aren't we getting any benchmarks showing this off?

It would be fantastic promotion to show your system running against Linux or
the BSD's and performing as well or near as well in heavy workloads while at
the same time offering the great stability of typically being able to restart
modules should they crash and thus not take down the whole system.

The micro-kernel vendors should be shouting it from the rooftops, 'here are
the benchmarks!', but they don't as far as I can tell.

The last time I saw a modern micro-kernel vs modern monolithic comparison was
probably this one way back in 2007, and in this Minix 3 did not do very well
against Linux.

[http://lwn.net/Articles/220255/](http://lwn.net/Articles/220255/)

~~~
nanolith
Well, most microkernel vendors have niche markets that aren't directly
competing against Linux. Minix 3 is a bit of an outlier. Although it is a
desktop and "server" OS like Linux, it's really meant to be more of a teaching
OS than anything else.

If you want to compare microkernel operating systems, look at QNX, L4, Windows
CE, etc. These are used in embedded and RTOS spaces where performance is
critical. They seem to hold their own pretty well, and they run in systems
where Linux would not be a good alternative due to size and porting cost.

~~~
Gibbon1
In a lot of these cases a small footprint and guaranteed latency are vastly
more important than raw speed. With a task that runs 20 times a second, it's
not important whether it finishes in 1 ms or 3 ms, but having the system go
away for 500 ms is not acceptable.

~~~
nanolith
I guess I should have clarified that "reliable and predictable" performance is
critical. In practice though, these systems have fast enough context
switching.

I agree with you that "fast enough" is the only metric that matters here.

------
SwellJoe
I suspect one of the "toy" operating systems being written in Rust (or some
other safe/modern systems language) will end up being the Linux of 20 years
from now. So, it seems relevant to see how this particular holy war plays out
(once again). Linus nixed the idea of a microkernel for the dominant operating
system for building the web while it was still a toy...I don't know that he
was wrong to do so, but it might be interesting to try a different path for
the next generation.

~~~
cvwright
Wouldn't that be cool?

Another neat idea would be to build on an existing microkernel like seL4, but
write the userspace -- all the drivers, memory manager, filesystems, etc -- in
Rust. The NICTA team already has proofs of correctness for the microkernel
itself, so why reinvent the wheel? But at the same time, those proofs were
super expensive and difficult to make. It would be awesome if we could get a
reliable userland more easily by using a better language.

I wonder what it would take to get Rust compiling for seL4?

~~~
cmrx64
You can already use Rust on seL4, today! It's not ready for public consumption
yet since there are no docs, but I've been working on it a lot recently and
hope to have a release of the foundational libraries sometime in the next two
weeks.

There's a mailing list, if you want to watch:
[https://lists.robigalia.org/listinfo/robigalia-dev](https://lists.robigalia.org/listinfo/robigalia-dev)

And the (still in-progress) source code:
[http://gitlab.com/robigalia/sel4-sys](http://gitlab.com/robigalia/sel4-sys).

I have some ideas about encoding RPC protocols as DFAs in Rust's type system,
and then doing verification of the protocols in a more abstract setting
(perhaps NuSMV or a similar checker). This is somewhat similar to the (ancient
at this point) "protocol compiler" that used to be in Rust, doing a similar
thing for channels.
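A hedged sketch of the underlying idea, as a runtime checker in Python (the Rust type-system encoding would reject the same illegal transitions at compile time; the states and messages here are invented for illustration):

```python
class ProtocolError(Exception):
    pass

class ProtocolDFA:
    """Validate a message sequence against a protocol's state machine."""

    def __init__(self, start, transitions):
        # transitions: {(state, message): next_state}
        self.state = start
        self.transitions = transitions

    def step(self, message):
        key = (self.state, message)
        if key not in self.transitions:
            raise ProtocolError(
                f"{message!r} not allowed in state {self.state!r}")
        self.state = self.transitions[key]
        return self.state

# A toy request/reply protocol: open, then alternate call/reply, then close.
rpc = ProtocolDFA("closed", {
    ("closed", "open"): "idle",
    ("idle", "call"): "waiting",
    ("waiting", "reply"): "idle",
    ("idle", "close"): "closed",
})
rpc.step("open")
rpc.step("call")
rpc.step("reply")   # a second "reply" here would raise ProtocolError
```

The appeal of moving this into the type system is that each state becomes a distinct type and each message a method consuming it, so an out-of-order send is a compile error rather than a runtime fault, and a model checker can verify the abstract DFA separately.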

I'm interning at Data61 (née NICTA) on the verification team starting Jan 18,
and I'm hoping to transfer some of that experience and knowledge into
verifying Rust.

~~~
nanolith
I would love to see some blog posts regarding your experience during this
internship.

I am working on some formal verification myself for some commercial projects,
and I enjoy reading about stuff like this.

------
geofft
I believe it is in fact correct that, at least as far as the market was
concerned, one of the big reasons for the failure of microkernels in the
mid-'90s was RPC overhead.

But it's no longer the mid-'90s, and everyone's been working on super
efficient RPC (Cap'n Proto, msgpack, BSON, etc.) for unrelated problems. I
have been suspecting for a while that a brand-new microkernel based firmly on
mid-'10s technology (and targeting mid-'10s processors) would be perfectly
performant.

~~~
toast0
I don't have a lot of experience with microkernels, but I would imagine the
objections over communication are not a result of serialization of the
messages; they would tend to be C structs, more or less, not a terrible XML
document, so creating them wouldn't be a significant performance issue,
although actually doing the IPC may be a performance problem (or may be
perceived to be one).

I think the market issue for microkernels is that

a) there aren't a lot of choices for microkernels that are 'production ready',
today I could reasonably run a website on Windows NT, or OS X, but the Hurd is
perpetually not ready.

b) Monolithic kernels are 'good enough'. The benefits microkernels promise
don't really make a huge impact (or aren't perceived to). Yes, it would be
great if my filesystem driver didn't crash the whole system when it crashes,
but if it crashes in a microkernel, I expect I would still have to restart all
the processes that had open files, because the state is lost; and either way,
I'd rather not have my filesystem driver crash. Most users of monolithic
kernels don't spend a lot of time developing or debugging kernel drivers, so
developer efficiency there doesn't really pay off. Most of the problems I
track down to a kernel issue are things that could be equally present in
either style kernel: bugs/inefficiencies in the tcp stack, stupid disk
controller resource leaks, etc. Maybe it would be easier to track them down if
the tcp process was clearly spinning on all the cpu, or the disk controller
process had a ton of ram/resources assigned to it, but it's not that hard to
track things down anyway -- figuring out why they're broken is much harder
than figuring out where they're broken.

~~~
collyw
"but the Hurd is perpetually not ready"

People used to say the same about Perl 6! :)

------
TheCondor
TRON is a microkernel now? Interesting...

I think there is a more pragmatic way to understand the microkernel's issues.
Performance is but one, and over the decades it has become less of an issue,
but it hasn't vanished, which should be noted; you usually have to outdo the
technology you want to push out. Have you ever modularized something too much?
It is a meme here: abstractfactoryfactorybuildersingleton. You break things
down into simple singular-purpose components that are easier to build
correctly and reusable, then all of a sudden you have your favorite Java or
.net framework with abstract factory factories.

Take jboss, for example: great code, high quality code, it really works.
Complex as all fuck to start using. Then when you want to understand something
like jboss serialization, it's spread among a dozen classes in a way you'd
never have imagined when you started thinking about it; you just want some
function to look at that implements their algorithm. It didn't get that way by
accident or even stupidity; it's too reusable. Or rather, their extreme
modularity comes at a cost. Maybe it's worth it, but I don't see a lot of
people bragging about a new jboss based whatever here...

Apply this to your operating system and ask yourself why. Why does netbsd port
so well to new hardware? And why do people do that? Why is it more difficult
to write a posix service on top of a microkernel than on a monolithic one?
Surely, it's a smaller chunk of code; it should have less complexity, be
easier to write, right? It's not, just look at Hurd. The cost is the price of
the modularity: your posix service needs to rely on other services, those
dependencies need to be coded for, faults handled, etc. What does posix open
do when the filesystem service is not responding? (Hint: it depends on the
type of fault and whether or not it's recoverable, or you think it is.)

They should logically be easier to build, and because of that we should be
crapping them out left and right. Plan 9 should be a microkernel. With the
current leaders in operating system semantics, posix and Windows, the modular
implementation is more complex than the monolithic one.

There are a bunch of good ones out there, just pick a microkernel and start
building a good BSD or Linux drop in, should be easy, right? Maybe if you use
it as a hypervisor...

------
nwmcsween
I never understood why exokernels stalled out; the idea of a kernel simply
being a hardware multiplexer sounds great. IMO the problem that will always
plague microkernels is the overhead of context switching, and from a technical
standpoint it just seems like a waste of cycles considering the possible
alternative. The alternative I would want to see is an exokernel with a data
driven API, possible batching, some form of scheduler activations, and proof
carrying code to allow loading of 'modules' from unprivileged userspace. The
issues would be that 'syscall' time could vary greatly depending on what was
requested, and the logic to optimize what a call actually has to do would have
to be within the kernel.

~~~
Rusky
In addition to nostrademons' point about virtualization, exokernels have
influenced monolithic kernels in particular situations. For example, the Linux
kernel's graphics stack is moving away from rendering through the X server,
towards directly rendering into a buffer using libraries (Wayland is only the
most recent step in this direction).

------
bluejekyll
This is an excellent write-up. I've often wondered why Linus never moved
towards a microkernel. I read all of these same debates, and they always
seemed short-sighted.

Given the recent trend towards containers, it's obvious that people can now
see how much better a model this is for micro-services.

It's just a matter of time before we start seeing more microkernels to run
things like unikernels and such. Docker is just a stepping stone.

~~~
Rusky
Unikernels don't run on microkernels, they run on hypervisors, AKA exokernels.

The difference is that microkernels move abstractions like the file system
into server processes that still have to be trusted, but are at least
isolated; exokernels and hypervisors remove the abstractions from the trusted
code base entirely, and just multiplex the hardware directly. Unikernels thus
simplify the typical virtualization stack so it looks more like MIT's
exokernel research projects, with abstractions in untrusted libraries.

------
mcguire
* [http://tech-insider.org/linux/research/acrobat/960112.pdf](http://tech-insider.org/linux/research/acrobat/960112.pdf) See the Byte Benchmarks Results on page 12 for a laugh. (I mean, really. That is _the_ funniest performance benchmarking chart I have ever seen, in half of a career that involved a lot of performance work.)

I can personally vouch for the 2x performance slowdown using (a derivative of)
the Mach microkernel under IBM's WorkplaceOS project.

* [https://www.usenix.org/legacy/events/osdi99/full_papers/spat...](https://www.usenix.org/legacy/events/osdi99/full_papers/spatscheck/spatscheck.pdf) figure 8. Note that Scout is a microkernel that does _no_ memory protection, Linux uses two domains (user space/kernel space), and the "Accounting" version of Escort/Scout uses 4 or 5. Performance is roughly proportional to the number of protection domains.

------
grok2
One of the problems with micro-kernels is that everyone re-invents
abstractions over the basic kernel services. Micro-kernels might be useful in
some subset of cases, but it is still useful to have an OS layer above the
micro-kernel, at which point maybe the micro-kernel is not so useful after all
as a general purpose OS mechanism.

------
monkmartinez
I would like to read the article, but my eyes do not appreciate white text and
black BG web pages. It is incredibly jarring and almost painful once you go
"back" to the "normal" web. I will stick with the comments for now I guess.

~~~
geofft
In Chrome: right-click something on the page, click "Inspect element," click
the link to style.css that shows up, select all and backspace. You actually
get some fairly nice colors.

(This comment is not intended as an opinion in either direction on whether the
original color scheme is reasonable or whether you doing these steps is
reasonable.)

~~~
MBCook
The problem is things like that can't be done on my iPad. In such cases I
usually rely on reader mode but for whatever reason it's not available for the
page.

White-on-black text isn't normally that bad, I think it's the choice of thin
font that makes it especially hard to read in this article.

~~~
Timethy
Hi MBCook, the reader view works on my iPhone for this article.

~~~
MBCook
Interesting. I wonder if using 1Blocker somehow affects it.

~~~
MBCook
Remembered this so I decided to test it.

Sure enough, loading w/o 1Blocker enabled reader mode.

Very interesting.

------
platform
I expect micro kernels will be more prevalent in the future. Hardware
platforms will have to evolve to have more physical independence and
redundancy (both by replication, and different-vendor components assigned to
have same functions). I am of a view, that micro-kernels are better positioned
to manage those types of hardware platforms

------
jeffdavis
This discussion would be more interesting in the context of major new
developments in OS kernels.

Microkernels are basically solving an engineering problem. That's exciting
when engineering is the bottleneck holding up a bunch of really innovative
ideas.

So what are those amazing kernel ideas being held back by engineering
difficulties?

------
bitmapbrother
And now for the opposing view on why Microkernels suck:

[https://www.kernel.org/doc/ols/2007/ols2007v1-pages-251-262....](https://www.kernel.org/doc/ols/2007/ols2007v1-pages-251-262.pdf)

------
TazeTSchnitzel
QNX was the core of BlackBerry's OS on the PlayBook and newer phones.

(Also, isn't OS X kinda sorta sitting atop Mach? I think it's impure or not
even Mach these days, though. EDIT: Ah, XNU is a hybrid which adds kernel-mode
drivers and BSD.)

~~~
cmrx64
The "Secure Enclave coprocessor" on the iPhone runs a modified L4.

------
nqzero
#11: microkernel advocates use seizure-inducing black backgrounds with white
text

~~~
derefr
I'm pretty sure nobody had a seizure staring at a DOS prompt; and that "white
on black" is one of the settings in iBooks, Pocket, etc. for a reason.

------
fleitz
Microkernels are worse than slow, they don't even boot.

------
kmicklas
Microkernels seem to be solving all the right problems (namely, abstraction
and safety) in all the wrong ways.

When you realize that kernel-mode vs userland is a social construct and not
inherent, you see that the microkernel is just attempting to recover safety
from the processor's built-in paging and protection capabilities. The correct
solution of course is just to use a memory safe language.

Likewise, the microkernel encourages abstraction due to the pain of RPC
boilerplate, but modern languages and sound engineering practices should make
this easy anyway.

~~~
derefr
That doesn't solve the problem of running someone _else's_ arbitrary
pre-compiled binary (whether that be an application or a driver) without letting
it crash your computer. Which is kind of _the one thing_ you expect an OS to
be able to provide you; otherwise there's no reason not to just let every
binary execute directly on ring 0 of the CPU.

(Though, I mean, you can _sort of_ get around this requirement if you only
accept "binaries" that are actually AST or high-level bytecode, and then
finish compiling them in kernel-space at module-load time. OSX and iOS could
get away with doing this soon-ish given that all new binaries submitted to the
App Store are submitted as LLVM bitcode. Microsoft could probably get away
with it for Metro apps as well.)

------
alexnewman
Linux is practically a microkernel when it comes to ease of development. What
people should be interested in is unikernels. The OS is a crutch.

~~~
alexnewman
Why did i get down voted for this?

~~~
alexnewman
Was i rude or impractical?

