

Microkernels vs. hypervisors (2008) - s-phi-nl
http://www.ok-labs.com/blog/entry/microkernels-vs-hypervisors/

======
KMag
At least back in the days of the first Android phones and the original
iPhone, smartphones needed at least two cores: one dedicated to a real-time
microkernel running the cellular radio, and the other running Linux / XNU
and the Android/iOS applications.

These days, do Android and iOS run paravirtualized kernels on top of the
real-time microkernel, so that they don't have to dedicate a whole core to
managing the radio and other real-time tasks? Do they still dedicate a
(smaller?) core to the RTOS? Paravirtualization involves fewer changes than
turning an existing OS kernel into a hard-realtime, low-latency kernel, but
it's also possible that modern Android and iOS phones use hard-realtime,
low-latency Linux and XNU kernels. BlackBerry's QNX started out as a
hard-realtime, low-latency kernel serving as the application kernel, so
BlackBerry is more likely using QNX itself to control the radio and other
hard-realtime tasks rather than running a second microkernel on the phone to
handle the radio.

~~~
pronoiac
When I read "cores" I think "on the same chip," so that took some re-reading
to parse. Handling the radio is the job of the baseband processor, which has
its own RAM and firmware, in a separate package from the CPU. It looks like,
as of the iPhone 5s, it's still a separate processor. I think that the
vagaries of FCC licensing will keep it separate; otherwise, you'd have to
retest it for interference whenever you revved your CPU or perhaps even your
firmware. I think most baseband firmware is proprietary and tightly
controlled; for the paranoid, it presents an attack surface that's very hard
to examine.

~~~
wmf
Basically only the iPhone has a separate baseband chip; most other phones put
the baseband on the same chip as the application processor.

~~~
count
I've not seen any phone with a baseband on the same chip (or at least, one
that shared any of the resources of the primary CPU). FCC rules make that,
as the GP mentioned, extremely tricky if possible at all. What phones have
you seen that do have an integrated baseband?

~~~
wmf
Any that use a Qualcomm chip.

As for FCC rules, in general it's safe to assume that the regulated are
smarter than the regulators. The baseband core is probably "logically
separate" or something to meet FCC regulations.

------
MyDogHasFleas
The real question is, what is the interface you are virtualizing, and how do
the characteristics of that interface influence the design of the OS you are
building?

Hypervisors virtualize the hardware interface: the user-mode and privileged-
mode instruction sets, the I/O ports and channels, the memory model. This is
good because that interface is stable and well-understood, allows for a high
degree of isolation between processes, and has a large number of applications
(operating systems) already written to the interface ready to run on the
hypervisor. This is bad because the interface is lower-level than one would
like as an API for programmers, requiring another layer (the OS, or at least
a kernel) to make it usable.
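To make the "virtualize the hardware interface" idea concrete, here's a toy
trap-and-emulate sketch in Python. Everything in it (the instruction names,
the register, the virtual I/O log) is invented for illustration; it is not
real hypervisor code, just the control-flow shape: unprivileged instructions
run directly, privileged ones trap to the hypervisor, which emulates their
effect against virtual hardware state.

```python
# Toy trap-and-emulate model. Instruction set and names are invented.
PRIVILEGED = {"out", "halt"}  # instructions that trap in guest mode

def run_guest(program):
    regs = {"acc": 0}   # guest-visible register state
    vm_io_log = []      # virtual device state, owned by the "hypervisor"

    def hypervisor_trap(op, arg):
        # Emulate the privileged instruction on virtualized hardware.
        if op == "out":
            vm_io_log.append(regs["acc"])  # write lands on a *virtual* port
            return True
        if op == "halt":
            return False                   # stop dispatching the guest
        return True

    for op, arg in program:
        if op in PRIVILEGED:
            if not hypervisor_trap(op, arg):   # trap into the hypervisor
                break
        elif op == "load":
            regs["acc"] = arg                  # unprivileged: runs directly
        elif op == "add":
            regs["acc"] += arg
    return regs, vm_io_log

regs, io = run_guest([("load", 40), ("add", 2), ("out", None), ("halt", None)])
print(regs["acc"], io)  # 42 [42]
```

The guest never sees that its "out" went to a log rather than a real port,
which is the whole trick: the interface below it looks like hardware.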

Microkernels, or kernels in general, provide a system-call interface that is
defined by the OS designer. This is good and bad in a mirror-image sort of way
vs. the hypervisor. It is good because the interface is more suited to
programmers, providing OS level services rather than presenting as bare metal.
It is bad because the interface is less stable (evolving in software time
rather than hardware time), less provably or practically isolating between
processes, and has a smaller ecosystem of applications already written to that
interface (perhaps zero if it's brand new).
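By contrast, a microkernel's designer-defined interface can be tiny: often
little more than message passing, with OS services implemented as user-level
servers on top. The sketch below is a hypothetical illustration of that
shape (all class and port names are invented), not any real microkernel's
API.

```python
# Sketch of a microkernel-style interface: the "kernel" offers only
# ports and message passing; services live in user-level servers.
from collections import deque

class MicrokernelSketch:
    def __init__(self):
        self.mailboxes = {}          # port id -> queue of pending messages

    def create_port(self, port):
        self.mailboxes[port] = deque()

    def send(self, port, msg):       # essentially the whole syscall surface
        self.mailboxes[port].append(msg)

    def receive(self, port):
        box = self.mailboxes[port]
        return box.popleft() if box else None

# A user-space "name server" built entirely on that tiny interface.
kernel = MicrokernelSketch()
kernel.create_port("name_server")
kernel.send("name_server", ("register", "fs", "port_7"))
op, service, port = kernel.receive("name_server")
print(op, service, port)  # register fs port_7
```

Note that nothing here resembles POSIX: that is exactly the
applications-written-to-the-interface problem the parent describes.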

It is useful to view kernels, microkernels, and hypervisors as points on a
spectrum of virtualized interfaces. One can see hypervisors moving towards
kernels with virtual additions to the hardware interface, such as
paravirtualized I/O devices, special trap instructions to communicate with the
hypervisor, and optimizations such as page sharing, snapshotting, and live
migration.

Microkernels can also be viewed as moving to a smaller, cleaner system call
interface on the spectrum, with greater stability and isolation between
processes than a "big" kernel. But the issue of lack of applications written
to the interface remains.

Linux containers hit a sweet spot on this spectrum, IMO. Linux
containers virtualize the Linux system call interface, which has become
relatively stable, certainly has a large body of applications written to it,
and provides a "good enough" degree of isolation among its containers (which
keeps improving).
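A toy model of what that container isolation virtualizes: not the hardware,
but the kernel's system-call view. Real containers do this with kernel
namespaces and cgroups; the sketch below (all names and numbers invented)
just shows the idea of one shared "kernel" handing each container a remapped
view through the same call interface.

```python
# Toy model: containers share one kernel but see remapped views of it.
class HostKernel:
    def __init__(self):
        # Shared state; a container's files live under a host subtree.
        self.files = {"/containers/a/etc/hostname": "web-1"}

class ContainerView:
    def __init__(self, host, root, pid_offset):
        self.host = host
        self.root = root               # mount-namespace-like path remapping
        self.pid_offset = pid_offset   # PID-namespace-like renumbering

    def open(self, path):
        # Same "syscall" as on the host, but resolved under the remapped root.
        return self.host.files[self.root + path]

    def getpid(self, host_pid):
        # The container sees its own private PID numbering.
        return host_pid - self.pid_offset

host = HostKernel()
c = ContainerView(host, root="/containers/a", pid_offset=4000)
print(c.open("/etc/hostname"))  # web-1
print(c.getpid(4001))           # 1
```

Because the interface being virtualized is the Linux syscall ABI itself,
ordinary Linux binaries run unmodified inside the container, which is the
"large body of applications already written to it" advantage.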

IMO the ascent of Linux containers is on a hockey stick curve right now (due
to Docker having simplified and standardized its interface and container
format/management) and will overrun any debate like this between hypervisors
and microkernels, for those of us not doing academic OS research. Virtual
machines will not go away, but will be used more for their original purpose,
to allow running completely different OSes on the same hardware, and to
provide completely isolated OS instances, rather than as general purpose
compute capabilities a la IaaS.

