
Exterminate All Operating System Abstractions (1996) [pdf] - vezzy-fnord
http://www.cs.berkeley.edu/~brewer/cs262b/hotos-exokernel.pdf
======
jws
This is one of the first papers introducing exokernels. The paper argues for
breaking the shackles of OS abstraction and giving direct (but mediated)
control to the programs.

I like the idea, and am experimenting with it in my own OS. In the mainstream
world, we see this paper's goals partially appearing with the people who give
a network card directly to a server process in Linux.

If they said which architecture they were thinking about, I missed it. On x86
without an IOMMU I can't think of any card I would trust. IOMMU implementations
are so spotty I'm not sure I'd trust them either. There is a great deal to be
said for keeping device access locked away in the kernel, where you know which
features of the card you will use and, if it has a mode that lets it scribble
elsewhere in memory, that you aren't using it.

~~~
ykumar6
Yes, but do software companies really care about "giving direct (but
mediated) control to the programs"?

What is the trade-off between that, and the added complexity in understanding
all the different ways you can program something?

IMO, developers want a simple and standardized model to build applications on
operating systems so they don't have to care about the internals.

This article represents a view that has been marginalized by the larger
industry as a whole.

~~~
dang
When the authors talk about applications having their own OS, I think they
have in mind infrastructure programs like databases and web servers: complex,
low-level programs that have domain-specific needs from the hardware and can
gain greatly by using it only in the way that they need to.

They obviously don't mean that every CRUD app should build its own OS, since
that would be absurd.

~~~
greenyoda
A database is a good example. For example, a database engine might benefit
from being able to read large raw blocks from a disk and directly optimizing
its own disk seeks, rather than using the common abstraction of a file as
being just a stream of bytes.
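
To make the contrast concrete, here is a toy sketch (all names and the block size are invented for illustration, and an ordinary file stands in for the raw disk): a reader that addresses fixed-size blocks and manages its own tiny cache, the way a database engine runs its own buffer pool rather than leaning on the kernel's byte-stream/page-cache abstraction.

```python
import os
import tempfile

BLOCK_SIZE = 4096  # hypothetical raw block size


class BlockReader:
    """Reads fixed-size blocks with pread and caches them itself,
    standing in for an engine that decides its own caching and
    eviction policy instead of deferring to the OS."""

    def __init__(self, fd):
        self.fd = fd
        self.cache = {}  # block number -> bytes; the app, not the OS, owns policy

    def read_block(self, n):
        if n not in self.cache:
            # Block-addressed access at an explicit offset, not a
            # sequential byte-stream read.
            self.cache[n] = os.pread(self.fd, BLOCK_SIZE, n * BLOCK_SIZE)
        return self.cache[n]


# Demo: write two "disk blocks", then fetch one by block number.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"a" * BLOCK_SIZE + b"b" * BLOCK_SIZE)
    path = f.name

fd = os.open(path, os.O_RDONLY)
reader = BlockReader(fd)
print(reader.read_block(1)[:3])  # addressed by block, not by stream position
os.close(fd)
os.unlink(path)
```

On Linux the real version of this idea would open the device with O_DIRECT and aligned buffers to bypass the page cache entirely; the sketch above only shows the block-addressed shape of the interface.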

~~~
nickpsecurity
You're right: it was proven with the invention of object-based disk storage,
which kept things simple. Think NoSQL key-value stores vs. common SQL databases
in performance. Eliminating complexity in the underlying mechanism while
allowing the app to decide how it's used can certainly benefit performance.

------
greenyoda
Pushing operating system abstractions into user-space was pretty much what
IBM's VM/370[1] operating system did (starting in the late 1960s!). VM/370's
control program (CP), which would be the "kernel" in today's terminology,
controlled the physical hardware resources. Each "process" running under CP
was a virtual machine whose operating system API was mostly just the IBM/370
instruction set (i.e., no abstraction above the hardware architecture). For
example, CP provided only a virtual disk drive that you could manipulate with
machine instructions like SIO (Start I/O) - the file system was implemented by
the code running in user space. You could actually run a different guest
operating system (i.e., a different set of OS abstractions at user level) on
each virtual machine.

The authors of this paper do mention VM/370 in their "Related Work" section
(page 5). They say their approach is similar, except that they "export"
hardware resources rather than emulating them, which is more efficient.

[1]
[https://en.wikipedia.org/wiki/VM_%28operating_system%29](https://en.wikipedia.org/wiki/VM_%28operating_system%29)

~~~
nickpsecurity
And KVM/370 was first to do it with some security. Unfortunately, people need
ACM/IEEE membership for those papers. At least VAX VMM Security Kernel papers
are still around. Pretty much anything Karger wrote will help new efforts. :)

------
joveian
"The poor reliability, poor adaptability, and poor performance of operating
systems would be acceptable if applications could just ignore the operating
system and implement their own abstractions."

My first thought was that it seems like we have this these days with VMs,
although that can still be fairly complicated to set up due to the way that
VMs reliably partition hardware resources. Skimming a bit further I see this
is in some ways the opposite of what they have in mind, which is to even be
able to save registers in a different "process".

I tend to think that operating systems don't provide enough isolation rather
than too much. I don't care too much about how they go about implementing
that. I can see the appeal of a more flexible OS implementation, but I don't
agree about what the OS should ultimately provide.

I wouldn't mind seeing default-available VMs that are treated more like
applications in terms of resource availability and scheduling. It seems like
this would at least partly address many of the concerns within a monolithic
kernel design.

Can anyone recommend other good papers on exokernels? I feel like I'm not
really getting it yet.

~~~
vezzy-fnord
The canonical paper is "The Exokernel Operating System Architecture", which
came out two years later by the same author:
[http://u.cs.biu.ac.il/~wiseman/2os/microkernels/exokernel.pd...](http://u.cs.biu.ac.il/~wiseman/2os/microkernels/exokernel.pdf)

------
burnte
Now, correct me if I'm wrong, but haven't we essentially gone in the exact
opposite direction and strengthened the abstractions separating hardware and
apps? I would assume that eliminating abstractions would reduce portability
unless the market suddenly moved to a unified hardware architecture, while
we've gone the other way: we have numerous platforms of hardware AND software,
and code is generally more portable than ever before due to the use of
standardized languages and APIs.

~~~
vezzy-fnord
Addressed in Section 4, Question 2.

~~~
jws
… in which the authors describe application level device drivers to map
between the physical hardware and a presumably common abstract interface. That
doesn't seem to change the world much over a kernel device driver mapping to a
common abstract interface, except that you can implement a different common
abstract interface if you wish. You'll just have to write new drivers for all
the cards. This is fine for experimentation, but not nice for production.
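
A toy sketch of that point (every class and method name here is invented): applications program against one abstract interface, and each card needs its own driver to map that interface onto its hardware model, so defining a new abstract interface means rewriting the mapping layer for every card.

```python
from abc import ABC, abstractmethod


class PacketInterface(ABC):
    """A hypothetical 'common abstract interface' that applications
    program against, whether it lives in the kernel or in a library."""

    @abstractmethod
    def send(self, frame: bytes) -> None: ...


class FakeCardA:
    """Stand-in for one NIC's register-level programming model."""
    def __init__(self):
        self.tx_ring = []

    def push_descriptor(self, frame):
        self.tx_ring.append(frame)


class FakeCardB:
    """A different NIC with a different hardware model."""
    def __init__(self):
        self.fifo = []

    def write_fifo(self, frame):
        self.fifo.append(frame)


class DriverA(PacketInterface):
    # Application-level driver: maps the abstract interface onto card A.
    def __init__(self, card: FakeCardA):
        self.card = card

    def send(self, frame):
        self.card.push_descriptor(frame)


class DriverB(PacketInterface):
    # A second driver is needed for card B; a new abstract interface
    # would mean writing both drivers again.
    def __init__(self, card: FakeCardB):
        self.card = card

    def send(self, frame):
        self.card.write_fifo(frame)


for drv in (DriverA(FakeCardA()), DriverB(FakeCardB())):
    drv.send(b"\x00" * 64)  # same application code for either card
```

Whether the `PacketInterface` layer sits in the kernel or in a user-space library, the per-card mapping work is the same; the exokernel arrangement just moves where it lives and who can replace it.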

I think the ground shift between 1996 and now is the democratization of kernel
code. These mysterious "kernel architects" referenced in the paper who handed
down a system from on high are now any decent C programmer with an itch to
scratch. If you don't like the Linux/*BSD/whatever abstractions given to you,
then make your own. If people like it and it doesn't offend an influential
developer it can even be widely deployed.

~~~
vezzy-fnord
I disagree. It's much harder now to scratch your itch than ever before. You
can see it in how people like Linus Torvalds have an almost demigod status in
that kernel hacking is considered to be a highly arcane activity. This is in
stark contrast to the past, where writing operating systems was a much more
mundane and common activity, not unlike writing your own web framework today.

Certainly some shortcuts like DDE, rump kernels and emerging flavors of libOS
are now coming around, but it's still an uphill battle.

~~~
pjc50
_writing operating systems was a much more mundane and common activity_

[citation needed] - outside of academic environments. I suppose the embedded
environment might count as building your own micro-operating system (only one
task? don't bother with a scheduler etc).

~~~
nickpsecurity
Just look at the Wikipedia Timeline of Operating Systems, with special
attention to the number of distinct OS's in any given time period. You'll find
it drops off sharply as it nears the present if you count stuff that was
actually deployed or in working condition. Well, that was the situation when I
looked at it a year ago. Hopefully you'll come back with good, contrary news. :)

Another example is the sheer number of dead search results I get for OS
projects on sites that track them. Almost all are from before 2010, with most
closer to the late 90's. I'm not sure why this is, but there's just hardly
anyone doing it anymore. Maybe it's because they google it and see how hard it
is, with all the dead projects. ;)

------
ykumar6
Abstractions are important because they reduce complexity and simplify
operations.

The foundation of computing is based on abstraction: using layers and
interfaces to hide complexity, so developers can focus on higher-order

This article argues against abstractions, citing security, performance and
cost. But time and again it has been shown that the most costly component in
software is human time, and simplifying the underlying architecture is worth
the trade-offs.

~~~
userbinator
_Abstractions are important as they reduce complexity, and simplify
operations._

...but only when used correctly. Abstraction is a means to an end, not the end
itself. Unfortunately, years of CS education seem to have taught most people
that it's the other way around, causing massive increases in design complexity
that are only justified by dogmatic adherence to "more abstraction is better".
Some programming language communities are more disposed to this effect than
others (e.g. Java).

Correct application of abstraction is not common in mainstream software, and
rather difficult to describe, but the simplicity and clarity is unmistakable
when one encounters it. The occasional articles on HN about seemingly
impossibly tiny programs are good examples.

This article is a bit "X considered harmful" reactionary but I see their point
- often, software today is on the side of _far too much_ abstraction.

~~~
ykumar6
" Unfortunately, years of CS education seem to have taught most people that
it's the other way around, causing massive increases in design complexity that
are only justified by dogmatic adherence to "more abstraction is better"."

Abstractions only create design complexity when they are applied incorrectly.
Abstractions should scale horizontally across a layer of the software stack
(VMs, storage such as NFS, APIs, etc.). If you create a single-use abstraction,
it's not really an abstraction but a complexity.

~~~
vezzy-fnord
NFS is pretty much a canonical example of a poor abstraction.

------
javajosh
Very interesting: this is directly analogous to something that happened in
Java: people started out writing native code to conform to a standard Java
interface. They ended up writing Java that mapped to the native code very
closely, and then abstracted all of those implementations _in Java_. SWT is
the poster child for this movement, but it happened in other places too.

------
cpeterso
One of the paper's authors, Dawson Engler, went on to found Coverity.

------
nickpsecurity
One of the few times they learn from the past (VM/370) with good results.
Moving along that spectrum, you'll find security-related projects that did this
while enforcing a security policy on the processes or guests. The KeyKOS
nanokernel architecture is one. The VAX Security Kernel is another, more like
what people do today. Start with the most robust design, then optimize from
there.

------
daxfohl
Seems similar to the direction DirectX has taken lately?

~~~
Rusky
DirectX and Vulkan are perfect examples of the exokernel philosophy. Another
good example is the evolution of X11, culminating with Wayland, where
applications draw the UI themselves using libraries instead of sending
primitive draw commands to a server.

