
Introducing Network Containers - pkcsecurity
https://www.zerotier.com/blog/?p=490
======
bpolverini
This is the bomb. Interesting to see someone borrowing a page from microkernels
to bring the networking stack into userspace.

Hopefully, technologies like this create a renaissance for protocol design. ZT
does the heavy lifting in terms of security and routing, and devs are just
left with a flat network that can be optimized for the particular application.
We've already had some pretty big wins using ZT to do this for encrypting
links between our own microservices -- it'll be even nicer when we can do this
at the container level, without any additional Linux kernel hackery.
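
For the curious: the usual trick for pulling socket traffic into userspace
without kernel modules is an LD_PRELOAD shim that intercepts the libc socket
calls. Here's a minimal sketch of that general technique in C -- illustrative
only, not ZT's actual shim; a real userspace stack would service the call
itself rather than log it and pass it through:

    /* shim.c -- sketch of an LD_PRELOAD socket intercept.
     * Build: gcc -shared -fPIC -o shim.so shim.c -ldl
     * Run:   LD_PRELOAD=./shim.so some_program
     */
    #define _GNU_SOURCE
    #include <dlfcn.h>
    #include <stdio.h>
    #include <sys/socket.h>

    static int (*real_socket)(int, int, int);

    int socket(int domain, int type, int protocol) {
        if (!real_socket)  /* look up the real libc symbol once */
            real_socket = (int (*)(int, int, int))dlsym(RTLD_NEXT, "socket");
        fprintf(stderr, "intercepted socket(%d, %d, %d)\n",
                domain, type, protocol);
        /* A userspace network stack would create and manage its own
         * endpoint here; this sketch just passes through to libc. */
        return real_socket(domain, type, protocol);
    }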

~~~
api
(Post author and ZeroTier founder here)

I thought about titling this blog announcement "Tanenbaum! Thou art avenged!"
but decided to stay away from the link bait. That, and I hugely admire Linus. I
actually think they were both right.

For those who don't know, I am referring to the epic smackdown that occurred
many years ago between Tanenbaum and Torvalds over microkernels:

[http://www.oreilly.com/openbook/opensources/book/appa.html](http://www.oreilly.com/openbook/opensources/book/appa.html)

I think Tanenbaum was right from a theoretical, academic point of view, but
Torvalds was right from a pragmatic ship-it-now point of view. He was also
right from an early 1990s point of view.

Linus was right because monolithic kernels are easier to build and faster for
monolithic system use cases. In the 1990s, individual commodity computer
systems simply weren't fast enough for extreme multi-tenancy; at most you'd
see multi-purpose use within an organization. That is the sort of multi-tenancy
Unix was designed for: a box used by, say, a team or a university department.
For this use case a monolithic kernel makes sense for performance and
simplicity reasons. The advantages of microkernels just don't matter much,
since the box generally has one admin and all the users are trusted. Messing
with the kernel is not a big deal.

But then Moore's Law happened. Now just about anything above a Raspberry Pi is
fast enough to host _lots_ of stuff. Big servers with dozens of cores can host
hundreds, sometimes even thousands, of services. While they take up only single
racks, these machines are "mainframes" in the classical sense and can easily
see diverse workloads belonging to multiple clients. It's incredibly wasteful
to single-purpose these machines and let them sit idle.

So far we've mostly hacked around this with virtualization, but virtualization
is heavy and ugly. Think of it as a thick microkernel that runs a bunch of
other whole kernels as services.

On huge machines the advantages of microkernels really shine. You can extend
the system without having 'root' in a global sense. Crashes of individual
system extensions don't bring down the whole machine. Everything is multi-
threaded to take advantage of lots and lots of cores. It's even possible to
run system services in jails where they only have access to the resources they
need so that a compromise (e.g. buffer overflow) in a driver does not permit
compromise of the entire system.
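
To make that jail idea concrete in Linux terms, here's a minimal sketch using
namespaces: the child is cut off from the host's network and mounts before it
execs, so a buffer overflow in that service can't reach the rest of the box.
(Illustrative only; the service path is a placeholder, and a real sandbox
would also drop capabilities, apply seccomp filters, and so on.)

    /* jail.c -- sketch: run a program cut off from the host's network
     * and mount namespaces. Requires root (or CAP_SYS_ADMIN).
     * Build: gcc -o jail jail.c
     * Run:   sudo ./jail /usr/bin/some-service   (placeholder path)
     */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(int argc, char *argv[]) {
        if (argc < 2) {
            fprintf(stderr, "usage: %s <program> [args...]\n", argv[0]);
            return 1;
        }
        /* New network namespace: the child sees only an isolated (down)
         * loopback interface. New mount namespace: its mount changes
         * stay invisible to the rest of the system. */
        if (unshare(CLONE_NEWNET | CLONE_NEWNS) != 0) {
            perror("unshare");
            return 1;
        }
        execvp(argv[1], &argv[1]);
        perror("execvp"); /* reached only if exec fails */
        return 1;
    }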

So in the end I think Moore's Law has turned the debate a bit toward
Tanenbaum. But Linux has so much momentum I doubt it's going anywhere. Instead
I think we will see Linux evolve toward being more microkernel-y in the areas
where this is most desirable. Network virtualization is one such area. Another
is security zones. Soon we'll have really secure containers, allowing Docker
multi-tenancy, etc.

But I wouldn't count out alternatives either. There's an opportunity here for
something like Hurd if they could ever get that thing really moving. (Get the
Hurd moving... huh huh huh...)

Edit: Torvalds was also right because Mach, at least at the time of the
debate, was pretty slow. Mach is still not the best microkernel design. See
this deck:

[https://www.cse.unsw.edu.au/~cs9242/04/lectures/lect05b.pdf](https://www.cse.unsw.edu.au/~cs9242/04/lectures/lect05b.pdf)

