
Linux Containers: Parallels, LXC, OpenVZ, Docker and More - tacon
http://aucouranton.com/2014/06/13/linux-containers-parallels-lxc-openvz-docker-and-more/
======
cmhamill
So, the question that jumps to mind reading this is:

At what point do we acknowledge that we're re-inventing Plan 9 poorly?

I don't mean to be glib (well, maybe a little), but all of this (with some
exceptions — zones and jails, mainly) feels incredibly hacked-together.

Perhaps someone paying more attention could tell me if, say, the folks working
on the Linux kernel are learning the lessons from Plan 9? kernfs seems
promising, at a glance, but I haven't really looked into it.

~~~
api
Outside of build and test systems and extreme legacy software support,
virtualization itself is in the category of things that should not exist.

I should be able to put up a server and sell accounts on that server to the
general public. People should be able to log in, install and run services,
etc. The box should not get instantly pwned, and people should be able to use
it without interfering with each other. _All_ resources -- disk, memory, IP
addresses, bandwidth, etc. -- should be able to be assigned to users and/or
user groups and managed by the box's true super-user(s).

You can't do that, because Linux/Unix _and every other OS_ have a woefully
incomplete and outdated model around things like security, user and group
management, software installation and library organization, quotas, and
privilege isolation.
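
To be fair, Linux does expose pieces of this today, just as disconnected tools rather than one coherent model. A rough sketch of what an admin can already stitch together; the user name, UID, mount point, and limit values below are all hypothetical examples:

```shell
# Per-user disk quota on a filesystem mounted with the usrquota option
# (user "tenant1", the limits, and the mount point are hypothetical)
useradd -m tenant1
setquota -u tenant1 10000000 12000000 0 0 /home   # soft/hard limits in 1K blocks

# Memory and CPU caps for that user's sessions via systemd's cgroup slices
# (assumes tenant1 was assigned UID 1001; MemoryMax is the cgroup-v2-era
# property name -- older systemd versions call it MemoryLimit)
systemctl set-property user-1001.slice MemoryMax=2G CPUQuota=50%
```

IP addresses and bandwidth need yet other tools (network namespaces, tc), which is exactly the fragmentation being complained about here: the knobs exist, but not as one model managed by the box's true super-user.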

The popularity of virtualization and containerization is an admission that
OSes are broken. While they are "multi-user" in a 1970s computer lab sense,
they are all fundamentally single-user OSes from a modern perspective. There
are no true multi-tenant OSes on the market; virtualization is an ugly hack to
make single-tenant OSes host multiple users.

Nevertheless I think it's a situation we're stuck with due to the massive
legacy software investment we have in these platforms. Plan 9 is full of cool
ideas but nobody uses it because nobody uses it, and nobody will use it
because nobody already uses it.

The _only_ way I can see this situation getting better is if someone were to
put _serious_ money behind a really well-engineered alternative. But I don't
see that happening because there'd be no profit in it. OSes are now in the
category of things everyone expects to be free, so there is no longer any
incentive to invest in them.

~~~
skywhopper
Nothing as complicated as an OS capable of running modern software will ever
be "well-engineered". Unix/Linux systems are the culmination of over 40 years
of work by thousands of people on tens or hundreds of millions of lines of
code. You can't replace all of that no matter how much money you pour into
it. Most of the lessons about what's good and bad in the history of
engineering Unix are baked into the existing code, and no human actually
knows all the reasons why it works as well as it does, if anyone ever did. A
new platform may fix some issues, but it will encounter new ones, and it will
inevitably encounter many of the old ones as well.

These systems appear hacked together because that's the only way real-world
systems work. The _only_ way to build a complex working system of any kind is
to start with a simpler system and add to it. But systems have minds of their
own, beyond the control of any engineer contributing to the project, and
anything big and effective enough to compete with Unix/Linux will have just
as many problems, if not more.

~~~
api
That is the standard conservative incrementalist position about systems. I
think it's a denial of the efficacy of conceptual thought.

It often stems from an analogy to biological evolution, but evolution is a
geological-timescale process that occurs over aeons. To use it as a guide to
cultural, social, and engineering progress is the naturalistic fallacy, not
to mention a bit of a category error.

That being said -- I do consider challenging the "crappy old OS +
virtualization" paradigm unlikely, due to the lack of a strong financial
incentive to do the work. The amount of work required is _waaaaay_ beyond
amateur open source hacker thresholds.

It's possible that this lack of a financial incentive betrays a lack of
overall value incentive. Maybe containerization + virtualization, while ugly
and ham-fisted, is "good enough" and a more elegant approach just wouldn't
have enough "win" to it. A similar situation exists with languages like D, Go,
and Rust vs C++. They're better, but they're probably not better enough to
displace the incumbent. Peter Thiel's rule on competition (from the incredible
book Zero to One) is that an upstart alternative usually has to be 10X better
to "disrupt" an established market. I can't imagine a polished-up Plan 9-ish
OS being 10X better than Linux+Docker+KVM on the important metrics. A new OS
would have to be 10X as productive to program, 10X less time-consuming to
admin, 10X more efficient in its use of hardware, 10X more secure, or some
combination thereof that amounts to a 10X win.

------
voidz
I am not sure '(Free)BSD Jails' should be in here, because a jail is not a
Linux container; it's something else entirely, especially when you look at
the details. That said, the article does read as a nice overview, especially
for those who are still undecided among the various options for resource
separation.

~~~
angersock
What are the advantages/disadvantages of using a BSD jail instead of these
other options?

~~~
nisa
Jails are fine for the most part, I think. One problem is resource control:
FreeBSD lacks something like cgroups. That said, aside from disk I/O
throttling, I think every other cgroups feature is already there in FreeBSD,
just implemented in a different way.
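
For what it's worth, the "implemented in a different way" part on FreeBSD is largely the rctl(8) resource-limits framework (it needs a kernel built with RACCT/RCTL support). A minimal sketch, assuming a hypothetical jail named www and made-up limit values:

```shell
# Add per-jail resource rules: subject:subject-id:resource:action=amount
# (the jail name "www" and all limit values are hypothetical examples)
rctl -a jail:www:memoryuse:deny=1g   # cap resident memory at 1 GB
rctl -a jail:www:maxproc:deny=200    # cap the number of processes
rctl -a jail:www:pcpu:deny=50        # cap CPU usage at roughly 50% of a core

# List the defined rules matching that jail
rctl jail:www
```

Later FreeBSD releases also grew rctl resources for disk I/O throttling (readbps, writebps, readiops, writeiops), which covers the remaining gap mentioned above.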

------
ademarre
A side point observed while reading this article:

> _Google + Containers_

> _Not (google+) but rather google using linux containers._

This is the kind of confusion that results when you name a product with a
dangling binary grammatical operator. Marketers really ought to stop doing
that kind of thing.

