These isolation features go back a long way in various OSes, but only in the last few years have they seen mass adoption, which, alas, is running well ahead of any effort to make them a formal standard, or anything even close to one.
And so the way you would create a "container" on Linux is very different from the way you'd do it on FreeBSD or Solaris or Windows. Docker was one of the first pieces of software to facilitate that on Linux (where it was, and still is, far from trivial), along with the whole Docker image / Docker Hub ecosystem, etc. But ultimately what Docker does amounts to issuing the right commands (or system calls), and you can get the same effect without Docker (there's even a nice Docker talk on YouTube somewhere that shows how to do what Docker does from the command line, without Docker).
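To make "the right commands, without Docker" concrete, here is a rough sketch using unshare(1): new namespaces plus a chroot into a prepared root filesystem gets you most of the isolation. It assumes root privileges and an already-extracted rootfs at `./rootfs` (a hypothetical path).

```shell
# Put a shell into fresh PID, mount, UTS, IPC and network namespaces,
# then chroot into the prepared root filesystem. This is the core of
# what a container runtime does on Linux.
sudo unshare --pid --fork --mount --uts --ipc --net \
    chroot ./rootfs /bin/sh

# Inside, mount /proc so tools like ps only see this namespace's processes:
#   mount -t proc proc /proc
```

A real runtime layers on cgroups for resource limits, a network setup for the empty net namespace, and image management, but the isolation itself is just these system calls.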
IMHO, in the developer community at large, few people care to understand the operating system intimately, and even fewer know stuff like cgroups or the tc command on Linux, and that's primarily why Docker is so popular: you don't need to know any of it.
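For a taste of what Docker hides, here is a hedged sketch of limiting a process's CPU by writing to the cgroups v1 filesystem directly. It assumes a typical cgroup v1 mount at `/sys/fs/cgroup`; the group name `demo` is made up.

```shell
# Create a CPU control group and cap it at 50% of one CPU:
# 50ms of runtime allowed per 100ms scheduling period.
sudo mkdir /sys/fs/cgroup/cpu/demo
echo 50000  | sudo tee /sys/fs/cgroup/cpu/demo/cpu.cfs_quota_us
echo 100000 | sudo tee /sys/fs/cgroup/cpu/demo/cpu.cfs_period_us

# Move the current shell (and any children it spawns) into the group.
echo $$ | sudo tee /sys/fs/cgroup/cpu/demo/cgroup.procs
```

Docker's `--cpus=0.5` flag boils down to writes like these.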
I think it'd be nice if people stopped using "Docker" and "container" interchangeably, and if a set of tools emerged that was more a part of the OS distribution and less a separate concept/brand/company/whatever, and was more or less uniform across at least the Un*x-like OSes.
Also containers really didn't get very secure until Google started upstreaming various patches and updates to the kernel's cgroups.
The way Docker runs containers is very different from the way (I assume) Google does it. In a strange twist of fate, Google did a lot of the grunt work to make containers safe and usable, but then another tool came to market and made containers ubiquitous.
OpenVZ's containers—and they were containers, in every sense of the word—were already secure; what they weren't was implemented by a set of granular, reusable in-kernel primitives that served any goals other than that of "containerization." Instead, OpenVZ was a very "cathedral"-esque approach to Linux containerization: just one big blob of code with a complex API surface. Thus, the kernel refused to upstream it.
Google's contribution was mainly to clone the feature-set of OpenVZ by working on a series of small enhancements (to cgroups and kernel namespaces) that would each be a useful standalone feature, but would also coincidentally be composable to replicate the power of an OpenVZ container. In other words, to create an OpenVZ alternative that was mergeable.
(Side-note: although Virtuozzo is recently a standalone company, for most of its life it was a brand owned by Parallels. If anyone is to "blame" for Linux containerization becoming a thing everyone was interested in, it's probably them.)
Remember, much of the enterprise will start looking at docker maybe 2 years from now.
If we redefine "Un*x-like" to "literally only Linux", then we have systemd-nspawn, which integrates well with the rest of the Linux system and is a nice, standards-y-looking way to run a container.
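A minimal systemd-nspawn session looks something like this, assuming a Debian-style rootfs prepared with debootstrap (the directory name is illustrative):

```shell
# Prepare a root filesystem, then enter it as a container.
sudo debootstrap stable /var/lib/machines/demo

# Interactive shell inside the container:
sudo systemd-nspawn -D /var/lib/machines/demo

# Or boot it as a full system with its own init (-b):
sudo systemd-nspawn -bD /var/lib/machines/demo
```

Machines under `/var/lib/machines` also integrate with `machinectl` for listing and lifecycle management, which is the "part of the OS distribution" feel the parent comment is asking for.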
docker exec -it $container bash
I think I expected rkt to be fully OCI compatible in the future but it looks like Kubernetes itself needs to be able to interface with OCI runtimes and there's work to be done in that area? The Docker integration cuts too deep currently?
As for Kubernetes rkt support: it is usable today and continues to develop nicely. If you want to try it out on your laptop it is super easy with minikube: https://tectonic.com/blog/minikube-and-rkt.html
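Per the linked post, trying it locally is roughly this (the flag value assumes the rkt runtime support described there):

```shell
# Start a local single-node cluster backed by rkt instead of Docker:
minikube start --container-runtime=rkt

# Verify the node came up:
kubectl get nodes
```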
Overall, the rkt support is in a good spot, works today, and the team is working on how to make all of the corners of Kubernetes work flawlessly. And it will support all of the parts of OCI as that specification matures.
If people are interested in joining in and adding a runc backend to rkt, that would be great. However, all of the pieces needed to support Kubernetes are already there, so we haven't seen much reason to do it. Inside of rkt there is an abstraction called a "stage1" that enables swapping out the actual containerization system. This is what has enabled us to support Virtual Machine "wrapped" containers inside of rkt for a long time.
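To sketch what swapping a stage1 looks like in practice (the image name is just an example):

```shell
# Run an image under the default systemd-nspawn-based stage1:
sudo rkt run coreos.com/etcd:v3.0.6

# Run the same image wrapped in a KVM virtual machine instead,
# simply by selecting a different stage1:
sudo rkt run --stage1-name=coreos.com/rkt/stage1-kvm coreos.com/etcd:v3.0.6
```

The application image doesn't change at all; only the isolation layer underneath it does.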
For some more background on this article in particular: folks in the community working at Google, CoreOS, and Red Hat have been working on a container runtime interface. This interface is helping the Kubernetes team refactor the code that currently interfaces with Docker and rkt into a cleaner subsystem. As part of that effort the team is also exposing a gRPC interface to enable various implementations and experimentation. This is the interface the Red Hat team building cri-o (formerly OCID) is using.
What is wrong with LXC/LXD/Juju by Ubuntu?
I haven't used the latter two (I haven't taken the time to learn them), but LXC is a breeze. Set up a bridge on a host, and start creating containers! Isolated, with resource management capabilities available. LXD is the orchestration-at-scale component.
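The workflow is roughly this (template, distro, and container names are examples; a bridge such as lxcbr0 is assumed to exist):

```shell
# Create a container from the download template, then start it:
sudo lxc-create -n web1 -t download -- -d ubuntu -r xenial -a amd64
sudo lxc-start -n web1

# Get a shell inside it:
sudo lxc-attach -n web1

# Resource management is built in, e.g. cap memory at 512MB:
sudo lxc-cgroup -n web1 memory.limit_in_bytes 512M
```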
It's Linux specific, and it's more of a lightweight VM than a one-click-app like Docker images seem to be. Maybe that's the difference.
The Java driver does slightly less containerization, but there's also a QEMU/KVM driver if you need more isolation than containers offer.
Edit: Noticed someone mentioned LXC: initial LXC support will hopefully make it into the upcoming 0.5 release: https://github.com/hashicorp/nomad/pull/1699
(Sorry for the shill, I recently joined the Nomad team and am pretty excited!)
I don't mind putting an RDBMS that isn't the alpha and omega of the business into K8s, but I would still exercise caution. K8s does have "PetSets" to help with this, but they're still alpha quality. You can do a reasonable job without them by using PersistentVolumes and selectors to ensure your pod gets the volume bound dynamically. That does work very well indeed.
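A hedged sketch of the PersistentVolume + selector approach: label a pre-created PV, then have the claim select that label so the pod always binds the same volume. Names, labels, and sizes below are illustrative.

```shell
# Create a claim that will only bind to a PV carrying the matching label:
kubectl apply -f - <<'EOF'
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pgdata
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 50Gi
  selector:
    matchLabels:
      volume-for: postgres    # the pre-provisioned PV must carry this label
EOF
```

The database pod then references the `pgdata` claim, so rescheduling it lands it back on the same underlying storage.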
Most other things are fairly easy to migrate over but the hardest part -- and the bit that few people leave enough time to do well -- is integrating Kubernetes into the fabric of your business: accessing the service network, proper DNS that works across your organisation both into and out of K8s, access control, etc.
This was K8S 1.1, before DaemonSet and PetSet.
On the production machines, I ended up provisioning two different node types -- one with burst cpu shares and one with fixed cpu shares. I had to use node labelling. The web workers were sensitive to spikes in traffic and the background workers ate up all the available cpu, and shouldn't be shared with the web workers.
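Roughly how those two node pools can be kept apart with node labelling (label names and values are illustrative):

```shell
# Tag the two kinds of nodes:
kubectl label nodes node-a cpu-class=fixed
kubectl label nodes node-b cpu-class=burst

# Then pin the web workers to the fixed-share nodes in the pod spec:
#   nodeSelector:
#     cpu-class: fixed
# and the background workers to cpu-class: burst.
```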
Still, provided you can build your artefacts -- somehow -- as a container image you can focus on building a stable infrastructure (by using something like Kubernetes) around that and let the container engine makers fight it out in the background. In theory you can swap out backends very easily with K8s so that should futureproof it.