Google, Red Hat Work on a Way for Kubernetes to Run Containers Without Docker (thenewstack.io)
96 points by bdimcheff on Sept 29, 2016 | 21 comments



What we collectively refer to as "containers", and software like Docker, are just tools that take advantage of facilities provided by the OS for isolating and restricting resources: CPU, memory, network, visibility of other processes, filesystem access, filesystem layering and namespacing, traffic shaping, and so on.

These isolation features go back a really long time in various OSs, but only in the last few years have they seen mass adoption, which, alas, is running a bit ahead of any effort to make this a formal standard, or anything even close to one.

And so the way you would create a "container" on Linux is very different from the way you'd do it on FreeBSD, Solaris, or Windows. Docker was one of the first pieces of software to facilitate this on Linux (where it was, and still is, far from trivial), along with the whole Docker image / Docker Hub ecosystem, etc. But ultimately what Docker does amounts to issuing the right commands (or system calls), and you can get the same effect without Docker (there's even a nice talk on YouTube somewhere that shows how to do what Docker does from the command line).
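
For the concretely curious, a minimal sketch of the idea -- the cgroup name, the 256 MB limit, and the tool choices here are mine, not what Docker actually runs; it assumes root and a cgroup-v1 style /sys/fs/cgroup:

    # put the current shell into a memory-limited cgroup...
    mkdir /sys/fs/cgroup/memory/demo
    echo $((256*1024*1024)) > /sys/fs/cgroup/memory/demo/memory.limit_in_bytes
    echo $$ > /sys/fs/cgroup/memory/demo/cgroup.procs
    # ...then give it its own PID, mount, UTS, network and IPC namespaces, with a fresh /proc
    unshare --pid --mount --uts --net --ipc --fork --mount-proc bash
    # inside: `ps` only sees this shell, `ip link` only shows lo, `hostname foo` stays private,
    # and everything forked from here inherits the 256 MB memory limit.
    # (Docker additionally sets up a layered root filesystem and pivots into it.)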

IMHO, in the developer community at large, few people care to understand the operating system intimately, and even fewer know stuff like cgroups or the tc command on Linux. That's primarily why Docker is so popular: you don't need to know any of it.
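
(For reference, the tc side of it can be as small as a one-liner; the device name here is made up:)

    # cap egress on a container's host-side veth at 10 mbit with a token bucket filter
    tc qdisc add dev veth-ctr root tbf rate 10mbit burst 32kbit latency 400ms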

I think it'd be nice if people stopped using "Docker" and "container" interchangeably, and if a set of tools emerged that was more a part of the OS distribution, less a separate concept/brand/company/whatever, and more or less uniform across at least the Un*x-like OSs.


Obligatory link to https://github.com/p8952/bocker/blob/master/bocker, a "clone" of Docker implemented as a ~100-line shell script.


> These isolation features go back a really long time in various OSs, but only in the last few years have they seen mass adoption, which, alas, is running a bit ahead of any effort to make this a formal standard, or anything even close to one.

Also, containers really didn't get very secure until Google started upstreaming various patches and updates to the kernel's cgroups[0].

The way Docker runs containers is very different from the way (I assume) Google does it. In a strange twist of fate, Google did a lot of the grunt work to make containers safe and usable, but then another tool came to market and made containers ubiquitous.

[0] https://www.kernel.org/doc/Documentation/cgroup-v1/cgroups.t...


Linux had "containers" for years before Google's work, viz. Virtuozzo's https://en.wikipedia.org/wiki/OpenVZ.

OpenVZ's containers—and they were containers, in every sense of the word—were already secure; what they weren't was implemented by a set of granular, reusable in-kernel primitives that served any goals other than that of "containerization." Instead, OpenVZ was a very "cathedral"-esque approach to Linux containerization: just one big blob of code with a complex API surface. Thus, the kernel refused to upstream it.

Google's contribution was mainly to clone the feature-set of OpenVZ by working on a series of small enhancements (to cgroups and kernel namespaces) that would each be a useful standalone feature, but would also coincidentally be composable to replicate the power of an OpenVZ container. In other words, to create an OpenVZ alternative that was mergeable.

(Side-note: although Virtuozzo has recently become a standalone company, for most of its life it was a brand owned by Parallels. If anyone is to "blame" for Linux containerization becoming a thing everyone was interested in, it's probably them.)


How about we replace every use of "docker" with "Linux kernel namespaces and control groups, along with netfilter and a network bridge"? I think "docker" has become so ubiquitous because it is so much less of a mouthful. While I don't disagree with you one bit, I doubt it will ever happen in practice.
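
To make the mouthful concrete, this is roughly the plumbing Docker sets up for its default bridge network -- the names, the subnet, and $CONTAINER_PID are invented for the sketch:

    ip link add cbr0 type bridge                    # the bridge (docker's is called docker0)
    ip addr add 172.30.0.1/24 dev cbr0
    ip link set cbr0 up
    ip link add veth-host type veth peer name veth-ctr
    ip link set veth-host master cbr0               # one end of the veth pair on the bridge...
    ip link set veth-host up
    ip link set veth-ctr netns "$CONTAINER_PID"     # ...the other moved into the container's netns
    # NAT traffic leaving the container subnet (this is the netfilter part)
    iptables -t nat -A POSTROUTING -s 172.30.0.0/24 ! -o cbr0 -j MASQUERADE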

Remember, much of the enterprise will start looking at docker maybe 2 years from now.


> a set of tools emerged that was more a part of the OS distribution, less a separate concept/brand/company/whatever, and more or less uniform across at least the Un*x-like OSs.

If we redefine "Un*x-like" to mean "literally only Linux", then we have "systemd-nspawn", which integrates well with the rest of the Linux system and is a nice, standards-y-looking way to run a container.
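
Something along these lines (the directory, distro, and mirror are arbitrary):

    # build a minimal Debian tree, then run it as a container with systemd-nspawn
    debootstrap stable /var/lib/machines/demo http://deb.debian.org/debian
    systemd-nspawn -D /var/lib/machines/demo        # chroot-like interactive shell in the tree
    systemd-nspawn -bD /var/lib/machines/demo       # or boot it, with systemd as PID 1
    machinectl list                                 # booted containers show up alongside other machines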


Reminds me of the Mesos Unified Containerizer. Very good stuff. The docker client is wonderful for developers, but the docker engine is the bane of operations. The bugs in it are not fun. My favorite one was in docker 1.6.0. When you did:

    docker exec -it $container bash
It caused a nil pointer dereference and crashed the daemon. All of the other running containers would then die as a result.

Awesome.


I have a hard time positioning this alongside the effort to support rkt in Kubernetes. Would OCID be necessary because there's too much Docker-only code interfacing with the container runtime in Kubernetes?

I thought rkt was expected to become fully OCI-compatible in the future, but it looks like Kubernetes itself needs to be able to interface with OCI runtimes, and there's work to be done in that area? Does the Docker integration currently cut too deep?


rkt can already download and run OCI (draft) container images[0].

As for Kubernetes rkt support: it is usable today and continues to develop nicely. If you want to try it out on your laptop it is super easy with minikube: https://tectonic.com/blog/minikube-and-rkt.html

Overall, the rkt support is in a good spot, works today, and the team is working on how to make all of the corners of Kubernetes work flawlessly. And it will support all of the parts of OCI as that specification matures.

If people are interested in joining in and adding a runc backend to rkt, that would be great. However, all of the pieces needed to support Kubernetes are already there, so we haven't seen much reason to do it ourselves. Inside rkt there is an abstraction called a "stage1" that enables swapping out the actual containerization system. This is what has enabled us to support Virtual Machine "wrapped" containers[1] inside rkt for a long time.
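
For example, roughly like this -- the image and version tags are placeholders, and the exact flag spelling has varied between rkt releases:

    # same app image, two different stage1 implementations
    rkt --insecure-options=image run --stage1-name=coreos.com/rkt/stage1-coreos:1.17.0 docker://nginx
    rkt --insecure-options=image run --stage1-name=coreos.com/rkt/stage1-kvm:1.17.0 docker://nginx    # VM-wrapped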

Some more background on this article in particular: folks in the community working at Google, CoreOS, and Red Hat have been working on a container runtime interface[2]. This interface is helping the Kubernetes team refactor the code that currently interfaces with Docker and rkt into a cleaner subsystem. As part of that effort, the team is also exposing a gRPC interface to enable various implementations and experimentation. This is the interface the team at Red Hat building cri-o (formerly OCID) is using.

[0] https://groups.google.com/a/opencontainers.org/forum/#!topic...

[1] https://coreos.com/rkt/docs/latest/running-lkvm-stage1.html

[2] https://github.com/kubernetes/kubernetes/blob/master/docs/pr...


Since this has turned into a generic "What's up with containers" post:

What is wrong with LXC/LXD/Juju by Ubuntu?

I haven't used the latter two (I haven't taken the time to learn them), but LXC is a breeze. Set up a bridge on a host and start creating containers! Isolated, with resource management capabilities available. LXD is the orchestration-at-scale component.
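
Roughly the whole lifecycle (container name, release, and the memory limit are made up):

    lxc-create -n web -t download -- -d ubuntu -r xenial -a amd64   # fetch a rootfs via the download template
    lxc-start -n web -d
    lxc-attach -n web                                               # shell inside the container
    # resource limits are plain cgroup settings in the container's config, e.g.:
    #   lxc.cgroup.memory.limit_in_bytes = 512M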

It's Linux specific, and it's more of a lightweight VM than a one-click-app like Docker images seem to be. Maybe that's the difference.


LXC is great, I agree. A big part of Docker's win comes from the composability of its images. If appc/rkt can do this (do they already?), I think they will capture a lot of Docker's momentum.


Nomad will run your apps in containers without docker: https://www.nomadproject.io/docs/drivers/exec.html

The Java driver does slightly less containerization, but there's also a QEMU/KVM driver if you need more isolation than containers offer.

Edit: Noticed someone mentioned LXC: initial LXC support will hopefully make it into the upcoming 0.5 release: https://github.com/hashicorp/nomad/pull/1699

(Sorry for the shill, I recently joined the Nomad team and am pretty excited!)


Is anyone here running Docker or Kubernetes in production? If so, what parts of your stack are running inside of it, versus what parts are running elsewhere in VMs or bare metal?


I've built Kubernetes architecture, and brought a general container "mentality" to development and operations, for a couple of clients, and I would still not recommend putting your "pets" (relational databases and other stateful stuff) in Kubernetes just yet if they are the crown jewels of your organisation.

I don't mind putting an RDBMS that isn't the alpha and omega of the business in K8s, but I would still exercise caution. K8s does have "PetSets" to help with this, but they're still alpha quality. You can do a reasonable job without them by using PersistentVolumes and selectors to ensure your pod gets the right volume bound dynamically. That does work very well indeed.
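
A rough sketch of that pattern -- the names, the size, and the EBS volume ID are invented for illustration:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: pg-data
      labels:
        disk: pg-data          # label the volume so a claim can select it
    spec:
      capacity:
        storage: 50Gi
      accessModes: ["ReadWriteOnce"]
      awsElasticBlockStore:
        volumeID: vol-0123456789abcdef0
        fsType: ext4
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: pg-data-claim
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 50Gi
      selector:
        matchLabels:
          disk: pg-data        # bind only to the volume carrying this label
    EOF
    # pods then mount it via persistentVolumeClaim: { claimName: pg-data-claim }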

Most other things are fairly easy to migrate over but the hardest part -- and the bit that few people leave enough time to do well -- is integrating Kubernetes into the fabric of your business: accessing the service network, proper DNS that works across your organisation both into and out of K8s, access control, etc.


We run only application logic, proxies or cache servers (Redis/Memcached) in containers. Nothing with storage requirements.


2015: stateless Ruby on Rails web pods and Sidekiq pods inside Kubernetes, with Redis and Postgres provided by AWS ElastiCache and managed Postgres. The single cluster served two customer-facing apps and a single, centralized authentication service that supported both apps.

This was K8S 1.1, before DaemonSet and PetSet.

On the production machines, I ended up provisioning two different node types -- one with burst CPU shares and one with fixed CPU shares -- and used node labelling to keep them apart. The web workers were sensitive to spikes in traffic, while the background workers ate up all the available CPU and so shouldn't share nodes with the web workers.
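
For anyone doing the same, the split is just node labels plus a nodeSelector (node names and the label key here are hypothetical):

    kubectl label nodes worker-1 cpu-class=fixed    # nodes for the latency-sensitive web pods
    kubectl label nodes worker-2 cpu-class=burst    # nodes the background workers may saturate
    # then, in the web deployment's pod spec:
    #   nodeSelector:
    #     cpu-class: fixed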


Containerization gives me a lot of anxiety. When will a clear winner emerge?


Hopefully we'll rally around something like OCI as a standard. But then we all know how well standards are followed...

Still, provided you can build your artefacts -- somehow -- as a container image, you can focus on building stable infrastructure around that (using something like Kubernetes) and let the container-engine makers fight it out in the background. In theory you can swap out backends very easily with K8s, so that should future-proof it.


Google should just fork Docker and make it part of Kubernetes.


Rewriting it from scratch (OCID) has less of a stigma, although it's clearly more work and less likely to be fully compatible.


The content is ripped off from http://thenewstack.io/oci-building-way-kubernetes-run-contai... Somebody please point to the original article



