Towards the end of the article, the authors mention the open challenges.
Unbeknownst to me, I had been working on something that attempts to address some of those. This is my sixth attempt at a tool like this. It grew out of seeing how many dev environments built around Docker Compose ended up with a large, ad-hoc collection of Bash scripts.
The project is called Matsuri, and you can find it here: https://github.com/shopappsio/matsuri
It's intended as a framework for programmatically generating manifests and executing kubectl commands. It doesn't try to have a lot of opinions (at least, opinions that you can't change). The idea is that your platform support tool is like any other app, and should be tailored to your specific collection of apps.
It also has a notion of Apps, which are bundles of K8S resources declared as dependencies. But that's as far as I've gotten -- it has a fairly anemic convergence tool. I was more concerned with standardizing builds, pushes, updates (collections of rollouts, migrations, etc.), shell commands, "console" commands, etc. These are all achieved by expecting Apps to define callback hooks for those actions.
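To give a flavor of the callback-hook idea, here is a rough, generic sketch in plain Ruby -- it is not Matsuri's actual API, and the class, method, resource, and image names are all made up:

```ruby
# Generic illustration of the callback-hook pattern, not Matsuri's real API.
# An "App" bundles related K8S resources and defines hooks that the tool
# invokes for standardized actions (build, push, update, console, ...).
class App
  def resources;  []; end   # K8S resources this app depends on
  def on_build;       end   # hook for the "build" action
  def on_push;        end   # hook for the "push" action
  def on_update;      end   # hook for the "update" action (rollouts, migrations, ...)
end

class WebApp < App
  def resources
    ['rc/web', 'rc/worker', 'secret/web-secrets']
  end

  def on_build
    system('docker', 'build', '-t', 'registry.example.com/web:latest', 'src/web')
  end

  def on_update
    # e.g. run migrations first, then roll out each replication controller
    resources.grep(%r{^rc/}).each { |rc| puts "rolling out #{rc}" }
  end
end

WebApp.new.on_update
```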
I don't know if this is useful or interesting to anyone else. But if anyone is interested, feel free to contact me about it.
It did not occur to me that a robust, typesafe approach to dealing with k8s configuration objects could be a good idea.
Mine isn't typesafe. But now that I'm thinking about it, that would be an interesting approach too. Matsuri has debug options to show the generated manifest or the kubectl commands it will run. I specifically don't use YAML or JSON to define templates; instead, manifests are generated programmatically from Ruby directly. This lets me use Ruby class inheritance and module mixins to manage everything. So there are no indentation problems, though I sometimes run into specs that don't validate against Kubernetes.
The rolling-update in Matsuri still uses kubectl rolling-update under the covers. What it does add is introspection to find the current revision number and the current image tag (if you do not provide them).
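Roughly, the introspection amounts to something like this simplified sketch -- not the actual Matsuri code; the helper names and image are illustrative:

```ruby
require 'json'

# Simplified sketch, not the actual Matsuri implementation. Look up the image
# a replication controller is currently running, so the caller only supplies
# the pieces they actually want to change.
def current_image(rc_name)
  rc = JSON.parse(`kubectl get rc #{rc_name} -o json`)
  rc.dig('spec', 'template', 'spec', 'containers', 0, 'image')
end

def rolling_update(rc_name, new_image: nil)
  new_image ||= current_image(rc_name)   # fall back to whatever is deployed now
  system('kubectl', 'rolling-update', rc_name, "--image=#{new_image}")
end

rolling_update('web', new_image: 'registry.example.com/web:v42')
```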
At its core, Matsuri does some of the things that Kubernetes doesn't do for you but that you still need done. A lot of people use Bash scripts for this, and I didn't want to. For example, a single app might coordinate 3 replication controllers, 1 secret, and various environment flags. You might also have different needs in dev mode (mounting source paths into the containers), staging (reduced resources, test secrets), and production (full-blown HA, live secrets, etc.). Matsuri takes advantage of certain features of the Ruby language to accomplish that. It doesn't use a template; instead, you write Ruby code to generate the manifest.
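As a concrete (if generic) sketch of that pattern -- this is not Matsuri's real class hierarchy, and all the names are invented -- the manifest is plain Ruby, so each environment is just a subclass that overrides the pieces that differ:

```ruby
require 'json'

# Generic sketch, not Matsuri's actual classes: a replication controller spec
# built from Ruby methods, with environments expressed as subclasses.
class WebRC
  def replicas;      1; end
  def limits;        { cpu: '500m', memory: '512Mi' }; end
  def volumes;       []; end
  def volume_mounts; []; end

  def manifest
    { apiVersion: 'v1',
      kind: 'ReplicationController',
      metadata: { name: 'web' },
      spec: {
        replicas: replicas,
        template: {
          metadata: { labels: { app: 'web' } },
          spec: {
            containers: [{ name:         'web',
                           image:        'registry.example.com/web:latest',
                           resources:    { limits: limits },
                           volumeMounts: volume_mounts }],
            volumes: volumes } } } }
  end
end

class DevWebRC < WebRC
  # Dev mounts the source tree straight into the container
  def volumes;       [{ name: 'src', hostPath: { path: '/home/dev/src' } }]; end
  def volume_mounts; [{ name: 'src', mountPath: '/app' }]; end
end

class ProductionWebRC < WebRC
  # Production runs full HA with bigger resource limits
  def replicas; 5; end
  def limits;   { cpu: '2', memory: '2Gi' }; end
end

puts ProductionWebRC.new.manifest.to_json   # e.g. pipe into `kubectl create -f -`
```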
For more technical details, check out this thread: https://groups.google.com/forum/#!topic/kubernetes-sig-confi...
I'm hoping that Red Hat's two Kubernetes-based products will help kickstart adoption. Red Hat Atomic Enterprise Platform is basically hardened and supported Kubernetes (https://github.com/projectatomic/atomic-enterprise), and OpenShift v3 is a PaaS built on top of Kubernetes (https://www.openshift.org/).
Disclaimer: I work for Red Hat. But anyway, Kubernetes is awesome and could change the way data centers are run.
If I have a pod of containers, everything I read seems to say that the containers need to be 'stateless', and if you're running Kubernetes, they will likely also be transient, coming and going with system load and scale.
So if any container in a pod could go away or be spun up at any time, if it could live on any virtual machine in a cluster of physical machines...
where do you keep the physical database file so that it is accessible to all different instances? How do the multiple instances access it at the same time? Generally, how does the database layer "work" in container land?
It's the only piece that I really struggle to understand.
So when I specify the VOLUME (in a Dockerfile / Kubernetes pod spec), this is managed by the system.
And with respect to ensuring uninterrupted service on the database layer, how are ongoing changes synced? Or is that something I must design for by adding a queueing layer?
To make that concrete: I have one pod with MySQL, and the associated VOLUME is configured on the Kubernetes instance.
I tell Kubernetes to scale MySQL to 2 instances.
It points the second instance to the same physical files and starts up?
So VOLUMES live outside of Kubernetes and do not spin up or down transiently?
Can 2 instances of a database server use the same files without stepping on each other?
How does the VOLUME scale without becoming the new single point of failure?
Maybe that is the point where you use massive RAID striping and replication to scale storage? (That's far from a new concept and is a pretty stable tech).
If I'm on the right track here I might almost be ready to say I understand and trust persistence!
If you have a single node database, Kubernetes can make sure that only a single pod is running at any one time, attached to your volume. It will automatically recover if e.g. a node crashes.
If you have a distributed SQL database, Kubernetes can make sure the right volumes get attached to your pods. (The syntax is a little awkward right now, because you have to create N replication controllers, but PetSets will fix this in 1.3). Each pod will automatically be recovered by Kubernetes, but it is up to your distributed database to remain available as the pods crash and restart.
In short: Kubernetes does the right thing here, but Kubernetes can't magically make a single-machine database into a distributed one.
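To make the single-node case concrete -- and in the spirit of the Ruby-generated manifests discussed upthread -- here is a minimal sketch of a one-replica MySQL controller bound to a persistent disk. The disk name, password, and image tag are made up:

```ruby
require 'json'

# Minimal sketch of the single-node database case. With replicas: 1, Kubernetes
# keeps exactly one MySQL pod running and attached to the persistent disk,
# rescheduling it elsewhere if the node dies.
mysql_rc = {
  apiVersion: 'v1',
  kind: 'ReplicationController',
  metadata: { name: 'mysql' },
  spec: {
    replicas: 1,                                  # only one pod owns the volume
    template: {
      metadata: { labels: { app: 'mysql' } },
      spec: {
        containers: [{
          name:  'mysql',
          image: 'mysql:5.6',
          env:   [{ name: 'MYSQL_ROOT_PASSWORD', value: 'example' }],
          volumeMounts: [{ name: 'data', mountPath: '/var/lib/mysql' }]
        }],
        volumes: [{
          name: 'data',
          # hypothetical pre-created GCE persistent disk; on AWS this would be
          # an awsElasticBlockStore volume instead
          gcePersistentDisk: { pdName: 'mysql-disk', fsType: 'ext4' }
        }]
      }
    }
  }
}

puts JSON.pretty_generate(mysql_rc)   # e.g. `ruby mysql_rc.rb | kubectl create -f -`
```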
Now when you want clustering of things like MySQL/Mongo/Elasticsearch/RabbitMQ/etc., it's a bit more complex, because they bring their own sharding/clustering concepts, which you have to implement on top of Kubernetes. So you won't be able to simply scale MySQL up via "kubectl scale rc --replicas=5"; you will have to implement a specific clustering solution, with five unique MySQL pods with their own volumes. For MySQL there is Vitess, which is an attempt to build such an abstraction on top of Kubernetes.
If I build up a fluid infrastructure where containers get created and deleted all the time, I need to make absolutely sure my persistent data is safe.
I think a big issue is that most of these setups use a central storage device, which those of us coming from a single server with local storage don't know much about or don't have the budget for. Setting up your own EBS is hard.
Yes, searching for "ebs block device" is much more informative.
For Quobyte, we added an implicit file-locking mechanism that exclusively locks any file on open. This way you can protect files from corruption through unintended concurrent access by applications that are not prepared to run in this environment. It also lets you build HA MySQL without using MySQL's replication mechanisms, instead relying on the container scheduler's rescheduling in case of a machine failure.
This achieves that with its (afaik) novel usage of "hermetic":
The key to making this abstraction work is having a hermetic container image.
(Googling lowercase "hermetic" is nearly impossible since the uppercase meaning is much more prevalent.)
Of course, talking about "hermetic containers" makes it a sort of pun.