Note that this change depends on shared PID namespace support, which is a larger, still-ongoing endeavour.
What does backwards compatibility mean in this context? The API?
I hope GKE is planning to migrate to a newer Docker version soon (they've been on 1.11.2 for a long time) so we can benefit from this.
I guess Craster was right after all.
Coming into the k8s ecosystem with very little container experience has been a steep learning curve, and simple, concrete suggestions like this go a LONG way to leveling it out.
Feel free to take them for a spin; feedback is welcome and appreciated.
I browsed it and immediately bookmarked it to have a ready "here, read this first" answer :)
Right now, we have a bunch of microservices. Most of them talk to our shared infrastructure. We started with a single configuration file, which has grown to monstrous proportions and is mounted on every pod as a ConfigMap.
What would be the correct approach? Multiple configmaps with redundant information are just as bad, if not worse.
Then ship the static list of names (should be short) and per-service credentials (highly highly recommended).
Another pattern is co-locating a proxy with your app. See e.g. linkerd on how to do that. This will also unify the handling of circuit breakers and connection pools across services - even without any shared code!
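A minimal sketch of the split suggested above, assuming hypothetical service names (`billing`, `search`) and a made-up Secret layout: one small shared ConfigMap holding only the short list of endpoints, plus one Secret per service for its own credentials.

```yaml
# Hypothetical example; names and values are placeholders, not from the thread.
apiVersion: v1
kind: ConfigMap
metadata:
  name: service-endpoints   # shared by everyone, small, changes rarely
data:
  endpoints.yaml: |
    billing: billing.default.svc.cluster.local
    search: search.default.svc.cluster.local
---
apiVersion: v1
kind: Secret
metadata:
  name: billing-credentials  # one Secret per service, mounted only by that service
type: Opaque
stringData:
  username: billing-svc
  password: changeme
```

Each pod then mounts the shared ConfigMap plus only its own Secret, so no pod carries credentials it doesn't need.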
Edit: oh, you kind of do. Well, it's not upcoming any more, it's in the latest Docker CE :)
If some of you are interested in running a Kubernetes GPU cluster for deep learning, this article might be good to read as well.
The k8s blog has some as well:
I've seen this pattern before and it didn't make me feel very good. It reeks of unnecessary complexity.
The benefit is when a one-time tool is heavy on dependencies. For example, with OpenStack Swift (an S3-like object storage), a common one-time task is swift-ring-builder, which takes an inventory of available storage and creates a shared configuration file which describes how data is shared between storages. That's something you would run on a sysadmin's notebook, but it's included with Swift itself, so you would have to install a bunch of Python modules into a virtualenv.
In that case, it's probably easier to just use the Docker image for Swift that you have anyway, and run swift-ring-builder from there.
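Something like the following, with the image name and ring parameters being assumptions for illustration (they are not from the thread):

```shell
# Hypothetical sketch: run swift-ring-builder from the Swift image you already
# have, instead of installing its Python dependencies into a local virtualenv.
# Mount a host directory so the generated ring files survive the container.
docker run --rm \
  -v "$PWD/rings:/etc/swift" \
  my-registry/swift:latest \
  swift-ring-builder /etc/swift/object.builder create 10 3 1
```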
Kind of... but you can set `restartPolicy: Always` and the pod will always restart in case of failure.
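For reference, a minimal sketch of what that looks like (names and command are placeholders). `Always` is also the default `restartPolicy` for Pods; note the kubelet restarts the container in place, but unlike a Deployment, a bare Pod is not rescheduled if its node dies.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: one-off-task
spec:
  restartPolicy: Always   # kubelet restarts the container on failure
  containers:
  - name: task
    image: busybox
    command: ["sh", "-c", "do-the-thing"]
```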
If so, the hack we've applied for StatefulSets is to delete the StatefulSet with `--cascade=false`. This keeps the Pods of the StatefulSet online, but removes the StatefulSet itself. We can then deploy the StatefulSet with the new configuration values, and manually delete the Pods one by one to have the changes applied.
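The steps above boil down to something like this (the StatefulSet name `web` and manifest filename are placeholders):

```shell
# 1. Remove the StatefulSet object but keep its Pods running:
kubectl delete statefulset web --cascade=false

# 2. Recreate the StatefulSet with the new configuration:
kubectl apply -f statefulset-new.yaml

# 3. Delete the Pods one by one; the new StatefulSet controller
#    recreates each one with the updated spec:
kubectl delete pod web-0
kubectl delete pod web-1
```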
Needs Improvement: For sure
Gets the job done: Yep!
But yah like I said, not as nice as deploying a deployment.