Essentially, I see the world broken down into four application types:
1) Stateless applications: trivial to scale at the click of a button with no coordination. These can take advantage of Kubernetes Deployments directly and work great behind Kubernetes Services or Ingress (see the Deployment sketch after this list).
2) Stateful applications: PostgreSQL, MySQL, etc., which generally exist as single processes and persist to disk. These systems should generally be pinned to a single machine and use a single Kubernetes persistent disk. They can be served by static configuration of pods, persistent disks, etc., or they can utilize StatefulSets.
3) Static distributed applications: ZooKeeper, Cassandra, etc., which are hard to reconfigure at runtime but do replicate data around for data safety. These systems have configuration files that are hard to update consistently and are well served by StatefulSets (see the StatefulSet sketch after this list).
4) Clustered applications: etcd, Redis, Prometheus, Vitess, RethinkDB, etc. are built for dynamic reconfiguration and modern infrastructure where things are often changing. They have APIs to reconfigure members in the cluster and just need glue to be operated natively and seamlessly on Kubernetes; hence the Kubernetes Operator concept: https://coreos.com/blog/introducing-operators.html
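For the stateless case, here is a minimal sketch of a Deployment plus a Service (the "web" name and nginx image are placeholders, not from this thread; API versions are as of the 1.5 era):

    # Hypothetical stateless app; scale it by bumping `replicas`.
    apiVersion: extensions/v1beta1   # Deployment API group circa Kubernetes 1.5
    kind: Deployment
    metadata:
      name: web
    spec:
      replicas: 3
      template:
        metadata:
          labels:
            app: web
        spec:
          containers:
          - name: web
            image: nginx:1.11        # placeholder image
            ports:
            - containerPort: 80
    ---
    # The Service load-balances across every pod carrying the `app: web` label.
    apiVersion: v1
    kind: Service
    metadata:
      name: web
    spec:
      selector:
        app: web
      ports:
      - port: 80

Scaling up is then just `kubectl scale deployment web --replicas=10`; no coordination between pods is needed.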
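For the static distributed case, a minimal StatefulSet sketch (apps/v1beta1, the beta API as of 1.5; the ZooKeeper name, image, and volume size are illustrative assumptions). Each pod gets a stable identity (zk-0, zk-1, ...) and its own persistent volume:

    apiVersion: v1
    kind: Service
    metadata:
      name: zk
    spec:
      clusterIP: None            # headless: gives pods stable DNS names like zk-0.zk
      selector:
        app: zk
      ports:
      - port: 2181
    ---
    apiVersion: apps/v1beta1     # StatefulSet is beta as of Kubernetes 1.5
    kind: StatefulSet
    metadata:
      name: zk
    spec:
      serviceName: zk            # the headless Service above
      replicas: 3
      template:
        metadata:
          labels:
            app: zk
        spec:
          containers:
          - name: zk
            image: zookeeper:3.4     # placeholder image
            volumeMounts:
            - name: data
              mountPath: /var/lib/zookeeper
      volumeClaimTemplates:      # each replica gets its own PersistentVolumeClaim
      - metadata:
          name: data
        spec:
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 10Gi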
You can see more about Operators in my short KubeCon keynote here: https://youtu.be/Uf7PiHXqmnw?t=11
Overall, great progress in Kubernetes v1.5! It's good to see critical features moving from Alpha to Beta.
EDIT: I forgot that I actually have something to link to, since we open-sourced our Helm chart yesterday: https://github.com/sapcc/helm-charts/tree/master/openstack/s...
There are some other custom pieces that we built for a Kubernetized Swift. Just search for repos with "swift" in their name in the same GitHub org.
Disclosure: I work at Google on Kubernetes.
That's been out for a while now.
Note: GKE uses internal Google container technologies (can you confirm?), so presumably it avoids the potential issues.
Optional duration in seconds the pod needs to terminate gracefully. May be decreased in delete request. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period will be used instead. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. Defaults to 30 seconds.
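In a pod spec, that field looks like the sketch below (the pod name, image, and the 120-second figure are illustrative, not from the thread):

    apiVersion: v1
    kind: Pod
    metadata:
      name: slow-shutdown                  # hypothetical pod
    spec:
      terminationGracePeriodSeconds: 120   # up to 2 minutes between SIGTERM and SIGKILL
      containers:
      - name: app
        image: myapp:1.0                   # placeholder image

On delete, the container is sent SIGTERM immediately; SIGKILL only follows once the grace period has elapsed.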
edit: As soon as I posted this, I remembered this is a feature; deleted the original hack. :)
We chose to set a fixed upper bound to ensure outside administrators can observe the requested grace period when draining nodes or performing maintenance.
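For what it's worth, `kubectl drain` respects each pod's own grace period by default, and an administrator can cap it explicitly (the node name here is a placeholder):

    # Use each pod's own terminationGracePeriodSeconds (the default):
    kubectl drain node-1

    # Or cap the grace period at 60 seconds for every pod on the node:
    kubectl drain node-1 --grace-period=60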
This is our Helm chart for Monasca, which among other things contains an Elasticsearch cluster. Look for files like "*-petset.yaml" (we are still on 1.4, where StatefulSets were called PetSets).
A note of warning, though: we are still in the process of migrating things to this repo, so the charts in there may be incomplete for at least a few weeks.