StatefulSet: Run and Scale Stateful Applications Easily in Kubernetes (kubernetes.io)
72 points by TheIronYuppie on Dec 20, 2016 | 27 comments



Great to see! StatefulSets are an example of how Kubernetes is enabling applications of all types to run on a cluster. This doesn't mean porting to Kubernetes takes zero effort (the application needs to be running in a container, of course), but it is proof that an application doesn't need to be "12 Factor" to run on Kubernetes.

Essentially I see the world broken down into four potential application types:

1) Stateless applications: trivial to scale at the click of a button with no coordination. These can take advantage of Kubernetes Deployments directly and work great behind Kubernetes Services or Ingress.

2) Stateful applications: postgres, mysql, etc., which generally exist as single processes and persist to disk. These systems generally should be pinned to a single machine and use a single Kubernetes persistent disk. They can be served by static configuration of pods, persistent disks, etc., or utilize StatefulSets (see the sketch after this list).

3) Static distributed applications: zookeeper, cassandra, etc., which are hard to reconfigure at runtime but do replicate data around for data safety. These systems have configuration files that are hard to update consistently and are well-served by StatefulSets.

4) Clustered applications: etcd, redis, prometheus, vitess, rethinkdb, etc. are built for dynamic reconfiguration and modern infrastructure where things are often changing. They have APIs to reconfigure members in the cluster and just need glue to be operated natively and seamlessly on Kubernetes; hence the Kubernetes Operator concept: https://coreos.com/blog/introducing-operators.html
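
For type 2, a minimal sketch of the static-configuration option mentioned above: one PersistentVolumeClaim plus one Pod pinned to it. The names, image, and size here are hypothetical.

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: postgres-data          # hypothetical claim name
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: postgres
    spec:
      containers:
      - name: postgres
        image: postgres:9.6
        volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: postgres-data # pod stays bound to this one disk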

You can see more about Operators in my short KubeCon keynote here: https://youtu.be/Uf7PiHXqmnw?t=11

Overall, great progress in Kubernetes v1.5! Great to see critical features moving from Alpha to Beta.


Totally agreed. My current work assignment is to put OpenStack Swift (a distributed object store) in Kubernetes, so exactly the kind of application that you would expect to catch fire in a cluster. The experience has been very pleasant so far. Maybe I'll speak about it at KubeCon Europe in March (if the organizers like my abstract).

EDIT: I forgot that I actually have something to link to, since we open-sourced our Helm chart yesterday: https://github.com/sapcc/helm-charts/tree/master/openstack/s...

There are some other custom pieces that we built for a Kubernetized Swift. Just search for repos with "swift" in their name in the same GitHub org.


Brandon's talk was awesome - highly recommended!

Disclosure: I work at Google on Kubernetes.


You should remove zookeeper from 3 and move it to 4.

https://zookeeper.apache.org/doc/trunk/zookeeperReconfig.htm...

That's been out for a while now.


Neat. Noted. Although I can't edit the comment anymore.


5) Production applications: Don't trust Docker and Kubernetes to work flawlessly. Ain't running on that.


Can you say more? I'm happy to help wherever possible.

Disclosure: I work at Google on Kubernetes.


Short version: Docker is unstable. Whatever is built on top of that is rotten.

Note: GKE uses internal Google container technologies (can you confirm?), so obviously it avoids the potential issues.


Yo, make node-local a first-class citizen already... This is an area where Mesos is a far more suitable choice right now for stateful systems.


What is node-local?


I believe krenoten is referring to something along the lines of https://github.com/kubernetes/kubernetes/issues/7562


And until that lands you can use DaemonSets and tag the machines you are treating as pets (if the data is tied to a machine, you need to bring the machine back or lose the data). Agreed that local PVs are very important.
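
A rough sketch of that workaround; the label, image, and paths are hypothetical:

    # First, label the machines that hold the data, e.g.:
    #   kubectl label nodes node-1 role=datastore
    apiVersion: extensions/v1beta1
    kind: DaemonSet
    metadata:
      name: datastore
    spec:
      template:
        metadata:
          labels:
            app: datastore
        spec:
          nodeSelector:
            role: datastore              # only schedules onto the tagged pets
          containers:
          - name: datastore
            image: my-datastore:latest   # hypothetical image
            volumeMounts:
            - name: data
              mountPath: /data
          volumes:
          - name: data
            hostPath:
              path: /mnt/data            # data lives on the node's local disk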


Not familiar with the specific terminology of Kubernetes (pets, sets, etc.), but does this allow you to configure the shutdown behavior of containers? There's a Mesos feature I saw discussed where you can tell it to ping an endpoint in your application and not terminate the container until it says it is OK / safe to do so. Is the same possible in Kubernetes? I want to do a blue-green type deployment pattern, but the old application containers cannot be shut down until all sessions are terminated (maybe even up to 15 minutes).


Set this to 900:

terminationGracePeriodSeconds

Optional duration in seconds the pod needs to terminate gracefully. May be decreased in delete request. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period will be used instead. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. Defaults to 30 seconds.
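
In a pod spec, that looks something like this (pod and image names are made up):

    apiVersion: v1
    kind: Pod
    metadata:
      name: my-app                        # hypothetical
    spec:
      terminationGracePeriodSeconds: 900  # 15 minutes before the kill signal
      containers:
      - name: app
        image: my-app:latest              # hypothetical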


See Pod lifecycle and handler:

http://kubernetes.io/docs/api-reference/v1/definitions/#_v1_...

http://kubernetes.io/docs/api-reference/v1/definitions/#_v1_...

edit: As soon as I posted this I remembered this is a feature; deleted original hack. :)


You can use a combination of these to set a long grace period, and then have your pre-stop hook wait for your process to finish (whatever criteria you define as finished).

We chose to set a fixed upper bound to ensure outside administrators can observe the requested grace period when draining nodes or performing maintenance.
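
Something along these lines; /drain-sessions.sh is a hypothetical script that blocks until whatever you define as "finished" is true:

    apiVersion: v1
    kind: Pod
    metadata:
      name: my-app                        # hypothetical
    spec:
      terminationGracePeriodSeconds: 900  # the fixed upper bound
      containers:
      - name: app
        image: my-app:latest              # hypothetical
        lifecycle:
          preStop:
            exec:
              # runs before the TERM signal; the kill signal only arrives
              # if the hook plus shutdown exceed the grace period
              command: ["/bin/sh", "-c", "/drain-sessions.sh"]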


Is AWS baked yet? In the past, EBS volumes failed to mount. @justinsb, I'm looking at you.


We've definitely had our share of issues in this area, but we also take them seriously as you can see by the volume of patches that have gone in and continue to go in! With the very latest releases (1.4.7 and 1.5.1) I think we're in reasonable shape on volume mounting. And if you find things - particularly with the newer versions - please report them and we will work to fix them. We found and fixed another edge case recently, which was driven by an unusual real-world scenario (Xen not detaching volumes despite AWS believing them to be detached), and mitigated that. I'm sure we'll continue to find edge cases (and probably a few silly bugs as well), but we're also working on enhanced testing to really stress this area of the code so we can find them faster.


Are there examples of how to run something like an Elasticsearch or Redis cluster in a StatefulSet? I think you need to use PersistentVolumeClaims, etc., but actual examples are hard to come by.


Here you go: https://github.com/sapcc/helm-charts/tree/master/openstack/m...

This is our Helm chart for Monasca, which among other things contains an Elasticsearch. Look for files like "*-petset.yaml" (we are on 1.4 still).

A word of warning though: we are still in the process of migrating stuff to this repo, so the charts in there may be incomplete, at least for a few weeks.
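
For the general shape, a minimal StatefulSet with a volumeClaimTemplate looks something like this on 1.5 (names, image, and sizes are placeholders, and you also need a headless Service matching serviceName):

    apiVersion: apps/v1beta1
    kind: StatefulSet
    metadata:
      name: elasticsearch
    spec:
      serviceName: elasticsearch   # headless Service, created separately
      replicas: 3
      template:
        metadata:
          labels:
            app: elasticsearch
        spec:
          containers:
          - name: elasticsearch
            image: elasticsearch:5.1
            volumeMounts:
            - name: data
              mountPath: /usr/share/elasticsearch/data
      volumeClaimTemplates:        # one claim per replica: data-elasticsearch-0, -1, -2
      - metadata:
          name: data
        spec:
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 20Gi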


Hey that's awesome. Will keep track of this.


Hey, I'm using quite a few StatefulSets in prod right now. They're working awesome, but are there plans to add autoscaling to them?


That's awesome! Do you mean autoscaling the pods or nodes?

Disclosure: I work at Google on Kubernetes.


Both would be awesome. I am looking at building a StatefulSet scaling service for my company right now, but before I do that I wanted to make sure it wasn't being released soon. Thanks!


So I'm trying to dockerize a legacy PHP app that likes to use a config.php file. Would this be of help?



Alternatively, you can also just use $_ENV['APP_XYZ'] instead of the actual value in your config.php and then use environment variables to configure the container/pod.
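
On the Kubernetes side that looks something like this (pod and image names are hypothetical):

    apiVersion: v1
    kind: Pod
    metadata:
      name: legacy-php             # hypothetical
    spec:
      containers:
      - name: app
        image: my-php-app:latest   # hypothetical image
        env:
        - name: APP_XYZ            # read in config.php via $_ENV['APP_XYZ']
          value: "some-value"

One caveat: $_ENV is only populated when php.ini's variables_order includes "E", so getenv('APP_XYZ') can be the safer call.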



