
Ask HN: Is anyone running Kubernetes with Persistent Volumes in production? - nickjackson
If so...

* What storage backend and environment are you using?

* What is your use case for persistent volumes?

* How well does it perform for your needs?
======
smarterclayton
I can speak from the OpenShift perspective (which is just Kube as far as
storage is concerned):

OpenShift Online and Dedicated (both hosted Kube/OpenShift) run on AWS and use
EBS persistent volumes for Elasticsearch and Cassandra storage, which is a
moderately high-IOPS workload, although not "all things tuned for
performance". Most small non-cloud OpenShift deployments I know of use NFS for
medium / large shared storage - file or data sharing workloads. There are
several medium-sized deployments on OpenStack using Ceph under Cinder, and
their experience is roughly comparable with AWS EBS and GCE disks.

Basically, if you need to fine-tune many details of the storage medium and are
carefully planning for IOPS and latency, Kube makes that slightly harder
because it abstracts away the mounting / placement decisions. It's definitely
possible, but if you're not dealing with tens of apps or more it might be
overkill.

OpenShift Online Dev Preview (the free 30-day trial env) is Kube 1.2+ and uses
the upcoming dynamic provisioning feature (which creates PVs on demand); it's
used for many thousands of small ~1GB volumes. Remember, though, that the more
volumes you mount on a node, the less network bandwidth you have available to
the EBS backplane, so Kube doesn't save you from having to understand your
storage infra in detail.
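
To make the dynamic provisioning bit concrete, in Kube 1.2 it was driven by an
alpha annotation on the claim; a sketch (the claim name and class value here
are illustrative, not from the deployment described above):

```yaml
# Sketch: a PersistentVolumeClaim that triggers Kube 1.2's alpha
# dynamic provisioning. The annotation value and claim name are
# hypothetical; the ~1GB request matches the volume sizes mentioned.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: small-app-data
  annotations:
    volume.alpha.kubernetes.io/storage-class: "default"
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```

When the claim is created, the provisioner creates a matching PV (e.g. an EBS
volume on AWS) and binds it, instead of an admin pre-creating PVs by hand.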

Also, be very careful using NFS with replication controllers - the guarantee
on RCs is that there are _at least_ N replicas, not at most N, so you can (and
eventually will) have two or more pods simultaneously talking to NFS even with
an RC of scale 1.

Edit: typos

------
lobster_johnson
It's worth warning that volumes are buggy, particularly on AWS. This one in
particular is worth keeping in mind:
https://github.com/kubernetes/kubernetes/issues/29324

------
hijinks
I used it with EBS volumes - mongodb datadir and also rabbitmq datadir - works
wonderfully. If a pod fails, the volume detaches and the pod comes right back
up within a few minutes.

We only have a single mongodb and rabbitmq pod, since they aren't mission
critical if they go down. We had the mongodb host fail, and by the time I got
paged and woke up, the OK page had already come through - Kubernetes did its
job and brought it back online.

