
Creating a PostgreSQL Cluster Using Helm - charlieegan3
http://blog.kubernetes.io/2016/09/creating-postgresql-cluster-using-helm.html
======
xnxn
Kubernetes rocks, but this post exemplifies a complaint of mine: there's a
glaring lack of examples of production-capable database deployments.

In this Helm chart, your master is a single Pod (which is ephemeral and which
you should usually not be creating directly) that stores data in an emptyDir
(which is coupled to the lifecycle of the Pod).
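
For illustration, the anti-pattern being described looks roughly like this (names are made up, not taken from the chart):

```yaml
# A bare Pod (not a Deployment/ReplicationController) with its data in an
# emptyDir: if the Pod is evicted or its node dies, the data goes with it.
apiVersion: v1
kind: Pod
metadata:
  name: postgres-master   # hypothetical name
spec:
  containers:
    - name: postgres
      image: postgres:9.5
      volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumes:
    - name: data
      emptyDir: {}        # lifetime tied to this Pod
```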

~~~
alexk
Check this out:

[https://github.com/gravitational/stolon](https://github.com/gravitational/stolon)

This is a heavily modified version of

[https://github.com/sorintlab/stolon](https://github.com/sorintlab/stolon)

K8s-native deployment of PostgreSQL

~~~
drdaeman
Thanks! That was exactly what I was looking for. At first glance it seems
quite possible to use it with non-k8s environments (like Rancher's Cattle)
as well, using Consul and, possibly, a bit of duct tape.

But can you please tell us the general difference between the original and
the fork? I see that both are active, but nothing in the README says what one
has over the other.

~~~
alexk
It's a bit of a failure on our (Gravitational) side, as we are moving fast and
haven't submitted a PR yet. We will definitely try to merge upstream soon
though. We've added several features and changes to the code compared to
Simone's version:

  * S3 backup/restore feature
  * RPC to communicate with the controller over an API
  * Refactored the client and updated the CLI
  * Updated and slimmed-down base images

------
smnscu
I assumed at first this was about the emacs plugin (Google seems to agree that
it's quite popular, see pic) and was a bit confused (yet intrigued!).

[http://imgur.com/VlXfKoK](http://imgur.com/VlXfKoK)

[https://github.com/emacs-helm/helm](https://github.com/emacs-helm/helm)

edit: search from incognito window (without my profile's emacs bias):
[http://imgur.com/QXkBlLq](http://imgur.com/QXkBlLq)

~~~
thesmallestcat
Same reaction: "I mean, Helm can do anything but this is getting ridiculous."

------
x0rg
We ([https://tech.zalando.com](https://tech.zalando.com)) have also done some
work on using Helm to deploy Patroni
([https://github.com/zalando/patroni](https://github.com/zalando/patroni)),
our HA PostgreSQL solution. We have a PR open, if you want to have a look:
[https://github.com/kubernetes/charts/pull/57](https://github.com/kubernetes/charts/pull/57)
... comments are always helpful.

------
elktea
Is anyone else very wary about keeping things with persistent state in
containers (like databases)?

~~~
pat2man
The persistent state is really just the data, not the process. With kubernetes
you store the data on a persistent volume (which could be EBS, iSCSI, etc) and
the process runs in the container. Host dies? Kubernetes can re-attach that
volume on another host, start up a new container and you are back in business.

*Note: looks like in this example they are not setting up a persistent volume.
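
A minimal sketch of the persistent-volume approach (names and sizes are illustrative, not from the chart under discussion):

```yaml
# A PersistentVolumeClaim: the data outlives any single Pod or host.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-data     # hypothetical name
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
---
# The Pod mounts the claim instead of an emptyDir; if the host dies, the
# backing volume (EBS, iSCSI, ...) can be re-attached on another node.
apiVersion: v1
kind: Pod
metadata:
  name: postgres          # hypothetical name
spec:
  containers:
    - name: postgres
      image: postgres:9.5
      volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: postgres-data
```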

~~~
gtaylor
> Kubernetes can re-attach that volume on another host, start up a new
> container and you are back in business.

There are still some bugs with this, particularly on AWS. Getting better with
every release, though.

~~~
mdaniel
We just played with Elastic File System mounted into a Pod via `nfs` and it
worked like a charm, with the additional "oh, wow" of being able to attach the
same EFS to several Pods at the same time. I was also thrilled that they
mounted with the root uid intact so there wasn't any kind of dumb permission
juggling.
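
For reference, mounting EFS this way is just the stock `nfs` volume type pointed at an EFS mount target (the server hostname below is illustrative):

```yaml
# An nfs volume backed by EFS; unlike EBS, the same filesystem can be
# mounted by several Pods at the same time.
apiVersion: v1
kind: Pod
metadata:
  name: efs-consumer          # hypothetical name
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: shared
          mountPath: /mnt/shared
  volumes:
    - name: shared
      nfs:
        server: fs-12345678.efs.us-east-1.amazonaws.com  # illustrative
        path: /
```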

I did say "played with" because we haven't beat on it to know if one could run
Jira, Gitlab, Prometheus, that kind of workload. I wouldn't at this point try
Postgres but maybe it'd work.

~~~
lobster_johnson
I wonder how suitable EFS is for Postgres. It's supposedly low-latency, high-
throughput and supports Unix locking semantics and so on. On the other hand,
it's NFS (one of the worst protocols out there), and there have been reports
of less than impressive latencies. EFS is also a lot more expensive than EBS.

~~~
mdaniel
_EFS is also a lot more expensive than EBS._

That may be true, but getting a k8s cluster unwedged from EBS volume state
mismanagement is expensive, too.

What I really want is the chutzpah to run GlusterFS, but I am not yet brave
enough to be in the keeping-a-production-FS-alive business.

------
jdubs
When I looked at it two months ago, I had issues getting it to build and
deploy on my cluster. After struggling to dissect even the simple examples, I
eventually wrote something similar with Python and Bash wrapped up in a Docker
container.

My requirements were deploying 30 different apps using pretty much every kind
of k8s object, plus deploying custom Consul configuration. It could have been
done with Helm, but the quick and dirty answer was Jinja2 to the rescue.
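
The quick-and-dirty approach amounts to keeping a Jinja2 template per manifest and rendering it once per app with a small context. A made-up sketch of such a template:

```yaml
# deployment.yaml.j2 -- hypothetical Jinja2-templated manifest, rendered
# per app with a context like {name, image, replicas}.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: {{ name }}
spec:
  replicas: {{ replicas | default(2) }}
  template:
    metadata:
      labels:
        app: {{ name }}
    spec:
      containers:
        - name: {{ name }}
          image: {{ image }}
```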

------
tzaman
This is a bad example, because it raises more questions than it answers:

\- what if the master pod dies (or I transfer it to another node pool)?

\- how do I make sure if _everything_ dies, my data is safe?

\- how do I use persistent disks in this scenario?

\- how can I have a service that handles both the master and the replicas?

\- what happens when I update the node pool to a new version of Kubernetes?

\- how can I achieve true HA?

------
meddlepal
Slightly off topic, but I'm evaluating Helm for use and I'm wondering about
people's experiences? Good? Bad? Run don't walk away?

~~~
pat2man
It's still very new. Getting better quickly but I wouldn't use it for anything
too complicated.

------
kozikow
I am curious why PetSets were not mentioned. I was planning to move my
Postgres to a PetSet. Do you find running the master in a single pod
sufficient?

~~~
lobster_johnson
Alpha quality, not stable, several limitations in the current release. Planned
for beta in 1.5 (next year, probably).

~~~
tzaman
1.5 is scheduled for early December, according to this:
[https://github.com/kubernetes/kubernetes/milestones](https://github.com/kubernetes/kubernetes/milestones)

------
fidget
God I hate helm. Buggy piece of shit

------
Annatar
One more completely custom method of provisioning, rather than using operating
system packages and simple shell scripts inside of their preinstall,
postinstall, preremove, and postremove sections. How wonderful.

