For folks not familiar with Helm, it's basically apt-get for Kubernetes, but with the ability to deploy complex multi-tier systems. It has now graduated out of the Kubernetes incubator.
And their Workflow product (also open source) is basically the smallest piece of software that lets you run Heroku buildpacks on top of Kubernetes. So you can get a 12-factor PaaS workflow, and still have the full Kubernetes API underneath if and when you need it.
Update: And I left out my all-time favorite piece of marketing collateral, their Children's Illustrated Guide to Kubernetes (available both as children's book and video): https://deis.com/blog/2016/kubernetes-illustrated-guide/
(Disclosure: I'm the executive director of CNCF and Gabe has been a super valuable member.)
I use Helm on a couple of different projects as an integral part of CI/CD for templating config files and Ingress resources. Coupled with a good CI/CD system (I'm using GitLab), Helm templating is pretty critical in my workflow for creating demo applications (one per PR) at a specific DNS endpoint, and then, after integration, rolling the updated code out to multiple DNS endpoints.
Congrats to the team, and I look forward to seeing where Helm can go from here!
I'm currently working on porting my manual manifests + kubectl k8s workflow to Helm + GitLab CI (to automatically build and push images and then deploy to the cluster). I'm only just at the stage of creating the Helm charts, though, so I'm very interested in any literature about how to set up a CI pipeline to build the Docker images, push them, bump the chart version, deploy to a staging environment for every PR, and so on.
Admittedly, I'm probably only using 50% of Helm's capabilities, because I'm primarily interested in its templating abilities rather than managing a centralized repo of apps that can be installed.
My current pattern (which I'm still iterating on) is that I have a `chart` directory which contains the Helm chart and templates, and TBH I don't really bother updating the version in `Chart.yaml` because I'm managing it in-repo with git tags, branches, etc. Then my gitlab CI scripts just `--set` a few variables so that the right docker image, DNS, and ingress path values are used for the branch.
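For concreteness, that kind of CI deploy step might look something like this (a sketch only: the chart path and the `image.tag`/`ingress.host` value names are invented for illustration, and the `command -v` guard just skips the call where helm isn't available):

```shell
#!/bin/sh
# Hypothetical per-branch CI deploy step; the chart path and value
# names (image.tag, ingress.host) are illustrative, not from a real chart.
BRANCH_SLUG="pr-42"      # in GitLab CI this would be $CI_COMMIT_REF_SLUG
IMAGE_TAG="abc123"       # in GitLab CI this would be $CI_COMMIT_SHA
RELEASE="demo-${BRANCH_SLUG}"
HOST="${BRANCH_SLUG}.demo.example.com"

# `upgrade --install` creates the release on the first pipeline run for
# the branch and upgrades it in place on every later run.
if command -v helm >/dev/null 2>&1; then
  helm upgrade --install "$RELEASE" ./chart \
    --set image.tag="$IMAGE_TAG" \
    --set ingress.host="$HOST"
fi
```

Since the release name is derived from the branch, each PR gets its own isolated deployment at its own hostname, and tearing it down after merge is just `helm delete` on the same name.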
This does mean that we're only using a small portion of Helm, and I wonder if a simpler tool wouldn't work just as well. All you need is templating, really.
I also don't understand why Helm requires "packaging" and "publishing" anything into a repo. If I commit the chart files to a GitHub repo, why can't Helm just go there and get them? I shouldn't need to run an HTTP server to serve some kind of special index. Git is a file server.
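For what it's worth, a chart "repo" is nothing more than static files (an index.yaml plus packaged .tgz tarballs), and for charts that live alongside the code you can skip repos entirely by pointing helm at the directory. A rough sketch, with made-up names (Helm 2-era syntax):

```shell
#!/bin/sh
# Option 1 (no repo at all): install straight from a chart directory
# in your checkout. The guard skips helm where it isn't installed.
if command -v helm >/dev/null 2>&1; then
  helm install ./chart --name myapp
fi

# Option 2: a chart repo is only static files. `helm package` produces
# a .tgz, and `helm repo index` writes the index.yaml clients fetch, so
# any static file host (including a plain git hosting raw-file URL
# behind it) can serve one.
mkdir -p repo-out
if command -v helm >/dev/null 2>&1; then
  helm package ./chart -d repo-out
  helm repo index repo-out --url https://example.com/charts
fi
```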
What I do like is the idea of Helm as a high-level deployment manager. Kubernetes deployments have nice rollback functionality, but you can't roll back multiple changes to things like configmaps and services, which I think Helm does? But Helm still wants those numbered versions...
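On the rollback point: Helm does record numbered revisions of the whole release (everything the chart rendered, ConfigMaps and Services included), and those are exactly the numbered versions being grumbled about. Illustratively, with a made-up release name:

```shell
#!/bin/sh
# Hypothetical release name. `helm history` lists the numbered revisions
# of a release; `helm rollback` restores every resource in the chart
# (Deployments, ConfigMaps, Services) to the chosen revision.
REL="myapp"
if command -v helm >/dev/null 2>&1; then
  helm history "$REL"
  helm rollback "$REL" 3   # roll the whole release back to revision 3
fi
```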
In my opinion, Helm ought to ditch the "package management for Kubernetes" approach, and instead focus on deployment. Right now it feels like it's straddling the line between the two, and coming up short because of that. Perhaps what I want should be a separate tool.
Edit: Two more things: First, I don't like how Helm abuses the term "release". To me, and to the rest of the world, a release is a specific version of an app. You upgrade to a release, you don't upgrade a release. I think you should rename this to something like "deployment", "installation", "instance", "target" or something similarly neutral.
Also, I have to say that templating ends up being a bit boilerplatey. There are things you typically want parameterized across all apps — things like the image name/tag, image pull policy, resources, ports, volumes, etc. The fact that every single chart has to invent its own parameters for this — which means your org has to find a way to standardize them across all apps — isn't so cool.
Edit: One more thing: Release names seem to be global across namespaces. That very much violates the point of namespaces. Name spaces. :-)
Sorry for the confusion!
To use a URL, I think you'd either have to push up the packaged chart as a .tgz file in your repo (which is annoying), or you'd have to package up the chart and create a GitHub release with the resulting .tgz file to be able to reference it that way. In GitLab you might be able to point to a pipeline artifact after packaging the chart using GitLab CI.
In one experimental project, I'm using GitLab Pages to package and publish the index. It works out surprisingly well, but has some shortcomings.
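In case it helps anyone trying the same thing, the Pages job can boil down to: package the chart, merge it into any existing index, and drop everything into `public/` (the directory GitLab Pages serves). The URL and paths below are placeholders:

```shell
#!/bin/sh
# Sketch of a GitLab Pages publish job body; PAGES_URL is hypothetical.
PAGES_URL="https://group.gitlab.io/charts"
mkdir -p public
if command -v helm >/dev/null 2>&1; then
  helm package ./chart -d public
  if [ -f public/index.yaml ]; then
    # --merge keeps the entries for previously published chart versions
    helm repo index public --url "$PAGES_URL" --merge public/index.yaml
  else
    helm repo index public --url "$PAGES_URL"
  fi
fi
```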
We're looking at making it easier for people to bring-your-own-helm-chart and have GitLab deploy it. There's value in keeping the chart in the project repo, and there's value in decoupling the chart and putting in a different repo. I'm fascinated to see which becomes best practice.
I can't say I'm thrilled to put deployment-specific values in the repo. Maybe the solution is to keep the chart itself, with its defaults, in the repo, and then have a configmap or third-party resource in Kubernetes that contains the values. To deploy anything, you use the Kubernetes API to find the app's values and merge them into the template.
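That idea is roughly expressible with today's tooling: keep a values.yaml in a ConfigMap, extract it at deploy time, and merge it over the chart's defaults. The ConfigMap name and key below are invented for the sketch:

```shell
#!/bin/sh
# Assumed ConfigMap name ("myapp-values") and data key ("values.yaml");
# the chart with its defaults stays in git, the overrides live in-cluster.
VALUES_OUT="values.from-cluster.yaml"
if command -v kubectl >/dev/null 2>&1 && command -v helm >/dev/null 2>&1; then
  kubectl get configmap myapp-values \
    -o 'go-template={{ index .data "values.yaml" }}' > "$VALUES_OUT"
  # -f merges the cluster-held values over the chart's built-in defaults
  helm upgrade --install myapp ./chart -f "$VALUES_OUT"
fi
```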
I wouldn't be surprised if one day Kubernetes might support that kind of templating. I also wonder what kind of system Google uses internally. They have been rather silent about offering good advice here.
I'm not planning on using all of helm either, and specifically, the whole chart repo feature is something that I don't see myself using at the moment.
The workflow I'm envisioning is one where I have a Project-A. Inside Project-A there is a chart directory containing the project chart, so each project contains its own chart instead of having the charts in a centralised repo. The chart is used for deployment, but its version isn't bumped manually; instead, the CI bumps it after (if) tests succeed. At that point a new Docker image is built, tagged with the bumped version (or build number), and pushed; then the chart is updated to use the new Docker image, gets its own version bumped (or build number), and is finally deployed to the staging environment. There should also be a next step for taking a chart deployed to staging and deploying it to production, but I'm not sure how that should be done yet.
I'm also not sure what the correct solution is for the chart itself: actually updating the chart in the repo from the CI, or just supplying runtime values to helm as you deploy the chart, overriding the defaults hardcoded in it. In any case, I'm envisioning a workflow where, for each PR and version tag you make to a repo, a new Docker image is produced and pushed and a new chart is produced and deployed, so that you have a complete history of images and charts for each PR and version.
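A rough sketch of that pipeline, under the simpler variant where the image tag is passed at deploy time with `--set` rather than committed back into the chart (the registry path, chart path, and value names are all placeholders):

```shell
#!/bin/sh
# Hypothetical pipeline body, run only after tests pass.
IMAGE="registry.example.com/group/project"
TAG="abc123"   # in GitLab CI this would be $CI_COMMIT_SHA or a build number

if command -v docker >/dev/null 2>&1; then
  docker build -t "$IMAGE:$TAG" .
  docker push "$IMAGE:$TAG"
fi

if command -v helm >/dev/null 2>&1; then
  # Deploy to staging; a later, manually triggered job could run the
  # same command against the production cluster or namespace.
  helm upgrade --install myapp-staging ./chart \
    --set image.repository="$IMAGE" \
    --set image.tag="$TAG"
fi
```

Since every image is tagged with the commit (or build number), the release history plus the registry together give you the per-PR audit trail you're describing, even without bumping Chart.yaml on each run.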
This gist of mine may help you get started with the CI portion (CI and k8s are an interesting combination; there is a lot you need to account for, since you're not given errors if a pod fails to deploy, etc.). It could easily be converted to helm upgrade instead of raw kubectl; I've kept it kubectl so people can see what's going on.
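On the "you're not given errors if a pod fails to deploy" point: one way to make the CI job actually fail is to block on the rollout after applying, e.g. (the deployment name is a placeholder):

```shell
#!/bin/sh
DEPLOY="myapp"
if command -v kubectl >/dev/null 2>&1; then
  # Blocks until the rollout completes and exits non-zero if it cannot,
  # which fails the CI job instead of reporting a false green.
  kubectl rollout status "deployment/$DEPLOY"
fi
```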
I'm hoping the monthly "Town Hall" style Zoom meetings will continue, but if they do, I expect them to start getting much larger pretty quickly with this news.
Obligatory disclaimer that I'm an engineer @ Deis working on Workflow daily.