
Congrats to Gabe and the whole Deis team on the acquisition.

For folks not familiar with Helm, it's basically apt-get for Kubernetes, but with the ability to deploy complex multi-tier systems. It has now graduated out of the Kubernetes incubator.
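Day to day it really does feel like a package manager; roughly (the chart and release names here are just examples):

    helm search redis            # find charts in the configured repos
    helm install stable/redis    # install a chart into your cluster as a "release"
    helm list                    # see what's deployed
    helm delete my-release       # tear a release back down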

And their Workflow product (also open source) is basically the smallest piece of software that lets you run Heroku buildpacks on top of Kubernetes. So you can get a 12-factor PaaS workflow, and still have the full Kubernetes API underneath if and when you need it.

Update: And I left out my all-time favorite piece of marketing collateral, their Children's Illustrated Guide to Kubernetes (available both as a children's book and a video): https://deis.com/blog/2016/kubernetes-illustrated-guide/

(Disclosure: I'm the executive director of CNCF and Gabe has been a super valuable member.)




I think Helm is sold short when it's described as apt-get for Kubernetes. It's probably substantially closer to chef/salt/puppet for k8s, with a little bit of apt-get in the form of service dependency management. Pulling down the binaries is all handled by Docker, but reusing and retooling config files is where Helm's real strength is, IMO.
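To give a flavor of what I mean by templating, a simplified sketch (not from any real chart):

    # values.yaml
    replicas: 2
    image:
      repository: nginx
      tag: "1.11"

    # templates/deployment.yaml
    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: {{ .Release.Name }}-{{ .Chart.Name }}
    spec:
      replicas: {{ .Values.replicas }}
      template:
        metadata:
          labels:
            app: {{ .Chart.Name }}
        spec:
          containers:
            - name: {{ .Chart.Name }}
              image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"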

I use Helm on a couple of different projects as an integral part of CI/CD for templating config files and Ingress resources. Coupled with a good CI/CD system (I'm using GitLab), Helm templating is pretty critical in my workflow for creating demo applications (one per PR) at a specific endpoint/DNS name, and then, after integration, rolling the updated code out to multiple DNS endpoints, etc.
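The deploy step in CI then boils down to something like this (the value names are just whatever your chart defines; CI_COMMIT_REF_SLUG and CI_COMMIT_SHA are GitLab's built-in variables):

    # per-PR demo environment at its own hostname
    helm upgrade --install "demo-$CI_COMMIT_REF_SLUG" ./chart \
      --set image.tag="$CI_COMMIT_SHA" \
      --set ingress.host="$CI_COMMIT_REF_SLUG.demo.example.com"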

Congrats to the team, and I look forward to seeing where Helm can go from here!


Have you written about your setup anywhere?

I'm right now working on porting my manual manifests + kubectl k8s workflow to Helm + GitLab CI (to automatically build and push images and then deploy to the cluster), but I'm only just at the stage of creating the Helm charts. So any literature about how to set up a CI pipeline to build the Docker images, push them, bump the chart version, deploy to a staging environment for every PR, and so on is something I'm very interested in.
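The rough shape I have in mind is something like this untested sketch (it assumes the runner can already reach the registry and the cluster, and that helm is configured):

    # .gitlab-ci.yml (sketch)
    stages:
      - build
      - deploy

    build-image:
      stage: build
      script:
        # registry auth / docker login omitted
        - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA" .
        - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"

    deploy-staging:
      stage: deploy
      environment: staging
      script:
        - helm upgrade --install my-app ./chart --set image.tag="$CI_COMMIT_SHA"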


We use Helm together with GitLab CI as well. To make that process smoother, we built landscaper [1]. It takes a desired state, which is a bunch of yamls; each yaml contains a chart reference and values (settings). When landscaper runs during CI, it looks at the actual state (the releases) and creates, updates, or deletes releases until they match the desired state. The desired state is under version control, so lately I rarely do any manual k8s work. Creating or updating a deployment is a matter of a pull request.

[1]: https://github.com/Eneco/landscaper
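Simplified, one desired-state yaml just pairs a chart reference with the values for that release; the repo has the exact format, but conceptually it's along these lines:

    name: my-api
    release:
      chart: our-repo/my-api:0.2.0
    configuration:
      replicas: 2
      image:
        tag: "1.4.1"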


This is great; we'll discuss integrating it with GitLab: https://gitlab.com/gitlab-org/gitlab-ce/issues/30748


Well this is awesome. Thanks for posting.


I haven't, but I've thrown together a gist [1] of the most-relevant configuration. In this case it's a pretty simple static page hosted on an nginx container, but the approach should scale to multiple services if I ever started using proper Helm dependency management. If you have any questions let me know, and I can also comment the gist a bit more as I get time today.

Admittedly, I'm probably only using 50% of Helm's capabilities, because I'm primarily interested in its templating abilities rather than managing a centralized repo of apps that can be installed.

My current pattern (which I'm still iterating on) is that I have a `chart` directory which contains the Helm chart and templates, and TBH I don't really bother updating the version in `Chart.yaml` because I'm managing it in-repo with git tags, branches, etc. Then my gitlab CI scripts just `--set` a few variables so that the right docker image, DNS, and ingress path values are used for the branch.

[1] https://gist.github.com/andrewstuart/8006a6f39ce5cb3fff7211e...


This is exactly the pattern that we are experimenting with. Glad to see that we're not alone in thinking that this is much better than the centralized chart repo way.

This does mean that we're only using a small portion of Helm, and I wonder if a simpler tool wouldn't work just as well. All you need is templating, really.


Templating is a good start; here are a few other things of value that Helm provides in this context: release management with upgrade/rollback, dependency management, the ability to create common charts and bring your own containers, and the testing framework for testing deployments on Kubernetes. I recently did a session at KubeCon about all these use cases if you're interested in taking a look. https://youtu.be/cZ1S2Gp47ng Happy to chat more if you're interested.
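Concretely, that's things like (the release name here is just a placeholder):

    helm upgrade my-release ./chart    # roll out a new revision of a release
    helm history my-release            # list the revisions Helm is tracking
    helm rollback my-release 3         # roll back to revision 3
    helm test my-release               # run the chart's test hooks in-cluster
    helm dep update ./chart            # fetch the charts this chart depends on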


Release management is desirable, but Helm's version-number-oriented release management is a bad match for Kubernetes and Git. We don't deal with version numbers; we use commit IDs, and we use GitHub.

I also don't understand why Helm requires "packaging" and "publishing" anything into a repo. If I commit the chart files to a GitHub repo, why can't Helm just go there and get them? I shouldn't need to run an HTTP server to serve some kind of special index. Git is a file server.

What I do like is the idea of Helm as a high-level deployment manager. Kubernetes deployments have nice rollback functionality, but you can't roll back multiple changes to things like configmaps and services, which I think Helm does? But Helm still wants those numbered versions...

In my opinion, Helm ought to ditch the "package management for Kubernetes" approach, and instead focus on deployment. Right now it feels like it's straddling the line between the two, and coming up short because of that. Perhaps what I want should be a separate tool.

Edit: Two more things: First, I don't like how Helm abuses the term "release". To me, and also the rest of the world, a release is a specific version of an app. You upgrade to a release; you don't upgrade a release. I think you should rename this to something like "deployment", "installation", "instance", "target" or something similarly neutral.

Also, I have to say that templating ends up being a bit boilerplatey. There are things you typically want parameterized across all apps — things like the image name/tag, image pull policy, resources, ports, volumes, etc. The fact that every single chart has to invent its own parameters for this — which means your org has to find a way to standardize them across all apps — isn't so cool.
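Concretely, nearly every chart ends up re-declaring some variation of this in its values.yaml, under its own naming scheme:

    image:
      repository: myorg/my-app
      tag: latest
      pullPolicy: IfNotPresent
    service:
      port: 80
    resources:
      limits:
        cpu: 100m
        memory: 128Mi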

Edit: One more thing: Release names seem to be global across namespaces. That very much violates the point of namespaces. Name spaces. :-)


You don't have to publish the chart. Installing from a directory in a GitHub repo works fine.


But then you don't get dependency tracking, and you also have to clone the repo to install it, instead of something like "helm install github.com/foo/bar#e367f6d".


Really? I started playing around with it, and it's not clear how to do this with Helm.


Ah, I think I understand the confusion now. I guess you're wanting to do something like `helm install https://github.com/some/url` and that doesn't work. But I was assuming you were consuming the chart in the same repo, or including it via a submodule, such that the chart would be a local file reference.

Sorry for the confusion!

To use a URL, I think you'd either have to push the packaged chart into your repo as a .tgz file (which is annoying), or you'd have to package up the chart and create a GitHub release with the resulting .tgz to be able to reference it that way. In GitLab you might be able to point to a pipeline artifact[0] after packaging the chart using GitLab CI.

In one experimental project[1], I'm using GitLab Pages to package and publish the index. It works out surprisingly well, but has some shortcomings.

[0]: https://docs.gitlab.com/ce/user/project/pipelines/job_artifa...

[1]: https://gitlab.com/charts/charts.gitlab.io
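The Helm side of the Pages job is basically just the standard package/index commands, something like (the URL is wherever the static files end up being served from):

    helm package ./chart                                  # produces my-chart-x.y.z.tgz
    helm repo index . --url https://example.gitlab.io/charts   # writes index.yaml pointing at the .tgz files
    # then publish index.yaml plus the .tgz files as static content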


`helm install path/to/chart/directory`


You're not alone and it's great to see others thinking this way.

We're looking at making it easier for people to bring-your-own-helm-chart[1] and have GitLab deploy it. There's value in keeping the chart in the project repo, and there's value in decoupling the chart and putting it in a different repo. I'm fascinated to see which becomes best practice.

[1]: https://gitlab.com/gitlab-org/gitlab-ce/issues/29969


We considered keeping all the charts together in one repo, but keeping each chart in its project's repo is just simpler for developers.

I can't say I'm thrilled to put deployment-specific values in the repo. Maybe the solution is to keep the chart itself, with its defaults, in the repo, and then have a configmap or third-party resource in Kubernetes that contains the values. To deploy anything, you use the Kubernetes API to find the app's values and merge them into the template.
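Something like this, maybe, with CI pulling it out of the cluster and passing it to helm with -f at deploy time:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: my-app-deploy-values
      namespace: staging
    data:
      values.yaml: |
        image:
          tag: "e367f6d"
        ingress:
          host: staging.example.com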

I wouldn't be surprised if Kubernetes one day supports that kind of templating natively. I also wonder what kind of system Google uses internally; they have been rather silent about offering good advice here.


I hope eventually you can use environment-specific variables[1] in GitLab to manage your deploy variables.

[1]: https://gitlab.com/gitlab-org/gitlab-ce/issues/20367


Awesome! I will be reading through the gist later today when I get back to working on it.

I'm not planning on using all of helm either, and specifically, the whole chart repo feature is something that I don't see myself using at the moment.

The workflow I'm envisioning is one where I have a Project-A. Inside Project-A there is a chart directory containing the project chart, so each project contains its own chart instead of having the charts in a centralised repo. The chart is used for deployment, but the version will not be bumped manually; instead it's bumped by the CI after (if) tests succeed. At that point a new docker image is built, tagged with the bumped version (or build number) and pushed; then the chart is updated to use the new docker image, gets its own version (or build number) bumped, and is finally deployed to the staging environment. There should also be a next step for taking a chart deployed to staging and deploying it to production, but I'm not sure how that should be done yet.

I'm also not sure what the correct solution is for actually updating the chart in the repo from the CI, versus just supplying runtime values to helm as you deploy the chart that override the values hardcoded in the chart. But in any case, I'm envisioning a workflow where, for each PR and version tag you make to a repo, a new docker image is produced and pushed, and a new chart is produced and deployed, so that you have a complete history of images and charts for each PR and version.


Yeah, I'd think you'd want to take advantage of specifying the image tag as a variable rather than committing the image tag back to the repo. Kind of defeats the point of helm to hard-code everything. :) Only bump the chart version when the configuration itself (outside of the image tag) changes.


Ah, yes, that makes sense!


I'm skimming through the comments here, so I apologize if you're already aware of this, but please feel free to drop by the #helm channel on the Kubernetes Slack with any questions. :)


I've been meaning to make a blog post about doing this on k8s.camp. I'll try to get to it this week/weekend and I'll let you know. I use helm to deploy several hundred services and microservices (stateless, stateful, etc).

This gist of mine may help you get started with the CI portion (CI and k8s is interesting; there's a lot you need to account for, since you're not given errors if a pod fails to deploy, etc.). It could easily be converted to `helm upgrade` instead of raw kubectl - I've kept it kubectl so people know what's going on.

https://gist.github.com/mikejk8s/0f805c3e7d0704cbea63db846a0...
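For the "no errors if a pod fails to deploy" part specifically, two things help (assuming a Deployment named my-app):

    # make the CI job block until the rollout finishes
    kubectl rollout status deployment/my-app
    # or, if you switch to helm, --wait does roughly the same thing
    helm upgrade --install my-app ./chart --wait --timeout 300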


Here's something I've put together. Hopefully it helps. https://youtu.be/NVoln4HdZOY


Ha, I just watched that. Someone posted it to me on the #helm channel on the Kubernetes Slack just after I had watched your KubeCon talk. Needless to say, I've watched your croc-hunter pipeline do its thing a few times now.


How does it compare to DC/OS on Mesos? Interested in hearing from people experienced in both.


The Children's Illustrated Guide to Kubernetes is pretty to look at, but it's terrible as a guide. I previously commented as to why: https://news.ycombinator.com/item?id=11927711


So the Deis PaaS is dead. Is this now an acqui-hire to get Kubernetes onto Azure?

https://deis.com/blog/2017/deis-paas-v1-takes-a-bow/


Brendan Burns (k8s co-founder) joined Microsoft last July, so this is certainly not their first move in the Kubernetes area.


Definitely not dead; it's actively developed and worked on. The release schedule has continued with an unbroken chain of monthly stable releases, while the v1 platform was sunset with plenty of advance warning.

I'm hoping the monthly "Town Hall" style Zoom meetings will continue, but if they do, I expect they'll start getting much larger quickly with this news.


Support for v1 was dropped earlier this year. v2 is still actively developed and worked on. It's even mentioned in the blog post.

Obligatory disclaimer that I'm an engineer @ Deis working on Workflow daily.


Oh wow, the CIGtK is really awesome.



