
Helm (the Kubernetes package manager) 3.0.0 has been released - mfer
https://helm.sh/blog/helm-3-released/
======
wikibob
Helm is problematic in my view.

To new Kubernetes users, Helm appears to be the "Official" package manager,
and the "right way to do it".

It's not.

Many Helm charts are out of date, have subtle bugs, and are not actively
maintained. Upstream projects often have their own way of customizing and
rendering manifests.

The biggest flaw in Helm is the decision to use text-templating for semantic
data. This leads to ridiculous situations like having to specify the number of
spaces to indent a block, so the output YAML can be parsed properly.
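To make that concrete, here's a rough Python sketch (illustrative only, not
Helm's code) of the problem: naively splicing a multi-line block into YAML
text breaks the nesting unless the template author indents it by exactly the
right number of spaces, which is what Helm's `indent`/`nindent` helpers exist
to do.

```python
# Illustrative sketch of why text-templating YAML forces the template
# author to think about indentation. Not Helm's actual code.
import textwrap

resources_block = "resources:\n  limits:\n    cpu: 100m"

# Naive substitution only indents the first line; the rest of the
# block lands flush-left, so "limits" is no longer nested under
# "resources" and the YAML means something different.
naive = "containers:\n- name: app\n  %s" % resources_block

# The fix is the moral equivalent of Helm's `indent`/`nindent`:
# the author must know the block belongs exactly 2 spaces deep here.
indented = "containers:\n- name: app\n" + textwrap.indent(resources_block, "  ")

print(naive)
print(indented)
```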

Promising alternatives are:

* Jsonnet [0]

See kube-prometheus [1] for a complex project that uses Jsonnet to accomplish
things that would be much more difficult using pre-defined templating.

* Cue [2]

From the creator of BCL/GCL (the Borg/Google Config Language), which is
essentially what Jsonnet is. He's written quite a bit about Cue and how he
intends it to address the shortcomings of Jsonnet.

[0] https://jsonnet.org/articles/kubernetes.html

[1] https://github.com/coreos/kube-prometheus

[2] https://cuelang.org/docs/about/

~~~
mfer
Two things to consider...

First, there has been a lot of talk about how much work it is to onboard to
Kubernetes. There have even been recent discussions in Kubernetes SIGs. In UX
terms, Kubernetes has a high barrier to entry.

I realize that many people who use and know Kubernetes like it. But we are a
group that has gotten past the high barrier to entry. Most have not.

Second, Helm is a package manager. It's like apt, yum, homebrew, and others.
It lets someone who knows how to run an application (the ops business logic)
on a platform (like Debian or k8s) package it up so that someone else who
doesn't know any of this can easily run the application. This is why so many
use brew and apt to install something rather than installing everything by
hand.

Something like jsonnet doesn't solve for these situations.

I know some folks don't like text-templating. I'm not a big fan of go
templating. But, templating is something that people are used to and generally
comfortable with. This means it has a fairly low barrier to entry.

Notice that a lot of why Helm does things the way it does is to reduce
friction for a large audience: to reduce the barrier to entry.

~~~
bshipp
This resonated with my recent experience. I have been using Docker and docker-
compose quite comfortably for a few years now. I thought I'd try dipping my
toes into Kubernetes to extend my containerization skill-set and learn
something new.

I ended up shelving it until I have time to take a proper course and dedicate
the time to it. Docker compose was very easy for me to learn, with a few easy
early "wins" and reasonably comprehensive, helpful documentation when I
wanted to expand into new areas. For some reason, I really struggled to grok
how to structure a k8s chart. Maybe I'll try again after a good night's sleep.

~~~
curryst
Have you tried plain manifests? Unless you need to support a lot of
permutations (because of many environments, or you're distributing it and need
to support many deployment options), Charts are generally a suboptimal way to
get things running in k8s.

The major "wins" for using charts usually come from being able to use the
same set of manifests with different values interpolated in. That, in turn,
is really only valuable if you're supporting a large number of permutations
that need to be kept in sync.
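That "same manifests, different values" win can be sketched in a few lines of
Python (a toy stand-in for chart rendering; the field names here are
illustrative, not a real chart):

```python
# Toy sketch of per-environment value interpolation, the main thing
# charts buy you. Not Helm itself -- just stdlib string templating.
from string import Template

manifest = Template(
    "apiVersion: apps/v1\n"
    "kind: Deployment\n"
    "metadata:\n"
    "  name: $name\n"
    "spec:\n"
    "  replicas: $replicas\n"
)

# One set of manifests, two permutations kept in sync by construction.
environments = {
    "local": {"name": "web", "replicas": 1},
    "production": {"name": "web", "replicas": 5},
}

rendered = {env: manifest.substitute(values) for env, values in environments.items()}
print(rendered["production"])
```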

~~~
bshipp
Thanks very much for the suggestion! I really didn't know where to start and
the first few sites seemed to dive right into charts but I'll check out
manifests this time. Thanks again.

------
rcconf
Helm was great when I first started to learn Kubernetes, but overall it has
been extremely annoying and I would not recommend it for anything. I
personally no longer use it on any cluster, and if a specific piece of
software in a cluster is on Helm, I immediately feel dread, since I have no
idea how that chart is going to behave on an upgrade or a change.

A recent example of Helm being a problem for me is their Elasticsearch chart.
I tried to shut down my data nodes to increase disk space on all of them, but
I noticed after scaling down that the first instance was taking a _very_ long
time to shut down.

Turns out, there's a default pre-stop lifecycle hook (pre-stop.sh) that
drains the node and rebalances the shards to the other nodes. (To me, this is
a bad idea; the cluster operator should control draining and rebalancing the
nodes.)

Guess what: there's no way to comment out the draining functionality, since
the shell script is mounted as a read-only ConfigMap. Annoyingly, the only
solution was to disable rebalancing across the entire cluster, then jump onto
each node and run $ kill on the shell script's process.

I've run into strange defaults like that multiple times. I've run into images
randomly being changed, and charts being upgraded but breaking your live
deployment. Overall, just a nightmare.

Kubernetes is actually quite understandable and consistent. Helm adds a layer
of complexity that makes you scared to upgrade or change your cluster. If you
stick to Kubernetes primitives like Deployments and StatefulSets, it's super
easy to know what changes are going to occur in your cluster.

------
cfors
This is a big step forward for Helm. Tiller has been a major pain point in
the past, although I understand the design decisions that put it there.

> In Helm 3, we now use a three-way strategic merge patch. Helm considers the
> old manifest, its live state, and the new manifest when generating a patch.

I will have to test this out, but honestly I think that any sort of
"strategic" merge is probably the wrong approach to take when dealing with
so-called "idempotent" infrastructure.
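The quoted behavior can be sketched roughly like this (a hand-rolled
approximation over flat dicts, not Kubernetes' actual strategic-merge-patch
logic): take the chart's own changes (old manifest vs. new manifest), but
preserve live, out-of-band edits the chart doesn't contradict.

```python
# Rough approximation of a three-way merge: old manifest, live state,
# new manifest. Not Kubernetes' real strategic merge patch.
def three_way_merge(old, live, new):
    result = dict(live)  # start from live state, keeping in-cluster edits
    for key, new_value in new.items():
        if old.get(key) != new_value:  # the chart changed this field
            result[key] = new_value
    for key in old:
        if key not in new:  # the chart deleted this field
            result.pop(key, None)
    return result

old = {"image": "nginx:1.16", "replicas": 3}
# Someone injected a sidecar directly in the cluster:
live = {"image": "nginx:1.16", "replicas": 3, "sidecar": "injected"}
new = {"image": "nginx:1.17", "replicas": 3}

# The image bump is applied, the live sidecar edit survives.
print(three_way_merge(old, live, new))
```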

~~~
ses1984
Strategy is how you deal with idempotency and consistency in a distributed
environment.

Your manifest is idempotent but isn't applied instantly.

You can apply a manifest and end up in some broken state; how do you recover?
What if you try to roll back and things get worse?

~~~
cfors
I suppose that applies if you're not using GitOps-style deployments, but Git
has proven to work well for this exact process. If you are tracking your
releases as part of a Git commit, then you roll back with that.

Doesn't seem that complicated, but maybe I'm missing some benefits of this
strategy that others with more experience managing K8s resources have had.

Edit: In retrospect, you also don't need etcd to store n copies of your K8s
resources in Secrets, where n is the number of potential rollbacks; it's all
stored in a Git repository.

------
janpot
String templating of YAML files. Everything is in global scope. No sane way
of reusing parts of a template.

We use helm, but for the life of me, there are some pretty poor design choices
in there.

~~~
mfer
You can reuse parts of templates. One method is to create named templates
that are not automatically turned into YAML files and reuse them as needed.
There are docs on it at
https://helm.sh/docs/topics/chart_template_guide/named_templates/#declaring-and-using-templates-with-define-and-template

~~~
janpot
ah yes, this technique, with its lack of template inputs and its way of (not)
handling indentation, is exactly what I mean.

~~~
mfer
I'm sorry for the confusion. Go templates have piping, which is how this is
expected to be handled. There are `nindent` and `indent` functions that
handle the indentation, and inputs can be passed in. In fact, the chart
created by `helm create` does this. For example, if you do `helm create foo`
you can find snippets that read `{{- include "foo.labels" . | nindent 4 }}`.
This includes a template, passes in `.` (the scope of variables for that
template), and passes the output to `nindent` to get the needed indentation.

This is just an example, and it's the Go templates way of doing things.
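To show what that pipeline is doing, here's a toy Python re-implementation of
the two helpers. (`include` and `nindent` are real Helm template functions;
the Python versions below are simplified stand-ins, and the "foo.labels"
template body is made up for illustration.)

```python
# Toy stand-ins for Helm's `include` and `nindent` helpers, to show the
# semantics of `include "foo.labels" . | nindent 4`. Illustrative only.
import textwrap

templates = {"foo.labels": "labels:\n  app: foo\n  tier: web"}

def include(name, scope=None):
    # Helm's `include` renders a named template to a string; this toy
    # version just looks it up (the scope argument is unused here).
    return templates[name]

def nindent(n, text):
    # `nindent` = a leading newline, then every line indented n spaces.
    return "\n" + textwrap.indent(text, " " * n)

# Equivalent of: metadata:{{- include "foo.labels" . | nindent 4 }}
rendered = "metadata:" + nindent(4, include("foo.labels"))
print(rendered)
```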

~~~
mamon
And that's exactly the problem. With a better templating language you
shouldn't need this "| nindent 4" expression at all.

~~~
mfer
I see, the problem is with the Go templating syntax.

So, more context...

Helm was designed to support more than one templating system. The problem was
that the majority of people were OK with Go templates, so no one contributed
support for other systems.

------
tlynchpin
Many projects don't even need cluster state; they could have just used a
Makefile and some sed templating [0] (get off my lawn). Thankfully, in these
cases you can use helm offline to render the templates and then just apply
them, but it's not great.

Thus I'm sour on Helm; I'm rooting for Kustomize [1].

[0] https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/dns/kube-dns/Makefile

[1] http://kustomize.io/

~~~
gregwebs
kustomize only covers some simple (but common) use cases. If you can use it,
you should, since its modifications are semantic. Generally I find it is good
for handling known differences between a few environments, like local and
production. But it isn't capable of replacing helm.

------
jonbronson
The Helm team did a great job addressing community concerns. I love to see
this type of engagement and commitment to users.

------
SrslyJosh
> After hearing how community members were using Helm in certain scenarios, we
> found that Tiller’s release management system did not need to rely upon an
> in-cluster operator to maintain state or act as a central hub for Helm
> release information. Instead, we could simply fetch information from the
> Kubernetes API server, render the Charts client-side, and store a record of
> the installation in Kubernetes.

Oh, so now you're doing the _obvious_ and _simple_ thing? FFS...

~~~
bacongobbler
Helm Classic was introduced at the first KubeCon... Wayyy back in November
2015. Kubernetes 1.1.1 was released earlier that month, and 1.0.0 shipped only
4 months prior to that in July.

Back then, Kubernetes had no concept of a ConfigMap. ReplicationControllers
were all the hype (remember those?). The Kubernetes API was changing rapidly.
When Helm 2 was being built, we needed an abstraction layer from the
Kubernetes API to allow ourselves some room to guarantee backwards
compatibility. Tiller was created as that abstraction layer. It provided us
with a layer where we could control the input (via gRPC), the output (also via
gRPC), and provide some backwards compatibility guarantees to users. We're
pretty proud of the fact that Helm has maintained a strong commitment to
backwards compatibility since 2.0.0.

Over time, Kubernetes' API layer has become more stable. Helm 3 is our
opportunity to refactor out some of those protective layers we put in place 4
years ago, Tiller being one of them.

Hope this helps provide some context.

------
gregwebs
My wish for 4.0 is to decouple the pieces of Helm. It has always done too
many things: package management, templating, and deployment lifecycle
management.

Helm seems good enough as a package manager. But it forces me to use one
particular text templating system. Why not allow a package to generate
manifests with any system that it likes? This can be done reproducibly with a
docker image in the k8s cluster (generate the manifests and send the manifests
back to helm to apply them).

Deployment lifecycle management is complicated, and I can see why it is being
coupled to templating values, but I don't think that must be the case.

It is great to see the 3.0 release just getting rid of Tiller; that's a
useful step in this direction.

------
15155
What happened to Lua plugin support that was discussed early on for v3?

~~~
leg100
Looks as though they'll tack it on later:

[https://github.com/helm/helm/issues/5084](https://github.com/helm/helm/issues/5084)

It does beg the question of what they have been doing all this time. V3 has
been a long time in the making.

~~~
bacongobbler
Removing a significant piece of the architecture was a large undertaking.
There are many parts of the system that required a redesign. Throwing on yet
another major piece of work (including developing a new Lua VM from the
ground up) would've delayed the release well into next year. Many users just
wanted Helm without Tiller.

To provide some perspective, over 80,000 lines of code were changed between
2.16.1 and 3.0.0, including test infrastructure, the architecture,
documentation...

We started development on Helm 3 last year in March, shortly after the first
Helm Summit in Portland. A complete redesign of the architecture in 20
months' time seems pretty par for the course for a project of this size.

------
techntoke
Hopefully this will help Helm gather more adoption.

~~~
whalesalad
Helm has always confused me. It’s an abstraction that seems to make things
better and worse at the same time.

~~~
cfors
It sure is a leaky abstraction [0]. I would argue that its benefit is that,
when spinning up your first Kubernetes cluster, the many community charts in
the default repo can be installed easily with a `helm install stable/mysql`.

However, it fails when the reality of running a service in production hits
and you need to dive into changing a chart template, requiring knowledge of
not just the Kube API but also, more often than not, ugly template-fu to get
the changes that you want reflected.

I really just recommend using raw K8s resource files for almost everything
unless you are managing hundreds of similar releases.

[0] https://www.joelonsoftware.com/2002/11/11/the-law-of-leaky-abstractions/

~~~
whalesalad
Agree. We built a management layer that takes a DSL (essentially kube YAML
without the boilerplate required for common stuff) and pipes it on through.

If you’re using Kube you need to understand Kube. Conveniences are great to
accelerate common tasks but as you say at some point ya gotta get into the
guts of it and tweak things.

