
Draft, Gitkube, Helm, Ksonnet, Metaparticle, Skaffold – Comparison of K8s Tools - alberteinstein
https://blog.hasura.io/draft-vs-gitkube-vs-helm-vs-ksonnet-vs-metaparticle-vs-skaffold-f5aa9561f948
======
_5meq
I'm going to use this opportunity to get up on my soap-box and talk about
helm, the "recommended" way to install kubernetes packages.

Helm is not a useful abstraction, and needs to change in a BIG way before I
consider using it over k8s_raw and ansible.

My main complaint with helm is that it doesn't allow one to develop decent
abstractions over the core kubernetes resources. IMO it needs to move away
from `gotpl` and implement the ability to use more sophisticated templating
libraries.

I think this block of text elucidates my point:

```

metadata:
  name: {{ template "drupal.fullname" . }}
  labels:
    app: {{ template "drupal.fullname" . }}
    chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    release: "{{ .Release.Name }}"
    heritage: "{{ .Release.Service }}"

```

This block of code is in every single resource in every single helm chart. And
it's one of MANY similar blocks of code that appear in basically every helm
chart. I do not particularly enjoy writing the same massive YAML file over and
over again.

Configuration management with helm is difficult. It would be nice to be able
to declare "required" fields within an application and enforce them with an
error message when they are not provided. Currently, there is no way to do
this other than having a "required" template that iterates through your
"requiredFields" field and calls `{{ .Values.fieldName | required }}`.
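
The workaround described above might look roughly like this sketch (the chart name, helper name, and `requiredFields` layout are illustrative, not from any real chart):

```
{{- /* _helpers.tpl: fail rendering with a clear error if any field
       listed in .Values.requiredFields is missing from .Values */ -}}
{{- define "mychart.checkRequired" -}}
{{- range .Values.requiredFields }}
{{- required (printf "value %q must be set" .) (index $.Values .) -}}
{{- end }}
{{- end -}}
```

Each resource template would then have to `include "mychart.checkRequired" .` to trigger the validation, which is exactly the kind of boilerplate being complained about.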

Interacting with the helm API server is difficult, which makes integrating
helm with other configuration tools (specifically Ansible) difficult. This is
also partly k8's fault (and my fault, I suppose, because I could write it
myself...), because port-forwarding into the k8 API server isn't implemented
in any language other than Go.

~~~
atombender
Agreed. Unstructured text templating to generate structured YAML is a terrible
idea, as was previously shown with Ansible and Salt. (Salt provides a way
to generate its YAML declaratively using Python, which seems to me a much
better idea. Not sure about Ansible?)

I think Jsonnet is an intriguing idea here. Jsonnet is not well known, but
apparently popular inside Google. It has variables and iteration, and allows
you to cobble together structures by merging them, iterating over them, and so
on, and the output is JSON, which happens to be what the Kubernetes API uses
anyway. (There is a system that uses Jsonnet with Kubernetes, called Ksonnet,
but that tool seems bafflingly overengineered to me.)
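
A tiny sketch of what this looks like in practice (assuming nothing beyond core Jsonnet; the names are illustrative):

```
// Shared labels defined once as a function, merged into each resource
local labels(name) = { app: name, release: "prod" };

local deployment(name) = {
  apiVersion: "apps/v1",
  kind: "Deployment",
  metadata: { name: name, labels: labels(name) },
};

// `jsonnet app.jsonnet` emits plain JSON, which the Kubernetes API accepts
deployment("drupal")
```

The repetition that gotpl forces into every chart becomes an ordinary function call.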

I've been dwelling on another idea: A system where you simply push all your
templates -- what Helm calls a chart -- to Kubernetes as a single CRD (e.g.
"kind: Template"). The templates reference variables, which you then push
separately as another CRD ("kind: Vars"). Then the final component is a
controller that listens to changes to templates and variables, and whenever
one changes, expands the templates, compares them with the current manifests,
and applies any differences.

To achieve controlled rollouts and rollbacks, you have a system on top that's
similar to Helm, but it can be completely separate. You version the
templates/vars as releases, not the underlying manifests generated by them.
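
Sketched as manifests, the idea might look something like this (the `Template` and `Vars` kinds, the API group, and the controller are all hypothetical, as described above):

```
apiVersion: example.io/v1
kind: Template
metadata:
  name: myapp
spec:
  manifests: |
    kind: Deployment
    metadata:
      name: {{ .name }}
---
apiVersion: example.io/v1
kind: Vars
metadata:
  name: myapp-prod
spec:
  values:
    name: myapp-prod
# A controller watches both kinds; on any change it expands the templates
# with the vars, diffs against the live manifests, and applies differences.
```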

~~~
wstrange
Helm Version 3 will adopt CRDs for release and application state. There will
no longer be a server side component (tiller).

It should be much easier to integrate other tools or templating systems with
V3.

[https://github.com/kubernetes-helm/community/blob/master/hel...](https://github.com/kubernetes-helm/community/blob/master/helm-v3/000-helm-v3.md)

~~~
lobster_johnson
Interesting, but I never saw Tiller as an issue. Quite the opposite, I want
less client and more server.

------
jitl
The breadth of different tool styles for Kubernetes is really interesting.

At Airbnb we’ve built a tool that’s most analogous to Ksonnet or Forge, but
trades a lot of flexibility/extensibility for simplicity. We have this pre-
baked model that your kube config has to fit into, and we don’t allow much
direct pass-through from our config files to raw Kubernetes manifests. The
upside is that this limits the divergence of configuration between any two
services, which greatly reduces bus factor and improves maintainability. I’ve
seen many a nightmare in our chef monorepo – no need to go back to ultra-
flexible (even Turing complete!) configuration.

Another key feature is that there’s no special daemon or whatever required.
Our build/deploy system runs the same commands you run on your laptop to
build/deploy development versions to your dev kube. This increases anyone’s
ability to introspect and debug the system, versus a tool with a daemon that’s
doing who-knows-what, and you struggle to get debug access to the right parts.

I hope we can open source it one day, but right now it’s under heavy
development and has lots of Airbnb-specific magicks built-in.

------
pyronicide
This article is pretty focused on the build type of k8s developer tools. There
are other types as well that can fit into specific places in your workflow.
Joe Beda does a fantastic high level overview of the different types in last
week's #tgik8s:
[https://www.youtube.com/watch?v=QW85Y0Ug3KY](https://www.youtube.com/watch?v=QW85Y0Ug3KY)
.

Here are a couple of tools that try to tackle this problem in different ways:

\- Ksync - `docker run -v local:remote` for your k8s cluster
([https://github.com/vapor-ware/ksync](https://github.com/vapor-ware/ksync))

\- Telepresence - extend the cluster network locally
([https://www.telepresence.io/](https://www.telepresence.io/))

\- Forge - end to end development through deployment
([https://forge.sh/](https://forge.sh/))

~~~
alberteinstein
Yes, ksync and telepresence let you develop on a k8s cluster as if you were
working on your local system. Haven't looked at forge yet.

These tools are really useful when you're writing code dependent on other
components or features on the cluster (like DNS). They save you a lot of
port-forwarding and environment-variable hassle, and give you high-velocity
development flows where the build-push-deploy cycle used to be quite long.

------
mmillin
One tool not mentioned in the article is Jenkins X [0]. It leverages some of
the technologies from draft to create a GitOps workflow for full build-push-
deploy with CI and CD.

[0] [http://jenkins-x.io/](http://jenkins-x.io/)

~~~
tirumaraiselvan
One thing that differentiates these tools from Jenkins X is that they have
synchronous flows rather than the async webhooks model of Jenkins X. The sync
model is better suited to the immediate feedback you need in the development phase.

------
p3llin0r3
[https://keel.sh/](https://keel.sh/) is also a great release manager / chatops
framework.

edit: I had keel.io which is some random company. The site I wanted was
keel.sh

~~~
irickt
spam? Link is about portfolio analysis.

~~~
p3llin0r3
I'm not spam! Also keel is completely free and open source!

Edit: I now see what happened, fixed.

~~~
irickt
Sorry, is the link wrong?

------
matteeyah
At GitLab we're building an automated workflow that takes you from zero to k8s
called Auto DevOps.

Some of the interaction points between Auto DevOps and Kubernetes are:

\- deploying review apps

\- deploying production applications

\- supporting deployment of staging and canary environments

\- monitoring performance

You can also:

\- easily create clusters on GKE through the GitLab UI

\- easily install "helper" applications to set up the cluster

\- easily install GitLab Runner to run Auto DevOps jobs on the cluster

You can read more about it here:
[https://docs.gitlab.com/ee/topics/autodevops/](https://docs.gitlab.com/ee/topics/autodevops/)

