Draft, Gitkube, Helm, Ksonnet, Metaparticle, Skaffold – Comparison of K8s Tools (hasura.io)
79 points by alberteinstein on March 30, 2018 | 22 comments



I'm going to use this opportunity to get up on my soap-box and talk about helm, the "recommended" way to install kubernetes packages.

Helm is not a useful abstraction, and needs to change in a BIG way before I consider using it over k8s_raw and ansible.

My main complaint with helm is that it doesn't allow one to develop decent abstractions over the core kubernetes resources. IMO it needs to move away from `gotpl` and implement the ability to use more sophisticated templating libraries.

I think this block of text elucidates my point:

```
metadata:
  name: {{ template "drupal.fullname" . }}
  labels:
    app: {{ template "drupal.fullname" . }}
    chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    release: "{{ .Release.Name }}"
    heritage: "{{ .Release.Service }}"
```

This block of code is in every single resource in every single helm chart. And it's one of MANY similar blocks of code that appear in basically every helm chart. I do not particularly enjoy writing the same massive YAML file over and over again.

Configuration management with Helm is difficult. It would be nice to be able to declare "required" fields within an application and enforce them with an error message if they are not provided. Currently, there is no way to do this other than having a "required" template that iterates through your own "requiredFields" value and calls `{{ .Values.fieldName | required }}` on each one.
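For illustration, that workaround looks roughly like this, assuming a Helm recent enough to have the `required` function; the `requiredFields` value and the `mychart.checkRequired` helper name are made up for this sketch:

```
{{/* _helpers.tpl -- rough sketch of the workaround, not from any real chart */}}
{{- define "mychart.checkRequired" -}}
{{- range $field := .Values.requiredFields }}
{{- $_ := required (printf "%s must be set in values.yaml" $field) (index $.Values $field) }}
{{- end }}
{{- end -}}
```

And then every template that wants the check has to remember to call `{{ include "mychart.checkRequired" . }}` somewhere near the top.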

Interacting with the Helm API server is difficult, which makes integrating Helm with other configuration tools difficult, specifically Ansible. This is also partly k8s's fault (and my fault, I suppose, because I could write it myself...), because port-forwarding into the k8s API server isn't implemented in any language other than Go.


Agreed. Unstructured text templating to generate structured YAML is a terrible idea, as was previously shown with Ansible and Salt. (Salt provides a way to generate its YAML declaratively using Python, which seems to me a much better idea. Not sure about Ansible?)

I think Jsonnet is an intriguing idea here. Jsonnet is not well known, but apparently popular inside Google. It has variables and iteration, and allows you to cobble together structures by merging them, iterating over them, and so on, and the output is JSON, which happens to be what the Kubernetes API uses anyway. (There is a system that uses Jsonnet with Kubernetes, called Ksonnet, but that tool seems bafflingly overengineered to me.)

I've been dwelling on another idea: a system where you simply push all your templates -- what Helm calls a chart -- to Kubernetes as a single CRD (e.g. "kind: Template"). The templates reference variables, which you then push separately as another CRD ("kind: Vars"). Then the final component is a controller that listens for changes to templates and variables and, whenever one changes, expands the templates, compares them with the current manifests, and applies any differences.
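As a very rough sketch of the shapes involved (none of these kinds or API groups exist; everything here is purely illustrative):

```
apiVersion: example.io/v1alpha1   # hypothetical API group
kind: Template
metadata:
  name: guestbook
spec:
  manifests: |
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: {{ .vars.name }}
    spec:
      replicas: {{ .vars.replicas }}
---
apiVersion: example.io/v1alpha1
kind: Vars
metadata:
  name: guestbook-prod
spec:
  name: guestbook-prod
  replicas: 3
```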

To achieve controlled rollouts and rollbacks, you have a system on top that's similar to Helm, but it can be completely separate. You version the templates/vars as releases, not the underlying manifests generated by them.


Helm version 3 will adopt CRDs for release and application state. There will no longer be a server-side component (Tiller).

It should be much easier to integrate other tools or templating systems with V3.

https://github.com/kubernetes-helm/community/blob/master/hel...


Interesting, but I never saw Tiller as an issue. Quite the opposite, I want less client and more server.


I've used both ansible and salt extensively.

Ansible is similar to salt, but is imperative as opposed to declarative. I've found I prefer Ansible for the scale I work at. Salt has a better eventing and provisioning story though, and I can see the value in declaring your environment's "state" using pillars and states. There's just a lot more to Salt and I like Ansible's simplicity.

In Ansible/k8s_raw, you can declare your k8s resources in Ansible as real YAML, you can template YAML using the Jinja2 templating language, or you can generate YAML with a Python library and use THAT YAML.
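For example, a minimal k8s_raw task with an inline definition might look something like this (the namespace name is just illustrative):

```
# Sketch of an Ansible task using the k8s_raw module (Ansible 2.5 era)
- name: Ensure the app namespace exists
  k8s_raw:
    state: present
    definition:
      apiVersion: v1
      kind: Namespace
      metadata:
        name: my-app
```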

Ansible is a full-scale configuration management and provisioning library that happened to implement a Kubernetes module, whereas Helm is only a package manager.


Have you taken a look at some of the Helm 3 proposals? One of the points that talks about this exact use case is the concept of a "library chart", which is a way to define a block of code once and import it into another chart. In your case you could then define a block of code that spits out the set of Kubernetes labels shared across all your charts.

https://github.com/kubernetes-helm/community/blob/master/hel...
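A rough sketch of what that could look like for the labels block from the parent comment (the `common.labels` helper name, and shipping it via a library chart, are my assumptions, not the proposal's exact syntax):

```
{{/* in a shared library chart */}}
{{- define "common.labels" -}}
app: {{ template "drupal.fullname" . }}
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
release: "{{ .Release.Name }}"
heritage: "{{ .Release.Service }}"
{{- end -}}
```

so each resource's metadata shrinks to:

```
metadata:
  name: {{ template "drupal.fullname" . }}
  labels:
{{ include "common.labels" . | indent 4 }}
```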

Your other point, that it's difficult to interact with the Kubernetes and Tiller APIs in any language other than Go, is true, though it isn't necessarily a Helm thing; it's a Kubernetes thing. Tiller is going away in Helm 3, so the side effect should be that Helm becomes much simpler to extend.


Check out ksonnet, it is closer to what you're looking for. (https://ksonnet.io/)


KSonnet looks great, but I think it needs to mature a bit more.

PLUS, under the hood of Ksonnet is a massive amount of Jsonnet code that I'm not very familiar with, and I like to learn my tools thoroughly before I begin using them. So it's a big task to get started with it.


I've been using it in production for close to a year and I run into the same issues. My biggest problem is really with the package manager metaphor and the fact that writing charts that can actually be installed and used in all/most use cases is really hard. To give a recent example: a chart for a log collector daemonset that allows you to pass in an external configmap name if you want to specialize the mounted .conf, but does not parameterize the 'checksum/config' annotation that is used to prompt a deployment update if the configmap changes.
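For reference, the usual idiom hard-codes a checksum of the chart's own ConfigMap template, something like this (illustrative snippet of the common pattern):

```
spec:
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
```

If you swap in an externally managed ConfigMap, that checksum never changes, so the DaemonSet pods never roll when your config does.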

Another is the inability to access cluster metadata from a template, so if something needs to know the podCidr or serviceCidr for example you have to inject that through a side channel.

I'm sure all of this will improve over time, as it has been, but right now I'm more attracted to ksonnet.


This article is pretty focused on the build-oriented type of k8s developer tools. There are other types as well that can fit into specific places in your workflow. Joe Beda does a fantastic high-level overview of the different types in last week's #tgik8s: https://www.youtube.com/watch?v=QW85Y0Ug3KY

Here are a couple of tools that try to tackle this problem in different ways:

- Ksync - `docker run -v local:remote` for your k8s cluster (https://github.com/vapor-ware/ksync)

- Telepresence - extend the cluster network locally (https://www.telepresence.io/)

- Forge - end to end development through deployment (https://forge.sh/)


Yes, ksync and telepresence let you develop on a k8s cluster as if you were working on your local system. Haven't looked at forge yet.

These tools are really useful when you're writing code that depends on other components or features of the cluster (like DNS). They save you a lot of port-forwarding and environment-variable hassle, and give you a high-velocity development flow where previously you had quite a long build-push-deploy cycle.


Thanks for linking to that video. Do you follow any other series or blogs to learn more about K8s?


The breadth of different tool styles for Kubernetes is really interesting.

At Airbnb we've built a tool that's most analogous to Ksonnet or Forge, but trades a lot of flexibility/extensibility for simplicity. We have a pre-baked model that your kube config has to fit into, and we don't allow much direct pass-through from our config files to raw Kubernetes manifests. The upside is that this limits the divergence of configuration between any two services, which greatly reduces bus factor and improves maintainability. I've seen many a nightmare in our Chef monorepo; no need to go back to ultra-flexible (even Turing-complete!) configuration.

Another key feature is that there's no special daemon or whatever required. Our build/deploy system runs the same commands you run on your laptop to build/deploy development versions to your dev kube. This increases anyone's ability to introspect and debug the system, versus a tool with a daemon that's doing who-knows-what, where you struggle to get debug access to the right parts.

I hope we can open source it one day, but right now it’s under heavy development and has lots of Airbnb-specific magicks built-in.


One tool not mentioned in the article is Jenkins X [0]. It leverages some of the technologies from Draft to create a GitOps workflow for the full build-push-deploy cycle with CI and CD.

[0] http://jenkins-x.io/


One thing that differentiates these tools from Jenkins X is that they have synchronous flows rather than the async webhook model of Jenkins X. The sync model is better suited to the immediate feedback you need during the development phase.


https://keel.sh/ is also a great release manager / chatops framework.

edit: I had keel.io which is some random company. The site I wanted was keel.sh


> Don't do it by hand, ever

> Kubectl is the new SSH. If you are using it to update production workloads, you are doing it wrong.

I'll admit: I'm sold.



spam? Link is about portfolio analysis.


I'm not spam! Also keel is completely free and open source!

Edit: I now see what happened, fixed.


Sorry, is the link wrong?


At GitLab we're building an automated workflow called Auto DevOps that takes you from zero to k8s.

Some of the interaction points between Auto DevOps and Kubernetes are:

- deploying review apps

- deploying production applications

- supporting deployment of staging and canary environments

- monitoring performance

You can also:

- easily create a cluster on GKE through the GitLab UI

- easily install "helper" applications to set up the cluster

- easily install GitLab Runner to run Auto DevOps jobs on the cluster

You can read more about it here: https://docs.gitlab.com/ee/topics/autodevops/



