Hacker News
Tanka: Our way of deploying to Kubernetes (grafana.com)
223 points by robfig 17 days ago | 119 comments



In our experience running production workloads on k8s for over three years, templating (helm) and structured editing approaches both have their place, and both are valuable. We don't feel the need to replace declarative approaches with another imperative language, or to use complicated helm charts for straightforward service deployments.

There are many ways to classify workloads, but one big distinction that we find valuable is between stable infrastructure components and our own rapidly deployed services. The former have complicated configuration surfaces but change relatively rarely, while the latter are usually much simpler (because microservices) and change daily in many cases.

We find helm works very well for the infrastructure pieces. Yes it's complicated and easy to get wrong, but so are most other package management systems. Charts for complicated things can be quite dense and hard to comprehend. See the stable prometheus chart for an excellent example. But once the work is done and as long as there is commitment to maintain it the results can be pretty awesome. We use helm charts to install and upgrade prometheus, fluentd, elasticsearch, nginx, etcd and a ton of other tools. Yes we've had to fork some charts because the configuration surface wasn't fully exposed, but they are a minority.

For our own services charts are overkill. They're hard to read, and crufted up with control structures and substitution vars. Essentially all of our microservices are deployments with simple ingress requirements. We currently use kustomize to process base manifests that live in the service repos and combine them with environment-specific patches from config repos. Both are just straight yaml and very easy for backend developers to read and understand, and different groups (i.e. devops, sre, backend dev) can contribute patches to the config repos to control how the services are deployed.
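As a rough sketch of that pattern (file and service names are hypothetical), the base in the service repo plus an environment overlay in the config repo might look like:

```yaml
# base/kustomization.yaml (in the service repo)
resources:
  - deployment.yaml
  - service.yaml

# overlays/prod/kustomization.yaml (in the config repo)
bases:
  - ../../base
patchesStrategicMerge:
  - replicas-patch.yaml

# overlays/prod/replicas-patch.yaml: plain yaml, easy to review
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
spec:
  replicas: 5
```

Rendered with `kustomize build overlays/prod` (or applied directly with `kubectl apply -k overlays/prod`).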

Bottom line: if you're going all-in on kubernetes, which you really need to do to get the most benefit from it, then you're going to need more than one approach to deploying workloads.


Totally agree. Helm is difficult enough to get up to speed with for engineers without prior experience with Kubernetes. Anything more complicated (and ksonnet was significantly more complicated, which is one of the reasons why it died out) is just not scalable. Helm is good enough and brings a decent balance between usability and complexity.


For anyone curious, this is what the prometheus chart looks like (I think):

https://github.com/helm/charts/blob/master/stable/prometheus...


That's the values file. The actual manifest templates are just as horrifyingly impressive :).

https://github.com/helm/charts/tree/master/stable/prometheus...


One of the main problems with helm IMO is that it effectively turns the values.yaml file into a structure that almost mirrors the manifests themselves, and the biggest problem with that is hidden in the word "almost": there are many subtle ways this mapping can be done, including subtle or not-so-subtle field naming/spelling differences.
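A toy illustration of the "almost" (chart and field names invented here): the chart author decides what each knob is called, and it only approximately matches the manifest field it feeds:

```yaml
# values.yaml of a hypothetical chart
server:
  replicaCount: 2        # rendered into spec.replicas
  extraLabels: {}        # rendered into metadata.labels
  nodeSelector: {}       # passed through under the same name

# another chart might call the very same knobs
# `replicas`, `additionalLabels`, `scheduling.nodeSelector`, ...
```

So you learn each chart's private vocabulary for the same underlying Kubernetes fields.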


Yeah absolutely, the multiple layers of indirection you can end up with are frustrating for back end engineers trying to understand how a thing got deployed the way it did. For our stuff just patching the environment in as a yaml block is way more comprehensible. But then when you want to let people one-line a complete elasticsearch cluster install helm has that power if you're willing to commit to it.


I found my personal sweet spot in kubecfg, which allows me to override the fields I need (without having to know which ones in advance), and when needed it can do more advanced stuff such as variables, expressions, loops, etc., which allows one-line complete elasticsearch-cluster-like scenarios.


I spent about 6 months using helm and built 20+ charts for our services.

In the end we got rid of it and replaced it with Terraform. If your infrastructure is 100% kubernetes then I think helm is great. Our infrastructure is not. We have databases, dns, buckets, service accounts and more so we were splitting our setup between terraform and helm. Passing data between the two tools was going to be a pain. We follow a layered approach of building up the infrastructure.

1) Networking: DNS

2) Secrets, service accounts, buckets

3) DBs

4) Pre-application config (istio)

5) Services

Semi-related things are together and all of those cloud provider values we need are saved as secrets. We are on GCP so that means we need things like service accounts to access GCP resources (buckets, cloudsql) and all of those variables are available to our services to pick up.

And Terraform has STATE. This is unbelievably valuable when doing continuous delivery as you can tell what changed on every deploy, and deploys are FAST. One thing that really bugged me about helm was that determining if a deploy failed was a post-helm event. We were going to have to write monitoring for service health/uptime on deploy. This is not hard at all, but you get it for free with terraform. If a service fails to start, terraform will throw an error...

I don't think people know that Terraform has a kubernetes provider. It does not support all the alpha objects but has decent support for 99% of the things you need. I wish someone made a provider for istio virtual services and service entries.
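For reference, a minimal Deployment through that provider looks roughly like this (names are made up; the nested block schema mirrors the k8s object, which is part of the appeal):

```hcl
resource "kubernetes_deployment" "api" {
  metadata {
    name      = "api"
    namespace = "default"
  }
  spec {
    replicas = 2
    selector {
      match_labels = { app = "api" }
    }
    template {
      metadata {
        labels = { app = "api" }
      }
      spec {
        container {
          name  = "api"
          image = "example.com/api:v1" # hypothetical image
        }
      }
    }
  }
}
```

Because it lives in terraform state, `terraform plan` shows a field-level diff on every change, which is the "what changed on every deploy" property described above.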


Why don't more people use Terraform? I think Terraform is amazing. I wanted to make containerpen.io for docker-compose + dockerfiles + terraform files to compete with codepen.io


Because the DSL is complicated and you encounter weird edge cases even for not so complicated things.

Common (almost ubiquitous in my circles) and often reliable are things I would say about it; painless is not. I was a bigger fan until I spent a month using it daily, focused on one project more than anything else.

I feel this strongly enough I'm looking at AWS CDK and troposphere, likely to learn I suck at writing my own imperative style terraform and to be more thankful. Lol


My experience with Terraform was suboptimal. At least with GCS, it would do weird things and mess up permissions.

I found the documentation and its special syntax a pain. Just give me json with json schemas defined somewhere so vscode can give me great code completion and I can templatize as necessary.


> Just give me json with json schemas defined somewhere so vscode can give me great code completion and I can templatize as necessary

Is there something that does this in the wild?


We bootstrap tf with a short go script, then use terraform until the cluster runs. Then argocd takes over.

I really don't like how TF holds its state; it is error-prone. Just the way you have to keep tf versions in sync feels wrong.

The k8s support did not take us far. We had some kubectl scripts and it doesn't feel like a first-class citizen. I wouldn't bet on tf for k8s.

I do love argocd. Great product.


I do find that state management is a bit of a pain, which is why I have the layered approach listed above. By keeping components/modules separate I am able to update things without worrying about breaking others. Each namespace has its own state, while the kube cluster has its own state too.

I find state to be invaluable when a deploy of anything fails.


Curious about what is deploying your services, are they still containerized or are you using VMs? Any ansible along with it?


All of our services are containerized and running in kubernetes.


Very interesting. I've been reluctant to adopt Helm for Kubernetes resource management because of a gut feeling that it's a heavyweight solution for what seems broadly like a templating problem.

With ksonnet having gone quiet[0] this looks like a promising initiative.

I'd imagine that it'll need something like a package manager (or at least a curated list of common packages) in order to gain good adoption.

[0] - https://blogs.vmware.com/cloudnative/2019/02/05/welcoming-he...


Grafana Labs employee here. We've been using jsonnet-bundler as a package manager on top of our Tanka configs. As for some common packages, we open source most of ours[1]. We also provide Tanka-compatible configs for Loki[2].

[1]: https://github.com/grafana/jsonnet-libs

[2]: https://github.com/grafana/loki/tree/master/production/ksonn...


Thanks a lot! I'll give these a try soon and provide any feedback via github.



Thanks! I had briefly used kustomize in the past, and it looks really useful for creating alternative dev/staging/production environments from a common base.

In the end I decided that I'd collapse all environments down to behave identically, which is simpler, but does add a few constraints for development in particular.

Will take another browse through while considering options for upcoming infrastructure :)


The problem with kustomize for us monitoring/observability people is that it is limited to templating Kubernetes objects only. We also want to template a lot of other things, for example Prometheus configuration. Jsonnet can bridge the gap between the two worlds and, as an example, generate a ConfigMap YAML file that embeds another YAML file for Prometheus.


I tried to use kustomize recently but could not figure out how to write a Job/CronJob properly. I wanted to use the configMapGenerator/secretGenerator with prod/stage/dev.


I'd also recommended taking a look at using Terraform[1] to manage Kubernetes resources instead of yaml files and kubectl.

[1] https://www.terraform.io/docs/providers/kubernetes/guides/ge...


I agree. Terraform and its providers are unbelievably powerful for deployments. Being able to create any sort of resource and pass that information on to your kubernetes service is really great.

An example would be creating a dynamically named S3 bucket and passing it on to your service to use/manage. The same goes for anything needing credentials. Very powerful.
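A sketch of that wiring (resource names invented): the bucket name only exists after Terraform creates it, and flows straight into a Secret the service can consume:

```hcl
# create a bucket with a generated suffix...
resource "random_id" "suffix" {
  byte_length = 4
}

resource "aws_s3_bucket" "uploads" {
  bucket = "uploads-${random_id.suffix.hex}"
}

# ...and hand its name to the service as a Secret
# (the kubernetes provider base64-encodes `data` values on write)
resource "kubernetes_secret" "uploads" {
  metadata {
    name = "uploads-bucket"
  }
  data = {
    BUCKET_NAME = aws_s3_bucket.uploads.bucket
  }
}
```

No hand-off step between tools: the dependency graph carries the generated name across.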


I usually just deploy GKE clusters, node pools, AWS Route 53, S3 buckets, etc. in terraform. Then I take the terraform output and convert it to command line options that I pass to helm when deploying to the kubernetes cluster.

I'd be very tempted to just use terraform. But helm, with our forked charts, extensive values.yaml files, etc., has permeated the deployment.

So... rather than try to break up the helm beast, I leave it and keep the separation of concerns (infrastructure vs k8s) as described at the beginning. It works pretty well, to be honest, and is rarely the source of problems.


Something that really helped me convert our helm charts to terraform was this little tool that converts yaml to terraform HCL. I would just render out the helm template and convert it.

Note: it uses the kubernetes terraform provider and only supports those objects

https://github.com/sl1pm4t/k2tf/



Interesting, this is similar to jsonnet. Does it support deep merging?

For example, in our case we do `default_deployment.libsonnet`, to which we add arbitrary overrides like `default_deployment + dev` or `default_dep + us_central1_overrides` depending on the cluster. And in these cases we don't need to pre-specify what can be overridden in the `default_deployment`, which makes it really powerful (it can also come bite you in the arse though).
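For readers unfamiliar with Jsonnet, the overrides described here hinge on `+` for object merging and `+:` for patching nested fields without replacing them; a rough sketch (contents invented):

```jsonnet
local default_deployment = {
  spec: {
    replicas: 1,
    template: { spec: { containers: [/* ... */] } },
  },
};

local us_central1_overrides = {
  // `+:` merges into the nested `spec` instead of replacing it,
  // so `template` is preserved
  spec+: { replicas: 5 },
};

default_deployment + us_central1_overrides
```

Nothing in `default_deployment` had to declare `replicas` as overridable, which is exactly the "no pre-specified knobs" property.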


As far as I understand Dhall, you have the choice between writing specific types in order to have a safe, actually decent to use interface (which is kind of the point of Dhall), or you can just use a key-value map in order to cut corners while writing.

But honestly, if you're going for the latter, might as well just use Kustomize and accept the chaos.


I'm not sure how it would work with this, since I've never used it, but Dhall does have some kind of recursive merge operator that might work https://docs.dhall-lang.org/references/Built-in-types.html#i....

You might have to overload it somehow to make it work the way you're describing though.
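To sketch the difference (record contents invented): Dhall's `//` is a shallow prefer-right merge, while the recursive `/\` can add nested keys but errors on colliding non-record fields, so it can't override scalars:

```dhall
let base = { replicas = 1, labels = { app = "web" } }

-- `//` is shallow: replaces `replicas`, keeps `labels` untouched
let patched = base // { replicas = 3 }

-- `/\` merges records recursively, but `base /\ { replicas = 3 }`
-- would be a type error because the non-record field `replicas` collides
let deep = base /\ { labels = { tier = "frontend" } }

in patched
```

So neither operator alone gives the "patch any nested key" behavior described above, matching the "overload it somehow" caveat.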


This is a great addition to the ecosystem of Kubernetes application management tools https://docs.google.com/spreadsheets/d/1FCgqz1Ci7_VCz_wdh8vB...

I really hope we'll get to a dominant standard soon. But this subject is much more complex than I thought https://github.com/kubernetes/community/blob/master/contribu...


The shortcomings for Helm are relatively spot on, but I feel like the ship has sailed for tools that aren't Helm based. The ecosystem partners (and thus, end users) have rallied around Helm charts as the defacto manifest format, so a tool that doesn't understand Helm charts will not see a lot of adoption. Are there any plans for Tanka to support importing existing Charts?


(Tanka dev here)

Nobody can deny the power of helm charts (because so many already exist), so I think Tanka will add support soon.

Even though we are focused on Jsonnet right now, this does not necessarily need to stay that way forever.

Grafana for example is popular because it supports multiple datasources; Tanka should probably do the same (e.g. Jsonnet, CUE, Helm, whatever results in JSON)


There's another jsonnet tool called qbec that can do that - https://qbec.io


Agree, love qbec so much ;)


Same!


I'd like to request that nobody else make any more damn infrastructure tools that require writing code in order to use it, or require reading six manuals. I don't want to spend the rest of my life writing and editing glue and cruft, or spending two weeks researching and writing elaborate config files by hand just to make some software run.

It's like the infrastructure version of fine woodworking - building dovetails and screwless joints by hand, using chisels and hand planes and card scrapers and shit, to build a box. It may be "fun", but it's also needlessly complicated and time-consuming. Give me the power tools, pocket hole jigs, torx screws, nail guns, square clamps. Yes, the dovetails will make a more sturdy box - but do you need a box with dovetails? Probably not.


I'm with you in that these tools are incomplete and insufficiently abstract - what would you like instead? Just a sufficiently robust GUI or CLI surface area?


For example, let's say I want something to "build infrastructure". That's really way too generic to be useful.

What I actually want is something to create a cluster in AWS to run my app, or "create me a Fargate ECS cluster using R53, ACM, ALB, ECR, Lambda, CodeBuild, CloudWatch, SG, VPC, and IAM". If I feed that thing my AWS account's root credentials, it should first go through a list of default variables, explaining each to me and why I might want to change them. Then it should just create all of the above for me, and probably save it all as code. Next, I create a repository with my app and a Dockerfile. I then run a "hook up my git repo webhook and deploy my app right now" script, passing it my github creds. That should create a webhook into CodeBuild where my app is turned into a container, pushed to ECR, and then deployed into the Fargate container, as well as creating any CloudWatch alerts needed.

That's what I call a "product solution": an opinionated, complete solution that does everything I need for me, with a very light user interface and guidance on how to use it. Probably 95% of the above is two custom Terraform modules and some glue, and I should only have to answer like three questions by default.

If Terraform itself shipped with the above complete solution, that would be what I'm looking for. But I'm not aware of a catalog of solutions like that. Occasionally random people will publish partial modules on GitHub, but those usually need more modules and glue to actually work. So I'm basically looking for "no assembly required" solutions with the option to modify them later.


Got it. Yeah my primary complaint about most of the components I've interacted with recently has been too many knobs, exposed through YAML, with either too few opinions or insane opinions.

What's your example of an infrastructure tool you like?


My 2020 prediction: k8s stacks are the new JS frameworks.


Kustomize already won this battle; it's now built into kubectl (-k flag instead of -f). Kustomize is a joy to use, whereas helm makes me want to smash things.


What are folks' thoughts on CUE these days? Anyone using it for serious configuration yet?

https://cuelang.org/

It’s designed by the BCL/GCL author as a replacement (Jsonnet is apparently a copy of BCL/GCL)


Maintainer of jsonnet-bundler, kube-prometheus and some monitoring mixins, that are all based on jsonnet, here:

Currently we're mostly keeping a close eye on CUE, but not really using it as of right now. However, during the holiday break I've been trying to get into CUE again, and there are some things I need to figure out before being able to tell how to incorporate or replace some of our jsonnet projects with CUE, if we really want that.

Some parts of CUE seem like an obvious improvement to what jsonnet currently offers. So 2020 will be exciting in that regard.


On the same track as well! CUE looks exciting and we might very well add it to Tanka when it proves to be useful.

We have chosen Jsonnet because it already had an ecosystem and served us well, but Tanka is open to other languages as well


It's still early, but incredibly exciting. I personally am betting on CUE as the "winner" in this space. Its creator is enormously credible. Most of the other configuration languages in this space are directly inspired by his work. So for him to work on something new is significant.

I am also impressed by the clip at which CUE is improving, and how useful it already is, in spite of its relatively young age. It reminds me of Python or Go in its focus on tooling quality, and stability.

Their community slack is very welcoming, too. https://cuelang.slack.com


For those who don't already have an account on that slack, then going to https://cuelang.org/community/ appears to be the place one will find the magic invite link: https://join.slack.com/t/cuelang/shared_invite/enQtNzQwODc3N...

off(but actually on)-topic: I can't wait for the use of slack for open source communities to die in a fire, not only because the onboarding experience is horrible, and search is a mess, but about 75% of them appear to be the non-paid flavor so old messages are just held for ransom


I completely agree with you on slack for open-source projects. Especially with Mattermost being so good, there’s no good reason for a project to stick to Slack.


Even Mattermost doesn't offer free accounts to open source projects[1], and I think it's the _hosting_ that keeps projects off of almost any of the open source solutions. But one of the principals of Zulip almost always weighs in on discussions to point out that they have free hosting for any open source project that wants to use it: https://zulipchat.com/for/open-source/#free-hosting-at-zulip... and seem to even offer "import from slack" to make it less painful to switch.

1 = https://mattermost.com/nonprofit/ asks for a $390 "setup fee" and only for 1,000 users


I am using it for k8s deployment and overall am quite happy that I can safely configure deployments without boilerplate. The k8s integration makes it easy to get started with, including using existing yaml configuration. Certainly though you have to invest some time to learn it and I have discovered a few rough edges.

I haven't seen another appealing solution in the space. I recommend using Kustomize if you can, but it is very limited in what you can do with it. When I looked at dhall-kubernetes it was lacking some crucial defaulting features that are getting integrated into the language now.


I don't know why they have to invent something new. If you're just managing kubernetes manifests, kustomize is already good for this task and simple to start with.


Hey we've been using ksonnet since before kustomize existed and we've just rewritten it to be more flexible with much more focus.

Further, we use jsonnet a lot (including for generating dashboards) and in general have found it much more powerful and useful compared to plain YAML. From a cursory glance, I can't tell whether kustomize supports jsonnet.


I didn’t know Jsonnet is a thing, maybe I’m too familiar with YAML, and JSON isn’t my cup of tea. Glad it works for you.


Despite its name, Jsonnet can generate JSON, YAML, INI, and other formats.


I evaluated a lot of these templating solutions about a year ago. We ended up going with jsonnet and kubecfg as the latter was pretty simple.

Helm felt okay for PnP, but I want to have an explicit understanding of what I’m deploying for infra, and it seemed to abstract too much away.

Kustomize seemed too rigid.

Ksonnet seemed too magical, although I didn’t deeply look.

I still don’t love using jsonnet, as I can’t seem to find full language documentation even on the website for it.

How might this compare to kubecfg to those who might be familiar?


Regarding Jsonnet:

1. Documentation is bad. We know that and work on improving that. Some resources that might help:

- https://tanka.dev/jsonnet/overview: Our own docs include some notes about Jsonnet in general for newcomers

- https://jsonnet.org/learning/tutorial.html: Taking the time to read this entire page opens eyes. Annoying and time-consuming, I know, but worth it.

Regarding Ksonnet and Kubecfg:

1. Ksonnet was magical. Tanka hopes to be Ksonnet without the magic. We got rid of all of those concepts, parameter merging and whatnot. You have Jsonnet and Tanka, a tool that pushes Jsonnet to Kubernetes. That's it. (ok, you also get a lot of handy features like CLI completion, diff and other things to make your dev experience better)

2. Kubecfg is similar, but has a smaller scope. It evaluates Jsonnet and pipes this to kubectl (basically). At the time we started Tanka, kubecfg was, by the way, part of the deprecated Ksonnet project, so we assumed it dead as well. Luckily it's not, as it is a very cool project that inspired Tanka a lot.

Tanka, after all, aims to be like the `go` command: the one command you need to manage your entire complex kubernetes clusters. Also, Tanka is not strictly limited to a single language. For now we focus on Jsonnet, but more may come in the future


This makes me super happy; jsonnet is something that should be used by default in Kubernetes.

Hope this project gets a lot of traction!


This looks pretty similar to qbec https://github.com/splunk/qbec


Tanka & jsonnet-bundler also work really well with Prometheus monitoring mixins, meaning we bundle up and share almost all the internal monitoring that we use at Grafana Labs to monitor our massive Cortex, Loki, Metrictank and Kubernetes deploys.


Thanks for sharing, I will try it and give you feedback.

What I am doing for my env clusters is to have a versioned production yaml that acts as a source of truth. Then if I need an env (regions, customer, dev, prod, feature, etc.) I take that source of truth, apply a transformation (usually a node script or bash... depending on the kubernetes entity) and then apply the resulting transformed yaml. Basically it is: versioned production => transform => new env definitions

Do you have any recommendations/high-level thoughts on how to integrate or substitute Tanka in this approach? What are the downfalls that you see with it?

Thanks.


Downfalls of a bash-based approach: you need to maintain it, and bash is hard to debug, especially when the house is on fire (production outage, etc)

Integrating should be quite straightforward. Install Tanka, create a new project (tk init), copy your source-of-truth YAML (without transformations) somewhere under lib/ (for example lib/foo).

Then, in lib/foo/foo.jsonnet, import each of those yaml files:

   {
     foo: {
       deployment: import "./deployment.yaml",
       service: import "./service.yaml",
     }
   }
In environments/default/main.jsonnet:

  (import "foo/foo.jsonnet") + {
    // patch environment specific things here
    // https://tanka.dev/tutorial/environments#patching
  }
Then use `tk show` to verify it works.

Furthermore, follow the tutorial to get an in depth understanding of Tanka: https://tanka.dev/tutorial/overview


Thank you. I saw the prom-grafana example in the repository; I think that example focuses on the standalone functionality of Tanka.

Do you know if there is an example or open source cluster using Tanka that I can try on minikube or a test cluster?

It would be really helpful to see what the workflow looks like moving from feature to feature, what the envs are like when there is a bug, how you replace the volume info from the cloud provider with minikube's, and all the considerations of the patched envs.


We're in the process of evaluating tools to get away from 90% identical yaml files across environments and this seems like a good alternative to kustomize or helm.

Do you have a good pattern on how to use it with CI/CD for deployments? The biggest challenge we've had after writing deployments is getting it setup to work with something like Jenkins (right now we have a custom bash script that does a bunch kubectl things).

(PS any way this would help with static IPs on hosted Grafana.com Cloud to make access to firewalled datasources easier?)


For CI (testing) we use tekton [1] to run tests and pass around artifacts inside k8s. You can kick off and monitor tekton builds with their CLI tool, but we ended up building our own create/monitor/download-artifacts tool for it in ruby. We use an off the shelf CI server to kick that tool off, and it dumps back results the CI server can understand.

One of the things we need to do is elastically scale the number of tasks (basically pods in tekton) that comprise our test suite run. This might be based on cluster utilization or whether it's the master branch. Since we have a single threaded test suite, we hoist parallelism up to the k8s level by breaking apart the tests into partitions each run by its own pod. For this we just render processed and parameterized erb to yaml. Eventually we'll dispense with yaml altogether and programmatically construct resources using a k8s REST api client.

We haven't moved into CD with any of this tooling yet.

[1] https://github.com/tektoncd/pipeline


(Tanka dev here)

I think this highly depends on your deployment process... do you want full continuous deployment (CI deploying to the cluster)? In this case you could continue to use, for example, Jenkins to run `tk apply <environment>` on each merged PR.

Another option would be to use an in-cluster CD agent (for example https://fluxcd.io/), which uses Tanka to generate the yaml and applies it. Flux can be used with Tanka, needs some setup though: https://docs.fluxcd.io/en/1.17.0/references/fluxyaml-config-.... I guess we could simplify this in the future.

Feel free to reach out to me on Slack http://slack.raintank.io/ in the #tanka channel :D


Here’s one example, not using any of these “templating turned programming languages” tools:

We build an api and a cli util for these things.

The api takes care of providing sane deployment manifests based on data from a service registry, which is populated using the cli.

The cli can be run non-interactively for use in pipelines and other automations.

The cli and api basically do any config- and deployment-related tasks in concert with service discovery (which at this time is consul).

Edit, rant: For automations json just owns yaml in all the ways!

This yaml templating business should stop! I think it’s silly tbh. :)

Our base k8s setup consisted of roughly 40k lines of yaml. Yaml is awesome for 10 lines of user-input configuration, not 100, let alone 40k lines of machine-printed stuff. What happened here in this brave new DevOps world?!


For integrating k8s deploy into CI / CD, I really like Krane [0]. It has several key features:

  1. A way to templatize YAML (though it could be used with Tanka too - the render command is split from the deploy command for this reason)
  2. Monitors the rollout of resources - it's possible to detect successful and failed deployments more easily.
  3. A way to run one-off commands during deployment.
  4. Uses kubectl under the hood so easy to add to any workflow.
It has other features that I don't use including secrets management.

Net net - I definitely think it could elegantly replace a bunch of bash scripts that do kubectl things.

[0] https://github.com/Shopify/krane


This won't be a popular opinion/implementation.

I'm on dot net. And although I can deploy as microservices (clean architecture with core, application, infrastructure and api), I integrate the api into my app (e.g. add the api dll), so my app does the provisioning like a monolith.

It exposes all api controllers by default.

Messaging is always internal then (domain vs integration).

Overhead is practically none.

If I have a heavy component/api, I can split out that API and put nginx in front of it for routing, and nats for integration events.

So basically I have a DDD app from the beginning, with the strangler pattern already in place for scaling purposes, although none of my apps need scaling right now.

I can also do every deployment myself, and more easily, since I don't have any deployment complexity currently.

--

What I don't have is a stack that is language agnostic from the beginning. But it could be, using the same method as for scaling, with nginx.

It seems that I have the best of both worlds at the beginning.

- maintainability by forcing DDD

- minimal devops

- testability

- no service mesh overhead (e.g. Consul brings a 30-50ms average overhead; I finish most of my requests in 8-12ms)

- fast development (slower than a monolith, much faster than microservices)

While scaling could be refactored within the day if an insane amount of requests came in (see: refactoring).

Most microservices are fixed within a single language though. So that's not a concern currently.

The added benefit is that I have insane custom implementation options.

I just need to change the infrastructure in a deployment to use a client's database as a source if a component needs it.

(E.g. an order service for a webshop: I can easily integrate with a client's existing magento for a niche of their shop.)

TLDR: I currently don't have a devops overhead. I'm too small for that, I'm glad though.

--

If anyone thinks that isn't a good solution for my use-case (small dev shop) or has any better ideas, please share ;)


We use jsonnet at my workplace for all sorts of generated configs, not just k8s configs. I cannot recommend jsonnet enough: a simple and powerful tool.

Jsonnet is a godsend. Don’t use a string templating language for structured data like yaml/json. Use an object templating language like jsonnet. You’ll start to love life again.

We had used mustache templates before and it was a PITA.
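To illustrate the object-vs-string templating point with a hypothetical helper: in Jsonnet you build and reuse the data structure itself, instead of interpolating text:

```jsonnet
// a function that returns a complete Deployment object
local deployment(name, image, replicas=1) = {
  apiVersion: 'apps/v1',
  kind: 'Deployment',
  metadata: { name: name },
  spec: {
    replicas: replicas,
    selector: { matchLabels: { app: name } },
    template: {
      metadata: { labels: { app: name } },
      spec: { containers: [{ name: name, image: image }] },
    },
  },
};

// images are made-up examples
{
  api: deployment('api', 'example.com/api:v1', replicas=3),
  worker: deployment('worker', 'example.com/worker:v1'),
}
```

There is no quoting or indentation to get wrong: the output is always well-formed, which is what string templates like mustache can't guarantee.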


Anyone remember the great rebellion of 2011 against XML? Strangely reminiscent.


Yes, and as an imperial user it is ironic to see the rebels rebuilding the XML infrastructure in their own formats.

Apparently the imperial features are there for good reasons.


At least people were pushing back then; this YAML disaster instead comes with plenty of hype.

And it's even more brittle...


I’m a little scared to migrate some of our microservices off VMs and onto k8s (because our deployment story isn’t great). However there doesn’t seem to be a lot of consensus around how to do _anything_, even with a green field.

A dozen microservices, several diff DBs, and some large stateful datasets, all supporting a basic Rest API in the end. What tools would you choose nowadays?


This article seems to have a major factual error. YAML does support “Repetition” with anchors. Am I missing something?


Anchors are fairly limited:

- they are bound to a single file. This won't help you when trying to maintain multiple similar sets of config

- anchors do not support patching. If you need to change a nested key in one usage, you can't do so without it affecting all other usages as well


Anchors also don't work with a list of elements.

For instance, if we want to share steps in a YAML-based CI config, we can't if they are a list.
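A quick illustration (standard YAML anchors plus the common `<<` merge-key extension; the step and job names are made up):

```yaml
# anchors + merge keys work for mappings:
defaults: &defaults
  image: alpine
  retries: 3

job_a:
  <<: *defaults     # merged in...
  retries: 5        # ...and individual keys can be overridden

# but there is no equivalent for sequences:
steps: &steps
  - checkout
  - build

job_b:
  steps: *steps     # you can only reuse the whole list; there is
                    # no way to append to or patch its items
```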


I found that odd too. Typical examples like https://confluence.atlassian.com/bitbucket/yaml-anchors-9601... are pretty easy to map to their example of why they need repetition.


Naive question from someone who doesn't know the ecosystem well:

It seems to me like Terraform is good at describing desired deployment shapes and detecting drift between actual state and desired state.

Can someone clue me into why Terraform hasn't caught on as the abstraction above/that drives K8S?


Kubernetes is very good at keeping track of state and maintaining it. You "just" tell it the state you want and it makes sure it becomes reality (this is the whole point of k8s actually). Terraform is doing the same thing, just for imperative (instead of declarative) APIs (like public clouds, etc).

The issue with Kubernetes is more expressing the state. Kube uses YAML, which quickly becomes verbose and hard to maintain. More on our blogpost: https://grafana.com/blog/2020/01/09/introducing-tanka-our-wa...

Tanka is trying to solve this issue by providing a more powerful language that hopefully overcomes these limitations.
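Roughly, instead of hand-writing YAML you call library functions. A sketch in the style of the Tanka tutorial (import paths and exact signatures may differ between versions):

```jsonnet
local k = import "ksonnet-util/kausal.libsonnet";

{
  local deployment = k.apps.v1.deployment,
  local container = k.core.v1.container,

  // one function call instead of dozens of lines of YAML
  grafana: {
    deployment: deployment.new(
      name="grafana",
      replicas=1,
      containers=[container.new("grafana", "grafana/grafana:6.5.0")],
    ),
  },
}
```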


My last company used Terraform to manage Kubernetes. The main issue is that the TF Kubernetes provider supports a limited subset of K8S object types, and of fields within those K8S objects. For example: TF didn't even support Deployment objects until sometime in mid/late 2019 (I may be wrong on timing, but it was long after they were the primary method for general scheduling of long-running containers).

We ended up using TF's Helm provider, sometimes with hacks like a helm chart which deploys an arbitrary YAML file (the so-called "raw" chart). At that point, Terraform is blind to what's actually happening inside K8S. You can still benefit from the ability of TF to pass data from your other infra automation into the Helm charts, of course, but it's really Helm actually managing the configuration of your K8S cluster. And that's the app we all love to hate.

The situation may have been improved, but my conclusion was that it would always be a somewhat incomplete interface.


The Terraform provider has caught up a bit in the last 6 months. It is still missing things like CRD support.

For those things we use a direct kubectl yaml provider.

I wish there was an istio provider!


Terraform is really good at describing how infrastructure should be provisioned (VMs, load balancers, dns entries, networking, etc). Provisioning software on a VM and keeping it in a consistent state, however, is not something it's very good at. Userdata is very difficult to do anything complex with (limited size payloads, optimized for uploading a single shell script), and the provisioner system is explicitly described as a "last resort". This makes Terraform not so good at describing how software should be provisioned.

There is a bit of a movement, however, behind using it to deploy software by pairing it with Packer. You use Packer to create e.g. an AMI whose sole job is to run your software (like a Docker container), then use Terraform to launch a bunch of EC2 instances that have juuuust enough resources to effectively run your software. That'd allow you to eliminate k8s from your stack, though it remains to be seen which stack would be more cost-efficient to run on.
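The Terraform half of that pattern looks something like this (a sketch using the AWS provider; the service name, tag, and sizing are made up):

```hcl
# find the AMI Packer baked for this service (tag name is illustrative)
data "aws_ami" "orders" {
  most_recent = true
  owners      = ["self"]

  filter {
    name   = "tag:service"
    values = ["orders"]
  }
}

# launch just-big-enough instances running that image
resource "aws_instance" "orders" {
  count         = 3
  ami           = data.aws_ami.orders.id
  instance_type = "t3.small"
}
```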


I do not understand what you mean by "how your software should be provisioned"?

I have about 40 kubernetes services all as modules using the kubernetes terraform provider. I think I have 1000+ pods running on our one cluster all deployed through terraform.

It works very well because I can chain infrastructure resources into my service deployments. For example, I can create a dynamically named bucket and pass the name of that bucket as configmap/secret into my service to use.
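Something like this, assuming the official AWS and Kubernetes providers (resource and key names are illustrative):

```hcl
# dynamically named bucket...
resource "aws_s3_bucket" "uploads" {
  bucket_prefix = "orders-uploads-"
}

# ...whose generated name flows straight into the service's ConfigMap
resource "kubernetes_config_map" "orders" {
  metadata {
    name = "orders-config"
  }

  data = {
    UPLOAD_BUCKET = aws_s3_bucket.uploads.bucket
  }
}
```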


Check out https://www.terraform.io/docs/providers/template/d/cloudinit... if you want to do something more complex than a single shell script.


This is interesting... not heard of this. Any pointers/links to more info on this?


> 1. Repetition: If information is required in multiple places (dev and prod environments, etc.), all YAML has to offer is copying and pasting those lines.

Actually, YAML has anchors and aliases, which helps a lot when the same thing needs to be reused in several places.


But only as long as it stays in a single file :(. And it doesn't support arrays either.


I’m always wondering if templating is a good approach for solving this problem vs writing a program that generates the api object descriptions for you.


Well, Helm did templating.

Tanka does generating, because it seems more robust: the tool understands the output (instead of doing string substitution on a fragile syntax).

See the docs for more details about generating: https://tanka.dev/tutorial/k-lib
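The core idea also works in plain Jsonnet without any library (a sketch; the field names follow the Kubernetes apps/v1 Deployment schema, the service names are made up):

```jsonnet
// a hand-rolled generator function instead of a string template
local deployment(name, image, replicas=1) = {
  apiVersion: "apps/v1",
  kind: "Deployment",
  metadata: { name: name },
  spec: {
    replicas: replicas,
    template: { spec: { containers: [{ name: name, image: image }] } },
  },
};

{
  // prod patches only the keys it cares about, via object merging
  api: deployment("api", "api:v1.2") + { spec+: { replicas: 5 } },
}
```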


A pretty simple alternative is https://cuelang.org/


Lua in helm3 seems interesting, but it's not ready for prime time yet. That makes me explore other options, because helm's limited template reusability is painful. Jsonnet seems to be working for several companies, as does kustomize. I'm still looking for something simple to template my manifests for different environments.


This seems interesting, but would have liked to see dashboard and chart configs cleaned up. Grafana's json configs have the same issue. I have a dashboard for one project. The json is over 13k lines long. Less than 5% of that is unique.


You could use Tanka and Jsonnet to improve this for Grafana dashboards: https://github.com/grafana/grafonnet-lib.

Would that work for you?


Thanks a lot, I'll check that out. I did notice that there were some third party libraries that added functionality that I wanted, but it is nice to see a library from Grafana.

I would really like first-class support for templating in Grafana though. My desired workflow would be to use GitHub as the templates' source of truth. Then a DSL on Grafana's end would regenerate the actual config whenever the template was changed. The web editor could provide a fallback method of making changes if it could commit to source control. For instance, if I saw a cool demo of graph functionality online, I could implement it in the UI and then dig into the autogenerated commit in git. That user flow works really well with TeamCity and their Kotlin DSL.

Edit: TeamCity gets around the style issues of autogenerated code by submitting changes from the UI as patches. First the Kotlin DSL is run on the base config, then patches are applied. That way you are able to rewrite an autogenerated project config in the style and approach you want, and any UI-generated changes will be limited to one folder.


So, let me share what we do internally at Grafana Labs:

General:

1. We run everything on Kubernetes

2. We configure it using Tanka, keep all Jsonnet in git

3. Changes are done using pull requests

---

Grafana (the software) related:

1. We use provisioning: https://grafana.com/docs/grafana/latest/administration/provi.... This means dashboards are kept in .json files on the filesystem and loaded on startup

2. Those dashboard json files are ConfigMaps

3. Those ConfigMaps are created using Tanka

4. The content of those ConfigMaps is created using grafonnet-lib

This means our dashboards are source-controlled! A change to a dashboard is reviewed, merged, and automatically deployed (the ConfigMap is changed, Grafana restarts, picks it up, and done!)
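Steps 2-4 could look roughly like this (a sketch using grafonnet-lib; the dashboard title, panel, and names are made up, and the exact grafonnet signatures may differ):

```jsonnet
local grafana = import "grafonnet/grafana.libsonnet";

local dashboard =
  grafana.dashboard.new("API overview")
  .addPanel(
    grafana.graphPanel.new("Requests/s", datasource="Prometheus"),
    gridPos={ x: 0, y: 0, w: 12, h: 8 },
  );

// ship the rendered JSON as a ConfigMap; Grafana provisioning
// then loads it from the filesystem on startup
{
  apiVersion: "v1",
  kind: "ConfigMap",
  metadata: { name: "dashboard-api" },
  data: { "api.json": std.manifestJson(dashboard) },
}
```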

---

Caveats:

- Edit dashboard in Grafana won't work anymore

- You need to mess with files - BUT this might change in the future; Jsonnet might become integrated into Grafana. Stay tuned :D


Hm, awesome! That looks like it's almost exactly what I want. The one caveat is that this looks system wide, and I'm at a decently sized company. It would be easy for me to have admin access to a dashboard my team owns. But we don't own our grafana implementation, so I'm not sure how easy it would be to make major changes to our grafana setup.


This is an interesting project! But I wonder how it handles custom kubernetes extensions.


Exactly like YAML does.

With Tanka you would usually use the functions provided by `k.libsonnet` to generate your manifests.

For CRDs, there are no helpers available, so you either write the plain manifest as a Jsonnet object (JSON syntax), write it as YAML and `import` it (which gives you an object as well), or, even better, write some helper functions yourself, publish them as a library on GitHub, and make future users happy :D
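A minimal sketch of that, using a cert-manager CRD purely as an example (the apiVersion and field names are illustrative):

```jsonnet
// plain manifest as a Jsonnet object, wrapped in a small helper
local certificate(name, domain) = {
  apiVersion: "cert-manager.io/v1alpha2",
  kind: "Certificate",
  metadata: { name: name },
  spec: {
    secretName: name + "-tls",
    dnsNames: [domain],
  },
};

{
  cert: certificate("website", "example.com"),
}
```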


Which extensions are you talking about? Do you mean CRDs? Then it takes plain JSON, or you can write a function to generate those.


Yes, that is what I am talking about. I guess just providing a new object type for your custom extension should work fine.


Am I missing something here, or is there no way to delete manifests deployed with tk apply?

Also, what about state changes? I.e. calculating the diff between your local definition and cluster state and acting appropriately (delete, apply, change).


Delete: No, we need to add that command. For now, use `tk show --dangerous-allow-redirect | kubectl delete -f -`.

Diff: Use `tk diff`. It shows the differences between the local Jsonnet and the cluster. `tk apply` makes them reality afterwards.


Great. Thanks for the response :)


You are welcome! Feel free to reach out via Slack if you have more questions: https://grafana.slack.com on the #tanka channel


I'm not a big fan of helm, but using JSON syntax instead of YAML sounds like shooting yourself in the foot. JSON, as far as I'm aware, was never meant to be human-readable or human-writable.


Hi, you won't have much contact with JSON at all.

While Jsonnet is technically a superset, all you will see during use is most probably function calls and imports.

You won't have to actually write the Kubernetes objects anymore, they are generated using helper functions, just like real programming languages would do.


I would argue the same about YAML and its whitespace being meaningful. Oh, you forgot a single space? Your markup is screwed. XML is much better...


I spend more time writing and reading files than figuring out syntax errors. Also yaml has comments.


Fair enough.


cue[0] might be another possible language for this problem

[0]: https://cuelang.org/


When I learned helm, I instantly thought "wait, this is it? It just templates yaml files, how does this even have a name?"


Getting a HSTS cert error on the site they link in the blog post - tanka.dev :(


Oh, too bad :(

The site is hosted on Netlify and seems to be working for most other people.

Maybe reset your browser cache, check your network or try on your phone using cellular data instead?

If the issue persists, I'll take a closer look :D


Seems to only be an issue on the corp wifi. I'm able to load it over cell network without issue!


Glad to hear that :D



Ksonnet reborn?



