
Tanka: Our way of deploying to Kubernetes - robfig
https://grafana.com/blog/2020/01/09/introducing-tanka-our-way-of-deploying-to-kubernetes/
======
markbnj
In our experience running production workloads on k8s for over three years,
templating (helm) and structured editing approaches both have their place, and
both are valuable. We don't feel the need to replace declarative approaches
with another imperative language, or to use complicated helm charts for
straightforward service deployments.

There are many ways to classify workloads, but one big distinction that we
find valuable is between stable infrastructure components and our own rapidly
deployed services. The former have complicated configuration surfaces but
change relatively rarely, while the latter are usually much simpler (because
microservices) and change daily in many cases.

We find helm works very well for the infrastructure pieces. Yes it's
complicated and easy to get wrong, but so are most other package management
systems. Charts for complicated things can be quite dense and hard to
comprehend. See the stable prometheus chart for an excellent example. But once
the work is done and as long as there is commitment to maintain it the results
can be pretty awesome. We use helm charts to install and upgrade prometheus,
fluentd, elasticsearch, nginx, etcd and a ton of other tools. Yes we've had to
fork some charts because the configuration surface wasn't fully exposed, but
they are a minority.

For our own services charts are overkill. They're hard to read, and crufted up
with control structures and substitution vars. Essentially all of our
microservices are deployments with simple ingress requirements. We currently
use kustomize to process base manifests that live in the service repos and
combine them with environment-specific patches from config repos. Both are
just straight yaml and very easy for backend developers to read and
understand, and different groups (i.e. devops, sre, backend dev) can
contribute patches to the config repos to control how the services are
deployed.
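
For readers unfamiliar with the pattern, a minimal sketch of that base-plus-patch layout (file, service, and directory names are made up for illustration):

```yaml
# base/kustomization.yaml (service repo): plain manifests, no templating
resources:
  - deployment.yaml
  - service.yaml
---
# overlays/prod/kustomization.yaml (config repo): points at the base, adds patches
resources:
  - ../../base
patchesStrategicMerge:
  - replicas.yaml
---
# overlays/prod/replicas.yaml: only the fields that differ per environment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
spec:
  replicas: 5
```

Rendered with `kustomize build overlays/prod`, or applied directly with `kubectl apply -k overlays/prod`.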

Bottom line: if you're going all-in on kubernetes, which you really need to do
to get the most benefit from it, then you're going to need more than one
approach to deploying workloads.

~~~
sandstrom
For anyone curious, this is what the prometheus chart looks like (I think):

[https://github.com/helm/charts/blob/master/stable/prometheus...](https://github.com/helm/charts/blob/master/stable/prometheus/values.yaml)

~~~
markbnj
That's the values file. The actual manifest templates are just as horrifyingly
impressive :).

[https://github.com/helm/charts/tree/master/stable/prometheus...](https://github.com/helm/charts/tree/master/stable/prometheus/templates)

~~~
ithkuil
One of the main problems with helm IMO is that it effectively turns the
values.yaml file into a structure that almost mirrors the manifests themselves,
and the biggest problem with that is hidden in the word "almost": there are many
subtle ways this mapping can be done, including subtle or not so subtle field
naming/spelling differences.
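
A toy illustration of that "almost" (chart and values are made up): the values file invents its own vocabulary, and the template quietly maps it onto the real Kubernetes field names:

```yaml
# values.yaml -- the chart's vocabulary
replicaCount: 3
image:
  repository: nginx
  tag: "1.17"
---
# templates/deployment.yaml (excerpt) -- the Kubernetes vocabulary it maps onto
# spec:
#   replicas: {{ .Values.replicaCount }}   # "replicaCount", not "replicas"
#   containers:
#     - image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
```

Each chart is free to pick different names for the same underlying fields, which is exactly the mapping drift described above.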

~~~
markbnj
Yeah absolutely, the multiple layers of indirection you can end up with are
frustrating for back end engineers trying to understand how a thing got
deployed the way it did. For our stuff just patching the environment in as a
yaml block is way more comprehensible. But when you want to let people
one-line a complete elasticsearch cluster install, helm has that power if
you're willing to commit to it.

~~~
ithkuil
I found my personal sweet spot in kubecfg, which allows me to override the
fields I need (without having to know which ones in advance) and, when needed,
it can do more advanced stuff such as variables, expressions, loops, etc. which
allows one-line complete Elasticsearch-cluster-like scenarios.

------
cbushko
I spent about 6 months using helm and made around 20+ charts for the services.

In the end we got rid of it and replaced it with Terraform. If your
infrastructure is 100% kubernetes then I think helm is great. Our
infrastructure is not. We have databases, dns, buckets, service accounts and
more so we were splitting our setup between terraform and helm. Passing data
between the two tools was going to be a pain. We follow a layered approach of
building up the infrastructure.

1) Networking: DNS

2) Secrets, service accounts, buckets

3) DBs

4) Pre-application config (istio)

5) Services

Semi-related things are together and all of those cloud provider values we
need are saved as secrets. We are on GCP so that means we need things like
service accounts to access GCP resources (buckets, cloudsql) and all of those
variables are available to our services to pick up.

And Terraform has STATE. This is unbelievably valuable when doing continuous
delivery as you can tell what changed on every deploy and deploys are FAST.
One thing that really bugged me about helm was that determining if a deploy
failed was a post-helm step. We were going to have to write monitoring for
service health/uptime on deploy. This is not hard at all but you get it for
free with terraform. If a service failed to start, terraform will throw an
error...

I don't think people know that Terraform has a kubernetes provider. It does
not support all the alpha objects but has decent support for 99% of the things
you need. I wish someone made a provider for istio virtual services and
service entries.

~~~
MuffinFlavored
Why don't more people use Terraform? I think Terraform is amazing. I wanted to
make containerpen.io for docker-compose + dockerfiles + terraform files to
compete with codepen.io

~~~
nojvek
My experience with Terraform was suboptimal. At least with GCS, it would do
weird things and mess up permissions.

I found the documentation and its special syntax a pain. Just give me JSON
with JSON schemas defined somewhere so vscode can give me great code
completion, and I can templatize as necessary.

~~~
MuffinFlavored
> Just give me JSON with JSON schemas defined somewhere so vscode can give me
> great code completion, and I can templatize as necessary

Is there something that does this in the wild?

------
jka
Very interesting. I've been reluctant to adopt Helm for Kubernetes resource
management because of a gut feeling that it's a heavyweight solution for what
seems broadly like a templating problem.

With ksonnet having gone quiet[0] this looks like a promising initiative.

I'd imagine that it'll need something like a package manager (or at least a
curated list of common packages) in order to gain good adoption.

[0] - [https://blogs.vmware.com/cloudnative/2019/02/05/welcoming-he...](https://blogs.vmware.com/cloudnative/2019/02/05/welcoming-heptio-open-source-projects-to-vmware/)

~~~
nouney
Take a look into kustomize

[https://github.com/kubernetes-sigs/kustomize](https://github.com/kubernetes-sigs/kustomize)

~~~
jka
Thanks! I had briefly used kustomize in the past, and it looks really useful
for creating alternative dev/staging/production environments from a common
base.

In the end I decided that I'd collapse all environments down to behave
identically, which is simpler, but does add a few constraints for development
in particular.

Will take another browse through while considering options for upcoming
infrastructure :)

~~~
MetalMatze
The problem for us monitoring / observability people with kustomize is that it
is limited to templating Kubernetes objects. However, we also want to template
a lot of other things, for example Prometheus configuration. Jsonnet can
bridge the gap between the two worlds and in the end generate a ConfigMap
YAML file that embeds another YAML file, such as a Prometheus config.

------
anaphor
What about something like [https://github.com/dhall-lang/dhall-kubernetes](https://github.com/dhall-lang/dhall-kubernetes) ?

~~~
gouthamve
Interesting, this is similar to jsonnet. Does it support deep merging?

For example, in our case we do `default_deployment.libsonnet` to which we add
arbitrary overrides like `default_deployment + dev` or `default_dep +
us_central1_overrides` depending on the cluster. And in these cases we don't
need to pre-specify what can be overridden in the `default_deployment`, which
makes it really powerful (it can also come back to bite you in the arse, though).
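
For anyone unfamiliar with Jsonnet, a minimal sketch of that kind of merge (names made up):

```jsonnet
local default_deployment = {
  spec: {
    replicas: 1,
    template: { spec: { containers: [{ name: 'app', image: 'app:latest' }] } },
  },
};

// Nothing in default_deployment had to declare this field as overridable:
local dev = {
  spec+: { replicas: 3 },  // '+:' deep-merges into spec instead of replacing it
};

default_deployment + dev  // replicas becomes 3, template is left untouched
```

The `+:` field marker is what makes the override patch only the keys it names, rather than replacing the whole `spec` object.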

~~~
mixedCase
As far as I understand Dhall, you have the choice between writing specific
types in order to have a safe, actually decent to _use_ interface (which is
kind of the point of Dhall), or you can just use a key-value map in order to
cut corners while writing.

But honestly, if you're going for the latter, might as well just use Kustomize
and accept the chaos.

------
sytse
This is a great addition to the ecosystem of Kubernetes application management
tools
[https://docs.google.com/spreadsheets/d/1FCgqz1Ci7_VCz_wdh8vB...](https://docs.google.com/spreadsheets/d/1FCgqz1Ci7_VCz_wdh8vBitZ3giBtac_H8SBw4uxnrsE/edit#gid=0)

I really hope we'll get to a dominant standard soon. But this subject is much
more complex than I thought
[https://github.com/kubernetes/community/blob/master/contribu...](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/architecture/declarative-application-management.md)

------
candiddevmike
The shortcomings for Helm are relatively spot on, but I feel like the ship has
sailed for tools that aren't Helm based. The ecosystem partners (and thus, end
users) have rallied around Helm charts as the de facto manifest format, so a
tool that doesn't understand Helm charts will not see a lot of adoption. Are
there any plans for Tanka to support importing existing Charts?

~~~
ivan4th
There's another jsonnet tool called qbec that can do that -
[https://qbec.io](https://qbec.io)

~~~
kvaps
Agree, love qbec so much ;)

~~~
jpvmsplk
Same!

------
peterwwillis
I'd like to request that nobody else make any more damn infrastructure tools
that require writing code in order to use it, or require reading six manuals.
I don't want to spend the rest of my life writing and editing glue and cruft,
or spending two weeks researching and writing elaborate config files _by hand_
just to make some software run.

It's like the infrastructure version of fine woodworking - building dovetails
and screwless joints by hand, using chisels and hand planes and card scrapers
and shit, to build a box. It may be "fun", but it's also needlessly
complicated and time-consuming. Give me the power tools, pocket hole jigs,
torx screws, nail guns, square clamps. Yes, the dovetails will make a more
sturdy box - but do you _need_ a box with dovetails? _Probably not._

~~~
blandflakes
I'm with you in that these tools are incomplete and insufficiently abstract -
what would you like instead? Just a sufficiently robust GUI or CLI surface
area?

~~~
peterwwillis
For example, let's say I want something to "build infrastructure". That's
really way too generic to be useful.

What I _actually_ want is something to create a cluster in AWS to run my app,
or "create me a Fargate ECS cluster using R53, ACM, ALB, ECR, Lambda,
CodeBuild, CloudWatch, SG, VPC, and IAM". If I feed that thing my AWS
account's root credentials, it should first go through a list of default
variables, explaining each to me and why I might want to change them. Then it
should just create all of the above for me, and probably save it all as code.
Next, I create a repository with my app and a Dockerfile. I then run a "hook
up my git repo webhook and deploy my app right now" script, passing it my
github creds. That should create a webhook into CodeBuild where my app is
turned into a container, pushed to ECR, and then deployed into the Fargate
container, as well as creating any CloudWatch alerts needed.

That's what I call a "product solution": an opinionated, complete solution
that does everything I need for me, with a very light user interface and
guidance on how to use it. Probably 95% of the above is two custom Terraform
modules and some glue, and I should only have to answer like three questions
by default.

If Terraform itself shipped with the above complete solution, _that_ would be
what I'm looking for. But I'm not aware of a catalog of solutions like that.
Occasionally random people will publish partial modules on GitHub, but those
usually need more modules and glue to actually work. So I'm basically looking
for "no assembly required" solutions with the _option_ to modify them later.

~~~
blandflakes
Got it. Yeah my primary complaint about most of the components I've interacted
with recently has been too many knobs, exposed through YAML, with either too
few opinions or insane opinions.

------
wikibob
What are folks' thoughts on CUE (cuelang) these days? Anyone using it for serious
configuration yet?

[https://cuelang.org/](https://cuelang.org/)

It’s designed by the BCL/GCL author as a replacement (Jsonnet is apparently a
copy of BCL/GCL)

~~~
MetalMatze
Maintainer of jsonnet-bundler, kube-prometheus and some monitoring mixins
(all based on jsonnet) here:

Currently we're mostly keeping a close eye on CUE, but not really using it as
of right now. However, during the holiday break I've been trying to get into
CUE again and there are some things I need to figure out before being able to
tell how to incorporate or replace some of our jsonnet projects with CUE, if
we really want that.

Some parts of CUE seem like an obvious improvement to what jsonnet currently
offers. So 2020 will be exciting in that regard.

~~~
shorez
On the same track as well! CUE looks exciting and we might very well add it to
Tanka when it proves to be useful.

We chose Jsonnet because it already had an ecosystem and has served us well,
but Tanka is open to other languages too

------
mbushey
Kustomize already won this battle; it's now built into kubectl (-k flag
instead of -f). Kustomize is a joy to use, whereas helm makes me want to smash
things.

------
Jedi72
My 2020 prediction: k8s stacks are the new JS frameworks.

------
a012
I don't know why they had to invent something new; if you're just managing
kubernetes manifests, kustomize is already good for this task and simple to
start with.

~~~
gouthamve
Hey we've been using ksonnet since before kustomize existed and we've just
rewritten it to be more flexible with much more focus.

Further we use jsonnet a lot (including generating dashboards) and in general
found it much more powerful and useful compared to plain YAML. From a
cursory glance, I can't tell if kustomize supports jsonnet.

~~~
a012
I didn’t know Jsonnet is a thing, maybe I’m too familiar with YAML, and JSON
isn’t my cup of tea. Glad it works for you.

~~~
sciurus
Despite its name, Jsonnet can generate JSON, YAML, INI, and other formats.
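
For example, the `std.manifest*` functions in Jsonnet's standard library render a value into other formats as strings (handy together with `jsonnet -m` multi-file output; the config below is made up):

```jsonnet
local config = { server: { host: 'localhost', port: 8080 } };

{
  'config.json': config,                                // default JSON output
  'config.yaml': std.manifestYamlDoc(config),           // rendered as YAML
  'config.ini': std.manifestIni({ sections: config }),  // rendered as INI
}
```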

------
SirensOfTitan
I evaluated a lot of these templating solutions about a year ago. We ended up
going with jsonnet and kubecfg as the latter was pretty simple.

Helm felt okay for PnP, but I want to have an explicit understanding of what
I’m deploying for infra, and it seemed to abstract too much away.

Kustomize seemed too rigid.

Ksonnet seemed too magical, although I didn’t look deeply.

I still don’t love using jsonnet, as I can’t seem to find full language
documentation even on the website for it.

How might this compare to kubecfg to those who might be familiar?

~~~
shorez
Regarding Jsonnet:

1\. Documentation is bad. We know that and are working on improving it. Some
resources that might help:

\- [https://tanka.dev/jsonnet/overview](https://tanka.dev/jsonnet/overview):
Our own docs include some notes about Jsonnet in general for newcomers

\-
[https://jsonnet.org/learning/tutorial.html](https://jsonnet.org/learning/tutorial.html):
Taking the time to read this entire page opens eyes. Annoying and time
consuming, I know, but worth it.

Regarding Ksonnet and Kubecfg:

1\. Ksonnet was magical. Tanka hopes to be Ksonnet without the magic. We got
rid of all of those concepts, parameter merging and whatnot. You have Jsonnet
and Tanka, a tool that pushes Jsonnet to Kubernetes. That's it. (ok, you also
get a lot of handy features like CLI completion, diff and other things to make
your dev experience better)

2\. Kubecfg is similar, but has a smaller scope. It evaluates Jsonnet and
pipes it to kubectl (basically). At the time we started Tanka, kubecfg was,
by the way, part of the deprecated Ksonnet project, so we assumed it dead as
well. Luckily it's not, as it is a very cool project that inspired Tanka a
lot.

After all, Tanka aims to be like the `go` command: the one command you need to
manage your entire complex Kubernetes clusters. Also, Tanka is not strictly
limited to a single language. For now we focus on Jsonnet, but more may come
in the future.

------
eloycoto
This makes me super happy; jsonnet is something that should be used by default
in Kubernetes.

Hope this project gets a lot of traction!

------
aboyne42
This looks pretty similar to qbec
[https://github.com/splunk/qbec](https://github.com/splunk/qbec)

------
netingle
Tanka & jsonnet-bundler also work really well with Prometheus monitoring
mixins, meaning we bundle up and share almost all the internal monitoring that
we use at Grafana Labs to monitor our massive Cortex, Loki, Metrictank and
Kubernetes deploys.

------
vicjicama
Thanks for sharing, I will try it and give you feedback.

What I am doing for my env clusters is to have a versioned production yaml
that acts as a source of truth; then if I need an env (regions, customer, dev,
prod, feature, etc.) I take that source of truth, apply a transformation
(usually a node script or bash, depending on the kubernetes entity) and then
apply the resulting transformed yaml. Basically it is: versioned production =>
transform => new env definitions.

Do you have any recommendations/high-level thoughts on how to integrate or
substitute Tanka in this approach? What are the downsides that you see with
this approach?

Thanks.

~~~
shorez
Downsides of a bash-based approach: you need to maintain it. And bash is hard
to debug, especially when the house is on fire (production outage, etc.)

Integrating should be quite straightforward. Install Tanka, create a new
project (tk init), copy your source of truth YAML (without transformations)
somewhere under lib/ (for example lib/foo).

Then go to lib/foo/foo, and import each of those yaml files:

    
    
       {
         foo: {
           deployment: import "./deployment.yaml",
           service: import "./service.yaml",
         }
       }
    

In environments/default/main.jsonnet:

    
    
      (import "foo/foo.jsonnet") + {
        // patch environment specific things here
        // https://tanka.dev/tutorial/environments#patching
      }
    

Then use `tk show` to verify it works.

Furthermore, follow the tutorial to get an in depth understanding of Tanka:
[https://tanka.dev/tutorial/overview](https://tanka.dev/tutorial/overview)

~~~
vicjicama
Thank you. I saw the prom-grafana example in the repository; I think that
example focuses on the standalone functionality of Tanka.

Do you know if there is an example or open source cluster using Tanka that I
can try on minikube or a test cluster?

It would be really helpful to see what the workflow is to move from feature to
feature, what the envs look like when there is a bug, how you replace the
volume info from the cloud provider with minikube, and all the considerations
of the patched envs.

------
ecliptik
We're in the process of evaluating tools to get away from 90% identical yaml
files across environments and this seems like a good alternative to kustomize
or helm.

Do you have a good pattern on how to use it with CI/CD for deployments? The
biggest challenge we've had after writing deployments is getting it setup to
work with something like Jenkins (right now we have a custom bash script that
does a bunch of kubectl things).

(PS any way this would help with static IPs on hosted Grafana.com Cloud to
make access to firewalled datasources easier?)

~~~
transect
For CI (testing) we use tekton [1] to run tests and pass around artifacts
inside k8s. You can kick off and monitor tekton builds with their CLI tool,
but we ended up building our own create/monitor/download-artifacts tool for it
in ruby. We use an off the shelf CI server to kick that tool off, and it dumps
back results the CI server can understand.

One of the things we need to do is elastically scale the number of tasks
(basically pods in tekton) that comprise our test suite run. This might be
based on cluster utilization or whether it's the master branch. Since we have
a single threaded test suite, we hoist parallelism up to the k8s level by
breaking apart the tests into partitions each run by its own pod. For this we
just render processed and parameterized erb to yaml. Eventually we'll dispense
with yaml altogether and programmatically construct resources using a k8s REST
api client.

We haven't moved into CD with any of this tooling yet.

[1]
[https://github.com/tektoncd/pipeline](https://github.com/tektoncd/pipeline)

------
NicoJuicy
This won't be a popular opinion/implementation.

I'm on dot net. And although I can deploy as microservices ( clean
architecture with core, application, Infrastructure and api).

I seem to integrate the api into my app ( Eg. Add the api dll). So my app does
the provisioning like a monolith.

It exposes all api controllers by default.

Messaging is internal always then ( domain vs Integration).

Overhead is practically none.

If I have a heavy component/api, I can split up an API and put nginx in front
of it for routing and nats for Integration Events

So, basically I have a DDD app at the beginning with the strangler pattern
already in place for scaling purposes. Although none of my apps need scaling
right now.

I also can do every deployment myself and more easily. Since I don't have a
deployment complexity currently.

\--

What I don't have, is that my stack is language agnostic at the beginning. But
it could be using the same method as scaling, with nginx.

It seems that I have the best of both worlds at the beginning.

\- maintainability by forcing DDD

\- minimal devops

\- testability

\- no service mesh overhead ( eg. Consul brings a 30-50ms average overhead; I
finish most of my requests in 8-12ms)

\- fast development ( slower than a monolith, much faster than microservices)

While scaling could be refactored within a day if an insane amount of
requests comes in ( see: refactoring)

Most microservices are fixed within a single language though. So that's not a
concern currently.

The added benefit is that I have insane custom implementation options.

I just need to change the Infrastructure in a deployment to use a client's
database as a source if a component needs it.

( Eg. An order service for a webshop. I can easily integrate with a client's
existing magento for a niche of their shop)

TLDR: I currently don't have a devops overhead. I'm too small for that, I'm
glad though.

\--

if anyone thinks that isn't a good solution for my use-case ( small dev shop)
or have any better ideas. Please share ;)

------
exabrial
Anyone remember the great rebellion of 2011 against XML? Strangely
reminiscent.

~~~
pjmlp
Yes, and as an imperial user it is ironic to see the rebels rebuilding the XML
infrastructure in their formats.

Apparently the imperial features are there for good reasons.

------
xref
I’m a little scared to migrate some of our microservices off VMs and onto k8s
(because our deployment story isn’t great). However there doesn’t seem to be a
lot of consensus around how to do _anything_, even with a green field.

A dozen microservices, several diff DBs, and some large stateful datasets, all
supporting a basic Rest API in the end. What tools would you choose nowadays?

------
nojvek
We use jsonnet at my workplace for all sorts of generated configs, not just
k8s configs. I cannot recommend jsonnet enough: a simple and powerful tool.

Jsonnet is a godsend. Don’t use a string templating language for structured
data like yaml/json. Use an object templating language like jsonnet. You’ll
start to love life again.

We had used mustache templates before and it was a PITA.

------
netfl0
This article seems to have a major factual error. YAML does support
“Repetition” with anchors. Am I missing something?

~~~
shorez
Anchors are fairly limited:

\- they are bound to a single file. This won’t help you when trying to
maintain multiple similar sets of config

\- anchors do not support patching. If you need to change a nested key, you
can’t do so without it affecting all other nested keys as well
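
Concretely (a toy example): the YAML merge key (`<<:`) only merges one level deep, so patching a single nested key forces you to restate its whole parent block:

```yaml
base: &base
  image: app:latest
  resources:
    requests: { cpu: 100m, memory: 64Mi }

dev:
  <<: *base
  resources:        # replaces the entire block from the anchor --
    requests:
      cpu: 500m     # -- so 'memory: 64Mi' is silently lost here
```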

~~~
emptysea
Anchors also don't work with a list of elements.

For instance, if we want to share steps in a YAML based CI config we can't if
they are a list.

------
idan
Naive question from someone who doesn't know the ecosystem well:

It seems to me like Terraform is good at describing desired deployment shapes
and detecting drift between actual state and desired state.

Can someone clue me into why Terraform hasn't caught on as the abstraction
above/that drives K8S?

~~~
nyxxie
Terraform is really good at describing how infrastructure should be
provisioned (VMs, load balancers, dns entries, networking, etc). Provisioning
software on a VM and keeping it in a consistent state, however, is not
something it's very good at. Userdata is very difficult to do anything complex
with (limited size payloads, optimized for uploading a single shell script),
and the provisioner system is explicitly described as a "last resort". This
makes Terraform not so good at describing how _software_ should be
provisioned.

There is a bit of a movement, however, behind using it to deploy software by
pairing it with Packer. You use Packer to create an e.g. AMI whose sole job it
is to run your software (like a Docker container) then use Terraform to launch
a bunch of EC2 instances that have juuuust enough resources to effectively run
your software. That'd allow you to eliminate k8s from your stack, though it
remains to be seen which stack would be more cost-efficient to run on.

~~~
cbushko
I do not understand what you mean by "how your software should be
provisioned"?

I have about 40 kubernetes services all as modules using the kubernetes
terraform provider. I think I have 1000+ pods running on our one cluster all
deployed through terraform.

It works very well because I can chain infrastructure resources into my
service deployments. For example, I can create a dynamically named bucket and
pass the name of that bucket as configmap/secret into my service to use.

------
doru1980
> 1\. Repetition: If information is required in multiple places (dev and prod
> environments, etc.), all YAML has to offer is copying and pasting those
> lines.

Actually, YAML has anchors and aliases, which help a lot when the same thing
needs to be reused in several places.

~~~
shorez
But only as long as it stays in a single file :(. And it does not support
arrays either

------
jupp0r
I’m always wondering if templating is a good approach for solving this problem
vs writing a program that generates the api object descriptions for you.

~~~
shorez
Well, Helm did templating.

Tanka does generating, because it seems to be more robust: the tool
understands the output (instead of string-substituting a fragile syntax)

See the docs for more details on generating:
[https://tanka.dev/tutorial/k-lib](https://tanka.dev/tutorial/k-lib)

------
rudolph9
A pretty simple alternative is [https://cuelang.org/](https://cuelang.org/)

------
clvx
Lua in helm3 seems interesting, but it's not prime time yet. That makes me
explore other options because helm's limitation in reusable templates is
painful. Jsonnet seems to be working for several companies as well as
kustomize. I'm still looking for something simple to template my manifests for
different environments.

------
chris11
This seems interesting, but would have liked to see dashboard and chart
configs cleaned up. Grafana's json configs have the same issue. I have a
dashboard for one project. The json is over 13k lines long. Less than 5% of
that is unique.

~~~
shorez
You could use Tanka and Jsonnet to improve this for Grafana dashboards:
[https://github.com/grafana/grafonnet-lib](https://github.com/grafana/grafonnet-lib).

Would that work for you?

~~~
chris11
Thanks a lot, I'll check that out. I did notice that there were some third
party libraries that added functionality that I wanted, but it is nice to see
a library from Grafana.

I would really like first class support for templating on Grafana though. My
desired workflow would be to use github for the templates source of truth.
Then a dsl on Grafana's end would regenerate the actual config whenever the
template was changed. The web editor could provide a fallback method of making
changes if it could commit to source control. For instance if I saw a cool
demo for graph functionality online I could implement it in the UI and then
dig into the autogenerated commit in git. That userflow usually works really
well with TeamCity and their Kotlin DSL.

Edit: Teamcity gets around the style issues of autogenerated code by
submitting changes from the UI as patches. First the kotlin dsl is run on the
base config, then patches are applied. That way you are able to rewrite an
autogenerated project config in the style and approach you want, and any UI
generated changes will be limited to one folder.

~~~
shorez
So, let me share what we do internally at Grafana Labs:

 _General:_

1\. We run everything on Kubernetes

2\. We configure it using Tanka, keep all Jsonnet in git

3\. Changes are done using PullRequest

\---

 _Grafana (the software) related:_

1\. We use provisioning:
[https://grafana.com/docs/grafana/latest/administration/provi...](https://grafana.com/docs/grafana/latest/administration/provisioning/).
This means dashboards are kept in .json files on the filesystem and loaded on
startup

2\. Those dashboard json files are ConfigMaps

3\. Those ConfigMaps are created using Tanka

4\. The content of those ConfigMaps is created using grafonnet-lib

This means, our dashboards are source-controlled! A change to a dashboard is
reviewed, merged and automatically deployed (ConfigMap is changed, Grafana
restarted, picks that up and done!)
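
In Jsonnet terms, steps 2-4 boil down to something like this (a sketch with assumed names, not our actual config):

```jsonnet
local grafana = import 'grafonnet/grafana.libsonnet';

{
  dashboards_config_map: {
    apiVersion: 'v1',
    kind: 'ConfigMap',
    metadata: { name: 'grafana-dashboards' },
    data: {
      // grafonnet builds the dashboard object; manifest it as the JSON
      // file that Grafana's file-based provisioning expects
      'my-service.json': std.manifestJson(
        grafana.dashboard.new('My Service')
      ),
    },
  },
}
```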

\---

 _Caveats:_

\- Edit dashboard in Grafana won't work anymore

\- You need to mess with files - BUT this might change in the future; Jsonnet
might become integrated into Grafana. Stay tuned :D

~~~
chris11
Hm, awesome! That looks like it's almost exactly what I want. The one caveat
is that this looks system wide, and I'm at a decently sized company. It would
be easy for me to have admin access to a dashboard my team owns. But we don't
own our grafana implementation, so I'm not sure how easy it would be to make
major changes to our grafana setup.

------
csunbird
This is an interesting project! But I wonder how it handles custom kubernetes
extensions.

~~~
gouthamve
Which extensions are you talking about? Do you mean CRDs? Then it takes plain
JSON, or you can write a function to generate those.

~~~
csunbird
Yes, that is what I am talking about. I guess just providing a new object type
for your custom extension should work fine.

------
denvrede
Am I missing something here, or is there no way to delete manifests deployed
with tk apply?

Also, what about state changes? I.e. calculate the diff between your local
definition and cluster state and act appropriately (delete, apply, change).

~~~
shorez
Delete: No, we need to add that command. For now, use
`tk show --dangerous-allow-redirect | kubectl delete -f -`.

Diff: Use `tk diff`. It shows the differences between the local Jsonnet and
the cluster. `tk apply` makes them reality afterwards.

~~~
denvrede
Great. Thanks for the response :)

~~~
shorez
You are welcome! Feel free to reach out via Slack if you have more questions:
[https://grafana.slack.com](https://grafana.slack.com) on the #tanka channel

------
loriverkutya
I'm not a big fan of helm, but using JSON syntax instead of YAML sounds like
shooting yourself in the leg. JSON, as far as I'm aware, was never meant to be
human readable or human writable.

~~~
arpa
I would argue the same about yaml and whitespace being meaningful. Oh, you
forgot a single space? Your markup is screwed. XML is much better...

~~~
slig
I spend more time writing and reading files than figuring out syntax errors.
Also yaml has comments.

~~~
arpa
Fair enough.

------
adieu
cue[0] might be another possible language for this problem

[0]: [https://cuelang.org/](https://cuelang.org/)

------
mml
when I learned helm, I instantly thought "wait, this is it? it just templates
yaml files, how does this even have a name?"

------
dcchambers
Getting a HSTS cert error on the site they link in the blog post - tanka.dev
:(

~~~
shorez
Oh, too bad :(

The site is hosted on Netlify and seems to be working for most other people.

Maybe reset your browser cache, check your network or try on your phone using
cellular data instead?

If the issue persists, I'll take a closer look :D

~~~
dcchambers
Seems to only be an issue on the corp wifi. I'm able to load it over cell
network without issue!

~~~
shorez
Glad to hear that :D

------
xmly
Ksonnet reborn?

