
Kubernetes YAML Generator - paulstovell
https://k8syaml.com/
======
tn890
I wonder what the need for tools such as this or other "Kubernetes-by-example"
type pages tells us about the complexity of configuring Kubernetes resources.

Do we need a better layer of abstraction, i.e. better adoption and tighter
integration for something like kustomize? Have we fucked up completely with
Kubernetes due to it being outrageously complicated for simple tasks? How
would we redesign this to be simpler? Is the complexity even a problem for the
target audience?

I've no idea. I just know I'm a kubernetes admin and I can't write a
deployment yaml without googling or copy/pasting.

~~~
kevinmgranger
I think of Kubernetes as "the new Operating System", and these complex
resources as fiddling with initscripts, fstab, /etc/interfaces, and so on.
Writing an operator is like writing your own initscript.

I wouldn't be surprised if we eventually see new abstractions for "you just
want a plain ol' deployment with CI/CD du jour, a persistent volume claim, and
a service ingress, just like 90% of all other CRUD webapps? Sure, here's a
simple resource for that."

I think we'll start seeing a move towards more "opinionated" tools, just to
outsource some of the decision making. No sense learning how to write your own
pipelines if you can find a tool that says "we're gonna deploy every time you
make a git tag and run `mvn package`, you figure the rest out".

~~~
oauea
Helm has been doing this since the early days. 99% of our charts are created
using:

* `helm create` to get the default scaffold

* modify a handful of entries in the generated values file

* done!

The only thing is that the default Helm chart starter doesn't support
autoconfiguring volumes, and since we're porting a lot of stateful apps to
Kubernetes we just modified the default starter to include that capability.

Of course it would be nice to not have to maintain a bunch of different
virtually identical templates.

Pulumi looks interesting but unfortunately seems to insist on vendor lock-in
(see the jerk-around on
[https://github.com/pulumi/pulumi/pull/2697](https://github.com/pulumi/pulumi/pull/2697)).
So I'm looking forward to the AWS CDK
([https://aws.amazon.com/cdk/](https://aws.amazon.com/cdk/)) maturing a bit.

~~~
wronglebowski
As someone attempting to port a docker-compose application to a Helm chart I'd
love to hear more about the resources you used. Even with a very simple
application, I've found the process to be more difficult than I anticipated.

~~~
oauea
Kubernetes is far more complex than docker compose. But it also lets you do
much, much more.

Some potentially useful tips:

* One docker container usually maps to one pod, which is created by a deployment. You can put multiple containers in a single pod, but this couples them together tightly.

* Use services to assign hostnames to pods

* Have pods communicate to each other using the service names. This works the same as putting them in the same docker network.

* Volumes depend on your cloud provider or if you're running on bare metal. In the cloud it's easy, you just request one and it gets created on demand and backed by a cloud disk.

* If you're using volumes, you probably want to use the Recreate update strategy (`.spec.strategy.type: Recreate`) for your deployment. This ensures the old pod is shut down before the new one is created, which is necessary to work with block volumes.
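Putting a few of these tips together, a minimal Deployment plus Service pair might look like this (names, image, and PVC name are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 1
  strategy:
    type: Recreate          # old pod shuts down before the new one starts
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25
          volumeMounts:
            - name: data
              mountPath: /var/lib/data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: web-data   # assumes this PVC exists
---
apiVersion: v1
kind: Service
metadata:
  name: web                 # other pods can now reach this as http://web
spec:
  selector:
    app: web
  ports:
    - port: 80
```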

When using Helm, start with `helm create CHARTNAME` and take a look at what it
generated for you. You'll get some heavily templated YAML that, if you're
lucky, you'll barely have to touch.

But it's best to go through these tutorials and learn how to use the basic
building blocks of kubernetes directly:
[https://kubernetes.io/docs/tutorials/kubernetes-
basics/](https://kubernetes.io/docs/tutorials/kubernetes-basics/)

Once you're familiar with what a Deployment, Service, Ingress and
PersistentVolumeClaim are you can use helm to template this for you where
necessary.

------
gravypod
Super cool to see these sorts of tools. They're great for learning the kube
API.

For production-y things, however, some meta-config language that allows
deterministic templating would be a huge improvement. It lets you make
sweeping/uniform infrastructure changes from a single library or tool change.

Kubecfg is a good example of the basics one could implement [0], although its
examples aren't as fully fledged and organized as they could be.

[0] -
[https://github.com/bitnami/kubecfg/blob/master/examples/gues...](https://github.com/bitnami/kubecfg/blob/master/examples/guestbook.jsonnet)
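The idea can be sketched in a few lines of Jsonnet (resource names and images here are placeholders, not taken from the kubecfg examples):

```jsonnet
// One function, many manifests: change it once and every service picks it up.
local deployment(name, image, replicas=2) = {
  apiVersion: 'apps/v1',
  kind: 'Deployment',
  metadata: { name: name },
  spec: {
    replicas: replicas,
    selector: { matchLabels: { app: name } },
    template: {
      metadata: { labels: { app: name } },
      spec: { containers: [{ name: name, image: image }] },
    },
  },
};

{
  web: deployment('web', 'nginx:1.25'),
  worker: deployment('worker', 'worker:1.0', replicas=1),
}
```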

~~~
baq
heard good things about [https://dhall-lang.org/#](https://dhall-lang.org/#)

~~~
zingplex
I've used Dhall in production, pushed it fairly hard, and can say with utmost
certainty it's been an absolute pleasure. We use it as our application
configuration format and derive fairly intricate Kubernetes resources from our
app config.

Although it's still early days, it's excellent to use and will only get better
with additional tooling.

~~~
k__
How does it differ from the CDK/CDK8s/terraform-cdk?

~~~
zingplex
CDK uses existing fully featured programming languages that can perform side-
effecting actions, thus lacking reproducibility, and that may not terminate.
Dhall is a total language, meaning it will always terminate, and a pure
language, meaning that a function given the same input will always yield the
same output. That makes the output extremely predictable.
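A tiny Dhall sketch of that property (the names and fields are illustrative; every evaluation of this expression yields exactly the same records):

```dhall
let mkDeployment =
      \(name : Text) ->
      \(replicas : Natural) ->
        { apiVersion = "apps/v1"
        , kind = "Deployment"
        , metadata = { name = name }
        , spec = { replicas = replicas }
        }

in  [ mkDeployment "web" 3, mkDeployment "worker" 1 ]
```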

------
1ba9115454
We're using Pulumi [https://www.pulumi.com/](https://www.pulumi.com/) to do
our K8s configuration.

We can use TypeScript interfaces (which give us nice IDE code completion) to
define our YAML.

We can then create functions where we would normally duplicate YAML. Really
nice. [https://www.pulumi.com/kubernetes/](https://www.pulumi.com/kubernetes/)
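A minimal sketch of the pattern in plain TypeScript (no Pulumi dependency; `AppSpec` and `makeDeployment` are illustrative names, not Pulumi API):

```typescript
// A typed shape for the few fields that actually vary between services.
interface AppSpec {
  name: string;
  image: string;
  replicas?: number;
}

// One function replaces N nearly-identical Deployment YAML files.
function makeDeployment(app: AppSpec) {
  return {
    apiVersion: "apps/v1",
    kind: "Deployment",
    metadata: { name: app.name, labels: { app: app.name } },
    spec: {
      replicas: app.replicas ?? 2,
      selector: { matchLabels: { app: app.name } },
      template: {
        metadata: { labels: { app: app.name } },
        spec: { containers: [{ name: app.name, image: app.image }] },
      },
    },
  };
}

const web = makeDeployment({ name: "web", image: "nginx:1.25" });
```

With Pulumi, the same object shape is passed to its Kubernetes resource constructors instead of being serialized by hand.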

~~~
FridgeSeal
Are you not worried about people writing arbitrary code to do stuff? I've been
burned before when devs used Turing-complete languages (Python in my case) to
generate configs in probably the most convoluted and complicated manner
possible. It was impossible to debug and understand, there were side-effects
literally everywhere. It was everything you'd imagine from a normal bit of bad
code, but it also happened to spin up hardware.

~~~
shimmerjs
As long as the code is generating something like configs, you can write guard
rail sanity check tests against the output, or apply linters, etc.
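For instance, a guard-rail check over the generated manifests might look like this (a sketch; `containersHaveLimits` is a hypothetical helper operating on plain manifest objects):

```typescript
// Sanity check: every container in a generated Deployment must set
// resource limits, regardless of how convoluted the generator code is.
function containersHaveLimits(manifest: any): boolean {
  const containers = manifest?.spec?.template?.spec?.containers ?? [];
  return (
    containers.length > 0 &&
    containers.every((c: any) => c.resources?.limits !== undefined)
  );
}

// A generated manifest missing limits would fail the check.
const bad = {
  spec: { template: { spec: { containers: [{ name: "web", image: "nginx" }] } } },
};
```

The point is that the check runs on the *output*, so it doesn't matter how the config was produced.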

~~~
FridgeSeal
But now you’re writing code to generate config and code to verify your
configs.

------
paulstovell
We built this internally to test that the UI we built for Kubernetes
deployments was producing the expected YAML. Thought it would be useful to
share.

~~~
WalterSobchak
Thanks for sharing this useful resource!

Paul, as an aside, we absolutely love how feature-packed Octopus is nowadays.
We have been using it since 2.0, and I don't think we will ever give it up.
Thanks for making one of the best tools we use on a daily basis!

~~~
paulstovell
Thanks for the kind words!

------
1f60c
I don’t know how to feel about Kubernetes configuration apparently being so
complicated that you need a generator for it, instead of just having the docs
and your IDE open in split screen like with Docker Compose.

That said, this still looks cool. I just hope we won’t need a Kubernetes
configuration generator generator anytime soon.

~~~
someguy101010
It's not complicated, which is what makes it easy to generate. There's no
reason not to make things easier to work with if you can.

------
x87678r
It would be nice to have a linter or something so you know you're following
best practice, e.g. a check that tells you when your YAML is more complex than
it needs to be.

~~~
programd
There are a number of such tools out there. Here's a short list. I'd be
interested in any experiences people have had with them in largish production
environments.

[https://www.kubeval.com/](https://www.kubeval.com/)

[https://github.com/zegl/kube-score](https://github.com/zegl/kube-score)

[https://stelligent.github.io/config-
lint/#/](https://stelligent.github.io/config-lint/#/)

[https://github.com/cloud66-oss/copper](https://github.com/cloud66-oss/copper)

[https://www.conftest.dev/](https://www.conftest.dev/)

[https://github.com/FairwindsOps/polaris#cli](https://github.com/FairwindsOps/polaris#cli)

~~~
zegl
Hey, I'm the author of kube-score, and originally built the tool to support an
organization using Kubernetes at a fairly large scale as measured in number of
engineers, services, and Kubernetes clusters.

I'm obviously biased, but it's been hugely successful! kube-score is working
very well out of the box, and there's only a handful of cases where the
"ignore" annotation has been used to disable a check that's too strict for the
particular use case.

Feel free to reach out if you have any questions or comments.

------
elcritch
That looks super handy! It seems like it'd be possible to "port" it to VSCode.
Some other comments mention autocomplete in VSCode, but it'd be nifty to run
something like this directly in the VSCode UI.

------
Hamish156
We built a similar thing and use it in our graphical k8s designer:
[https://k8od.io/](https://k8od.io/).

------
Niksko
Ooooh nice! When I saw the title I was hoping this is what it would be. I'd
dreamed of building something like this, but never had the time or the buy-in.

Aside from making it easy to generate k8s manifests, this could also be a
great learning tool. If you allowed this to generate multiple resources that
are linked, it could be a great illustration of how different resources fit
together.

------
burgerquizz
This is awesome. It is so easy to make mistakes when configuring your k8s
services.

------
laurensr
It would be nice if the tool could "import" an existing YAML and show the
resulting configuration in the GUI.

------
emaildanwilson
Awesome work for the deployment resource! Now do this for all the other native
k8s types and popular custom resources. :)

------
Schatel
Awesome! Was waiting for someone to build it :-)

------
rglover
This is great, thank you!

------
Ciantic
Looks very useful. In my opinion, the configuration format should have been
built like this from the get-go:

a clear schema (like TypeScript interfaces or something similar) which allows
generating a UI.

~~~
q3k
All of the standard Kubernetes resource definitions have a schema [1] [2]
which is already used to, for instance, generate Go code [3] and documentation
[4].

[1] - [https://kubernetes.io/docs/concepts/overview/kubernetes-
api/...](https://kubernetes.io/docs/concepts/overview/kubernetes-api/#api-
specification)

[2] -
[https://github.com/kubernetes/kubernetes/blob/release-1.19/a...](https://github.com/kubernetes/kubernetes/blob/release-1.19/api/openapi-
spec/swagger.json)

[3] - [https://godoc.org/k8s.io/client-
go/kubernetes/typed/core/v1](https://godoc.org/k8s.io/client-
go/kubernetes/typed/core/v1)

[4] - [https://kubernetes.io/docs/reference/generated/kubernetes-
ap...](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.19/)

~~~
Ciantic
Of course there's a schema at some level, but it feels like it's
underutilized. This UI is a very good use of the schema. Another would be: if
I open up VSCode, there should be a way to autocomplete, or get error
squiggles in real time, when I write something wrong.

~~~
q3k
Right, but that's not so much a Kubernetes issue as a VSCode issue.
Everything's there for you to write something like this for your text editor;
open source code doesn't grow on trees :).

~~~
Ciantic
My point is that if k8s configuration had been developed UI-centrically, this
problem would not exist in the first place. Because there would be an official
k8s GUI for configuration, one would not have to hunt for bits and pieces of
docs every time one edits a config file.

As a side note, schema languages usually fail at some point, which is why I
referred to TypeScript interfaces, which are a very flexible way to write
validation.

There is already schema support for JSON validation in VSCode; one can use it
with `{ "$schema": "https://example.com/url/toschema" }`, but it uses the JSON
Schema format, which I think is not accurate enough in edge cases for UI
generation.

~~~
p_l
If it had been developed UI-centrically, it would have failed much earlier to
do anything useful.

The real power of Kubernetes is that it has a very simple basic model that
allows building complex constructs out of simple pieces, and it even ships
basic help (which is used as a UI of sorts in _kubectl explain_).

All the bits for good UI support are there.

