I wrote some scripting around it so people can claim their own cluster via SSH. I'm planning to write a post about it soon and make the code available.
The thing I like most about Kind: being inside Docker makes Kind very ephemeral. Every time I start it up I get a fresh cluster. I know where everything is and it doesn't contaminate my machine.
Since some of the authors are on the thread, I would like to say thank you. I really appreciated the recent improvements to kubectl integration and the addition of local storage.
In the future I'd like it to be easier to play with pod and network policies, and to see shorter cluster startup times and smaller node images.
Keep up the good work!
In comparison, k3s takes seconds to start a cluster, while Kind takes about a minute. Neither will consume resources to the point where my computer becomes unusable.
I reported my experience to Red Hat and they replied that it was to be expected.
EDIT: Found the issue https://github.com/code-ready/crc/issues/617.
1) I run a training course on container security. We moved from using straight kubeadm on the students' VMs to using kind clusters. The advantage here is that we can customize different clusters for different scenarios by providing a kind config file on start-up (rough sketch below). We can also easily run multiple clusters on a single VM with no interference between them.
2) When evaluating software or trying out a feature, it's really nice to be able to spin up a test cluster in < 2 minutes and try it out; then it's just "kind delete cluster" to get rid of it again.
When I compare it to other options (e.g. minikube, microk8s, etc.) it subjectively feels less "magic" to me, in that it's just one or more Docker containers running kubeadm, so as long as you understand those two things, you can get a picture of what's going on.
I'd love to hear why anyone prefers any other solution for local development/experimentation.
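To make that concrete, here's a rough sketch of the kind of config and commands involved; the file name, cluster name, and node counts are made up for illustration, and the config apiVersion may differ depending on your kind version:

    # hypothetical scenario config: one control-plane node plus two workers
    cat > scenario-a.yaml <<'EOF'
    kind: Cluster
    apiVersion: kind.x-k8s.io/v1alpha4
    nodes:
      - role: control-plane
      - role: worker
      - role: worker
    EOF

    kind create cluster --name scenario-a --config scenario-a.yaml
    # ...run the exercise or evaluation...
    kind delete cluster --name scenario-a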
microk8s is really cool! We wanted kind for development of kubernetes itself and I don't think microk8s was around at the time.
One difference, besides being able to build & run arbitrary Kubernetes versions, is being able to run on macOS, Windows, and Linux instead of only where snap is supported.
We're paying more attention to local development of applications now, expect some major improvements soon :-)
Great news if improvements are coming.
However, I've wasted so much time over the past 2 years trying to figure out why something wasn't working, only to find out it was because of differences between the k8s distro I was using and our production system. Ultimately I found the best solution was deploying exactly what we run in production on some spare bare metal I had lying around (after adding a hundred gigs of RAM).
Luckily we have a production setup that is designed to run on-prem, so this was an option for me. Regardless, I think having as close to production as possible will make your life easier.
That being said, I still might try this project out.
As a sibling comment mentions, there are a number of differences between distributions/implementations - and especially when new to k8s it's way too easy to waste time trying to figure out why something doesn't work.
Also, you can't beat the one-line snap install for Microk8s.
We have been considering having a desktop -> production cluster on k3s.
My kubectl-fu is not strong enough to fix it, so for me that was a dealbreaker.
Though I am super passionate about k3s and support the hell out of everything Rancher Labs does, so by no means did it leave a bad taste in my mouth.
So the dev -> production experience is less than ideal?
We are planning to use k3s for local development and deploy to EKS...so this is interesting.
The main thing is that documentation for microk8s and the design seems aimed at "behave as/pretend to be a real k8s" - including things like ingress.
But I still think microk8s is easier to spin up on a workstation.
I'm new to k8s (I'm on Swarm right now) and looking for ease of dev setup more than anything else.
Seriously though, this is a valid point. Still, I believe you could run it in a VM? I'm not on Win/Mac, and I'm not sure if that would make sense.
So, namespaces are task-level things in the kernel. (Every thread is a task, and by default every process has one thread, so every process is also at least one task.)
https://elixir.bootlin.com/linux/latest/source/include/linux... (That's where the task_struct starts and it has an nsproxy member.)
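A quick way to see the per-task nature from userspace, just as an illustrative sketch (it assumes util-linux's unshare is installed and you can sudo):

    # namespace handles of the current task (this shell)
    ls -l /proc/self/ns/
    # every thread (task) of a process gets its own ns directory
    ls -ld /proc/$$/task/*/ns
    # give a child shell its own UTS and network namespaces;
    # the hostname change is invisible outside it, and only lo shows up
    sudo unshare --uts --net bash -c 'hostname sandbox; hostname; ip link'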
I think the "best" depends on what you're doing, to be honest (e.g. if you only develop on Ubuntu, check out microk8s too! They have some good ideas, e.g. focusing on straightforward support for a local registry instead of side-loading), and there's a _lot_ of room to improve kind, but the vote of confidence is still very nice to see :-)
Linux -> k3s (build a cluster or single node via https://k3sup.dev)
macOS/Windows -> k3d (runs k3s in a Docker container, and is super fast)
That said, if you're hacking on K8s code, then KinD has nice features to support that workflow. KinD is only suitable for dev, while k3s can also run in prod. Try both and compare; they are both easy to use.
I'm developing Kubernetes controllers, and the Custom Resources represent bits of cloud infrastructure ( https://crossplane.io ). So when I lose the kind cluster, I have to go and delete each and every resource in AWS :( I'm unhappily forced to use minikube until support comes to kind.
If this refers to https://github.com/kubernetes-sigs/kind/issues/148, the good news is that we're most of the way there and I'm going back to work on this now, ideally out in a v0.8 in the next week or so.
None of us run on Linux, which means we're all using VMs for our containers, and we all use Docker Desktop for various things. That meant we were running extra local VMs for no good reason. With kind I can just use the one VM for all the container things.
But the real reason for the switch was that I just kept running into things that minikube couldn't do and kind could. There were also things I had decided to ignore, like the fact that minikube does everything on one node, which is completely unnatural for Kubernetes; I had multiple cases where that setup blinded me to problems that would have occurred in a real cluster.
3) I've also found I prefer the configuration/customization approach of kind over minikube though admittedly that's kind of a small thing.
Ultimately I find kind is a better simulator for prototyping future cluster changes, as well as a local "lab" for diagnosing services in a production-like environment that is 100% under your control.
- You can run it in GitHub Actions, so you can test in your CI pipeline (rough sketch after this list).
- You can run any recent version of Kubernetes.
- Kind can start a Kubernetes cluster in under a minute on a developer machine.
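For what it's worth, here's a rough sketch of what that looks like in practice; the node image tag below is only an example, so check the kind release notes for tags matching your kind version:

    # pick the Kubernetes version by choosing a node image
    kind create cluster --image kindest/node:v1.18.2 --wait 120s
    kubectl cluster-info --context kind-kind
    # in CI the same commands run at the start of the job; tear down with:
    kind delete cluster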
kind was originally built for developing kubernetes itself, as a cheaper option for testing changes to the core components.
it wasn't really meant to compete with minikube et al., but to complement them for different usage; you may now find it useful as a lightweight option with a slightly different feature set.
it's also the only local cluster that is fully conformant as far as I know, because conformance tests involve verifying multi-node behavior. At the time, minikube did not support:
- building kubernetes from a checkout and running it
- docker based nodes
- multiple nodes per cluster
These days they've gotten more similar; we're both shipping docker- and podman-based nodes.
I think one of the most interesting things about kind is that the entire Kubernetes distro is packed into a single "node" Docker image, so it's very easy to work with fully offline.
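That's also what keeps the offline and sideloading workflow simple. A rough sketch, where the image names and tags are just placeholders:

    # pre-pull the node image once while online...
    docker pull kindest/node:v1.18.2
    # ...then cluster creation doesn't need to reach a registry
    kind create cluster --image kindest/node:v1.18.2
    # sideload a locally built application image without any registry
    docker build -t my-app:dev .
    kind load docker-image my-app:dev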
You say that "ingress in kind is a little trickier than in the above platforms" with no explanation.
I feel disappointed and frustrated. :(
For me: use Docker Desktop's built-in Kubernetes if you want k8s started up every time you start Docker, and easy ingress. I don't love having a cluster always running, so I'm keeping the k8s feature off by default.
Use kind if you want multi-node clusters, and a production-like simulation of your environment.
Use minikube for a straightforward dev experience, where you have control over k8s version, resource allocation, and don't need meaningful configuration of the control plane.
it's certainly heavier than _not_ using Kubernetes
figuring this out would make a lot of people happy, but it doesn't rank highly for our current use cases versus other work.
I run kind with the minimum Docker for Mac spec, which is one core / 1 GB, and it performs just fine. We've worked hard to make it lightweight, including a KEP upstream for slimming down the binaries.