
Kind - run local Kubernetes clusters using Docker
https://kind.sigs.k8s.io/
======
PascalW
We recently used Kind for a k8s workshop. We provisioned a beefy cloud server
and ran fifteen 3-node Kind clusters on it, so everyone had their own k8s
instance without having to install anything locally. It worked absolutely
great for this purpose.

I wrote some scripting around it so people can claim their own cluster via
SSH. I'm planning to write a post about it soon and make the code available.

~~~
wood4brick
Please do, I'd like to give a talk like that once Covid is over

------
tofflos
I’ve tried minikube, microk8s, the one bundled with Docker Desktop for
Windows, k3s and Red Hat CodeReady. Of these I had the best experience with
Kind (by far) and the worst experience with CodeReady (also by far).

The thing I like most with Kind: Being inside Docker makes Kind very
ephemeral. Every time I start it up I get a fresh cluster. I know where
everything is and it doesn’t contaminate my machine.

Since some of the authors are on the thread I would like say thank you. I
really appreciated the recent improvements to kubectl-integration and the
addition of local storage.

In the future I would like it to be easier to play with pod and network
policies, reduced cluster startup time and reduced node image sizes.

Keep up the good work!

~~~
raesene9
Out of curiosity, as I'm thinking of looking at CodeReady for OpenShift, what
problems did you have there?

~~~
blinkingled
OP is right about CodeReady - I’ll unhesitatingly say it’s a pos. It’s way too
heavyweight for even a high-end laptop. It’s single node only. It falls over
if you enable monitoring unless you can give it 8 cores and 12GB; then it sort
of works but is too slow. The 3 times I tried deploying the provided samples,
they didn’t work out of the box. It also requires you to download a new
release every month, I think - no in-place updates.

------
raesene9
I'm a great fan of kind, it's made my life so much easier for a couple of use
cases.

1) I run a training course on container security. We moved from using straight
kubeadm on the students' VMs to using kind clusters. The advantage here is
that we can customize different clusters for different scenarios by providing
a kind config file on start-up. We can also easily have multiple clusters
running on a single VM with no interference between them.

2) when evaluating software or trying out a feature, it's really nice to be
able to spin up a test cluster in < 2 minutes and try it out, then it's just
"kind delete cluster" to get rid of it again.
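
The per-scenario customization described above can be sketched as a kind
config file. This is just an illustration: the node counts and roles are
arbitrary, and the `apiVersion` shown is the one in the current kind docs
(older kind releases used `v1alpha3`):

```yaml
# kind-config.yaml - a hypothetical 3-node cluster for one scenario
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
```

A cluster built from it can then be started with
`kind create cluster --name scenario1 --config kind-config.yaml` and torn down
with `kind delete cluster --name scenario1`.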

When I compare it to other options (e.g. minikube, microk8s etc.) it
subjectively feels less "magic" to me, in that it's just one or more Docker
containers running kubeadm, so as long as you understand those two things,
you can get a picture of what's going on.

------
e12e
I've recently started prototyping our move to k8s - and my recommendation is
to stay away from minikube, k3s and kind. Kind looks the best on paper, but
Canonical has done great with [https://microk8s.io/](https://microk8s.io/)

I'd love to hear why anyone prefers any other solution for local
development/experimentation.

~~~
BenTheElder
Hi, kind author :-)

microk8s is really cool! We wanted kind for development of kubernetes itself
and I don't think microk8s was around at the time.

One difference besides being able to build & run arbitrary Kubernetes versions
is being able to run on mac, windows, and Linux instead of only where snap is
supported.

We're paying more attention to local development of applications now, expect
some major improvements soon :-)

~~~
e12e
That's great news. In my experience kind was a bit resource heavy - but more
importantly didn't seem to have clear documentation that was geared towards
local testing (for users/consumers of k8s).

Great news if improvements are coming.

------
amolo
So does this mean you can run containers in containers orchestrating other
containers? Containers must really be the holy grail of serverless and cloud
"nativeness".

~~~
dodobirdlord
Seems reasonable to me. Outside of some weird edge cases and some
"technically..."s, a container is just a process with its own namespace and
file system, and maybe its own IP. If we didn't have shared-filesystem,
shared-namespace, shared-port processes for historical reasons, who would be
clamoring to add them? Why wouldn't you run everything in a container,
container scheduler included?

~~~
jjtheblunt
Isn't it more accurate to say, rather than just a process, a process group
with its own process numbering?

~~~
pas
Technically you can define which namespaces to inherit and which ones to
create "from scratch" at process initialization time. (Actually there's an
unshare() syscall that does it, but clone() is the standard way to create new
namespaces and new processes in them, plus there's setns() to put a thread
into some other namespace given a fd pointing to that NS.)

So, namespaces are task level things in the kernel. (Every thread is a task,
and by default every process has one thread, so every process is also at least
one task.)

[https://elixir.bootlin.com/linux/latest/source/include/linux...](https://elixir.bootlin.com/linux/latest/source/include/linux/sched.h#L629)
(That's where the task_struct starts and it has an nsproxy member.)
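
An unprivileged way to see this per-task view on a Linux box is to read the
namespace links under procfs. The `/proc/self/ns` path below is standard
procfs; the rest is just a sketch:

```python
import os

# /proc/self/ns holds one symlink per namespace type (mnt, net, pid, uts, ...).
# Each link target encodes the namespace's inode; two tasks are in the same
# namespace exactly when their links resolve to the same inode.
for name in sorted(os.listdir("/proc/self/ns")):
    target = os.readlink(os.path.join("/proc/self/ns", name))
    print(name, target)
```

Comparing `/proc/<pid>/ns` for a containerized process against the host shows
exactly which namespaces the container runtime unshared.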

------
moondev
kind is incredible. It's the best option for local multi-node clusters and
very fast. No hypervisor needed, only docker.

~~~
BenTheElder
Appreciate the feedback :-)

I think the "best" depends on what you're doing, to be honest (e.g. if you
only develop on Ubuntu, check out microk8s too! They have some good ideas,
e.g. focusing on straightforward support for a local registry instead of
side-loading), and there's a _lot_ of room to improve kind, but the vote of
confidence is still very nice to see :-)

~~~
moondev
It's the best in my book :) KIND + CAPI are truly magical. I appreciate all
your hard work on the project, thank you!

------
alexellisuk
I like KinD, but find k3s much faster to bootstrap and lighter-weight too.
Rancher has gone GA with it and provides commercial support, and Darren
Shepherd also tracks the latest k8s release very closely.

Linux -> k3s (build a cluster or single node via
[https://k3sup.dev](https://k3sup.dev))

MacOS/Windows -> k3d (runs k3s in a Docker container, and is super fast)

That said, if you're hacking on K8s code, then KinD has nice features to
support that workflow. KinD is only suitable for dev, k3s can run in prod
also, try both, compare. They are both easy to use.

~~~
mmgutz
[k3d]([https://github.com/rancher/k3d](https://github.com/rancher/k3d)) runs
k3s in docker. k3d is more lightweight than kind.

------
dr01d
Used kind + skaffold for 6 months and it was pretty solid. However, eventually
switched to k3d and tilt and feeling like this combo is amazing. Cluster takes
2 seconds to create now.

~~~
metzby
Sweet; glad you like it. I'd love to hear more. (Disclaimer: Tilt CEO here)

------
MrBuddyCasino
Kind has been a godsend for me. When you've got a 16GB MBP with both Docker
and K8s running, re-using the Docker virtual machine makes a big difference in
memory and CPU usage. Thanks to the team!

------
monus
I really wanted to use kind but the fact that it loses all the data after
restart/sleep of computer keeps me from using it.

I’m developing Kubernetes controllers and the Custom Resources represent the
bits of cloud infrastructure ( [https://crossplane.io](https://crossplane.io)
). So when I lose the kind cluster, I have to go and delete each and every
resource in AWS :( I am unhappily forced to use minikube until support comes
to kind.

~~~
BenTheElder
loses all the data because you have to start a new cluster? The data should be
persisted...

If this refers to
[https://github.com/kubernetes-sigs/kind/issues/148](https://github.com/kubernetes-sigs/kind/issues/148),
the good news is that we're most of the way there and I'm going back to work
on this now, ideally out in a v0.8 in the next week or so.

------
dickeytk
what are the advantages over minikube?

~~~
richards
FWIW, I did a comparison of k8s in Docker, KinD, and minikube last week ...
[https://seroter.wordpress.com/2020/03/10/lets-look-at-your-options-for-local-development-with-kubernetes/](https://seroter.wordpress.com/2020/03/10/lets-look-at-your-options-for-local-development-with-kubernetes/)

~~~
mleonhard
I really wish you had used a regular service definition when testing KinD. The
omission reduces the usefulness of your comparison. I want to choose a local
k8s cluster that is as close to production as possible. And I want my local
deployment configs to be as close as possible to production.

You say that "ingress in kind is a little trickier than in the above
platforms" with no explanation.

I feel disappointed and frustrated. :(

~~~
richards
Sure. I cheated. I specifically didn't feel like setting up extraPortMappings
([https://github.com/kubernetes-sigs/kind/issues/808](https://github.com/kubernetes-sigs/kind/issues/808))
and then creating an ingress controller
([https://kind.sigs.k8s.io/docs/user/ingress/](https://kind.sigs.k8s.io/docs/user/ingress/)).
Not difficult, just not turnkey like the other two.
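
For reference, the extra step being described looks roughly like this - a
sketch following the kind ingress docs, with illustrative port numbers and the
`apiVersion` from the current docs:

```yaml
# kind config exposing host ports so an ingress controller running inside
# the cluster is reachable from the host
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    protocol: TCP
  - containerPort: 443
    hostPort: 443
    protocol: TCP
```

An ingress controller (e.g. ingress-nginx) then still has to be installed
into the cluster separately, which is the part the other platforms handle out
of the box.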

------
malkia
As a chromebook user with crostini, I'm wondering if this is going to work for
me - as neither k3s, nor minikube, or minishift did (due to limitations in
crostini).

~~~
darren0
k3s author here. With user namespace and rootless support we are getting
closer to k3s running in crostini, but nobody is really working on this. I was
a big fan of crostini when it came out, but the insistence on LXC and user
namespaces makes it too limited, and I wouldn't recommend it if you work with
containers.

------
AdamGibbins
Be warned, Kind is incredibly heavy weight. It wants over 8GB on my laptop
just to start, and pins cores for 10 minutes.

~~~
nif2ee
Are you sure you're talking about Kind and not minikube, for instance? For me
Kind is the most efficient way of running a real cluster on my machine; a bare
cluster takes merely 600MB of RAM in my case, and creation only takes long the
first time, because it downloads the Docker images.

~~~
tbrock
And people complain about Electron! 600MB for a REST API and a bunch of
networking hoopla over containerd - what a mess Kubernetes is...

