
Plain k8s has a fearsome reputation for being complex to deploy, which I don't think is quite deserved. It isn't totally straightforward, but the documentation tends to make it sound a bit worse than it actually is.

I run a couple of small clusters and my Ansible script for installing them is pretty much:

  * Set up the base system. Set up firewall. Add k8s repo. Keep back kubelet & kubeadm.
  * Install and configure docker.
  * On one node, run kubeadm init. Capture the output.
  * Install flannel networking.
  * On the other nodes, run the join command that is printed out by kubeadm init.
Running in a multi-master setup requires an extra argument to kubeadm init (see the sketch below). There are a couple of other bits of faffing about to get metrics working, but the documentation covers that pretty clearly.
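
For reference, the non-Ansible version of those steps looks roughly like this on Debian/Ubuntu (the load-balancer endpoint, pod CIDR and flannel manifest URL are placeholders from memory, so check them against the current docs):

  # Hold kubelet/kubeadm so routine upgrades can't bump them
  sudo apt-mark hold kubelet kubeadm

  # On the first control-plane node. --pod-network-cidr matches
  # flannel's default; --control-plane-endpoint is the extra
  # argument a multi-master setup needs.
  sudo kubeadm init --pod-network-cidr=10.244.0.0/16 \
      --control-plane-endpoint=lb.example.com:6443

  # Install flannel (manifest URL varies by release)
  kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml

  # On each remaining node, paste the join command that
  # kubeadm init printed, e.g.:
  sudo kubeadm join lb.example.com:6443 --token <token> \
      --discovery-token-ca-cert-hash sha256:<hash>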

I'm definitely not knocking k3s/microk8s; they're a great and quick way to experiment with Kubernetes (and so is GKE).




I remember about 5 years ago I tried to deploy it on CoreOS using the available documentation and literally couldn't get it working.

I haven't done a manual deployment since. I hope it has gotten significantly better, and I may be an idiot, but the reputation isn't fully undeserved.

The problem back then was also that this was usually the first thing you had to do to try it out. Doing a complicated deployment without knowing much about it doesn't make it any easier.


Same here. I just wanted to play with it for my toy projects and personal services, so I didn't really push a whole lot, but it just felt like there were too many moving parts to figure out. I didn't need autoscaling or most of the advanced features of k8s, so I just went back to my libvirt-based set of scripts.


I run Kubernetes on a home server, but it took me a couple of weeks of testing and trial and error to arrive at a setup I was happy with, and I already had experience with K8s in the cloud. At the time I was stuck without a work laptop, so I had time to self-educate, but normally I wouldn't have that kind of time to sink in.


Deploying a Kubernetes cluster isn't really too complex, and it doesn't even take that long. It's the long-term maintenance that concerns me.


This concerns me too. What should I be worrying about? The main maintenance problem that I have experienced so far is that automatic major version updates can break the cluster (which is why I now keep back those packages). Are there other gotchas that I'm likely to experience?
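
(For context, "keeping back" just means a package hold on my side; when I do want to upgrade, the deliberate path is roughly the following, assuming kubeadm on an apt-based distro, with the version numbers as placeholders:)

  # Release the hold and pull the specific kubeadm version
  sudo apt-mark unhold kubelet kubeadm
  sudo apt-get update && sudo apt-get install -y kubeadm=1.XX.X-00
  # Let kubeadm sanity-check the upgrade before applying it
  sudo kubeadm upgrade plan
  sudo kubeadm upgrade apply v1.XX.X
  # Re-apply the hold afterwards
  sudo apt-mark hold kubelet kubeadm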


Version updates don't normally break the cluster itself, in my experience, but they might break things like Helm charts (typically when an upgrade removes a deprecated API version that a chart still uses).
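
A quick, if crude, way to check in advance is to render a chart and dry-run it against the target API server (Helm 3 syntax; the release and chart names here are hypothetical):

  # List every API group/version the cluster currently serves
  kubectl api-versions
  # Render the chart and let the server validate the manifests
  helm template myrelease ./mychart | kubectl apply --dry-run=server -f -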

The thing that concerns me the most is managing the internal certificates and debugging networking issues.


> managing the internal certificates

I haven't yet set it up, but https://github.com/kontena/kubelet-rubber-stamp is on my list to look at.
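
For the kubeadm-managed control-plane certs (a separate concern from the kubelet serving certs that project deals with), kubeadm itself has some helpers; roughly, depending on your version these may still live under kubeadm alpha:

  # Show expiry dates for all kubeadm-managed certificates
  kubeadm certs check-expiration
  # Renew the lot; control-plane components need a restart to
  # pick up the new certs
  kubeadm certs renew all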

> debugging networking issues

In this regard, I have had much more success with flannel than with calico. The BGP part of calico was relatively easy to get working, but the iptables part had issues in my set-up and I couldn't understand how to begin debugging them.
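
For what it's worth, the starting points I've since been pointed at are roughly these (the kube-proxy chain names are standard; the pod label selector varies by calico manifest):

  # Inspect the NAT chain kube-proxy programs for Services
  sudo iptables -t nat -L KUBE-SERVICES -n | head -n 20
  # Quick sanity check: count the per-service chains
  sudo iptables-save -t nat | grep -c 'KUBE-SVC'
  # Tail the CNI agent's logs on the affected node
  kubectl -n kube-system logs -l k8s-app=calico-node --tail=50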



