
Ask HN: What are your biggest pain points working with Kubernetes? - kodebrew
I've worked with Kubernetes at a small startup and we've had some challenges with it. Wiring up a deployment pipeline and figuring out how to work effectively with Helm took us some time. We also had to invest far more than we expected in teaching Kubernetes concepts to other developers. In hindsight, it was probably overkill for our use case.

What are some of the biggest challenges you've had working with Kubernetes? How did you solve them?
======
sdrinf
A minor (but for us the biggest) pain point: CI pipeline tooling in a
microservice environment. This breaks down into a few parts:

* CI tooling (i.e. for a git push to master to be automatically deployed to Kubernetes) requires something like git + Travis, which is somewhat expensive for personal development, or manual devops work, which is expensive work-hours wise

* setup time for each microservice is just on the boundary where it happens infrequently enough not to get scripted, but each new one takes an hour to set up manually

* CI deployment time, e.g. for Docker builds on Travis, can take 3-5 minutes, which is not great if prod breaks
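
A "push to master deploys to the cluster" pipeline like the one described in the first point can be sketched in a few lines of CI config. This is a hypothetical Travis-style fragment, not anyone's actual setup; the registry, image, and Deployment names are made up:

```yaml
# .travis.yml (sketch) -- build, push, then roll the Deployment to the new tag
services:
  - docker
script:
  - docker build -t registry.example.com/myapp:$TRAVIS_COMMIT .
  - docker push registry.example.com/myapp:$TRAVIS_COMMIT
deploy:
  provider: script
  script: kubectl set image deployment/myapp myapp=registry.example.com/myapp:$TRAVIS_COMMIT
  on:
    branch: master
```

This assumes the CI worker already has registry credentials and a kubeconfig for the target cluster, which is exactly the per-service setup cost the next bullet complains about.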

~~~
KohgnaK
* CI tooling (i.e. for a git push to master to be automatically deployed to Kubernetes) requires something like git + Travis, which is somewhat expensive for personal development, or manual devops work, which is expensive work-hours wise

We're using GitLab, which can integrate with and manage one Kubernetes cluster
under the free GitLab CE licence. We did only the minimal integration (as in,
we're still deploying our own Tiller) to be able to use kubectl and helm from
within gitlab-ci.yml scripts. It works quite nicely, especially for testing
stuff in a personal capacity or in dev/staging.
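
As a rough illustration (the stage, chart path, and service name are invented, and it assumes the runner already has cluster credentials and a deployed Tiller, per the setup above), the gitlab-ci.yml side of this can be as small as:

```yaml
# .gitlab-ci.yml (sketch) -- deploy stage driving helm 2.x from CI
deploy:
  stage: deploy
  image: alpine/helm:2.14.0
  script:
    - helm upgrade --install myservice ./chart --set image.tag=$CI_COMMIT_SHORT_SHA
  environment:
    name: staging
  only:
    - master
```

`helm upgrade --install` makes the job idempotent: the first run installs the release, later runs just roll it forward.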

* CI deployment time, e.g. for Docker builds on Travis, can take 3-5 minutes, which is not great if prod breaks

We had the same issue. We solved it by building new images on top of existing
ones to reduce build time, and by having sensible image tagging so we always
have a rollback at hand in GitLab's registry without rebuilding anything.
This has proved useful more than once when dealing with production
systems.
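
That scheme can be sketched like this (registry path and tag format are hypothetical); the key idea is that every build reuses the previous image's layers and pushes an immutable tag, so rollback is just pointing the Deployment back at an old one:

```shell
# Reuse layers from the last published image to speed up the build,
# then push both an immutable per-commit tag and a moving "latest" tag.
docker pull registry.example.com/myapp:latest || true
docker build --cache-from registry.example.com/myapp:latest \
  -t registry.example.com/myapp:$CI_COMMIT_SHA \
  -t registry.example.com/myapp:latest .
docker push registry.example.com/myapp:$CI_COMMIT_SHA
docker push registry.example.com/myapp:latest

# Rollback later, with no rebuild at all:
kubectl set image deployment/myapp myapp=registry.example.com/myapp:<old-sha>
```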

* setup time for each microservice is just on the boundary where it happens infrequently enough not to get scripted, but each new one takes an hour to set up manually

That's also something we struggled with, so we extended the time we allocated
to building the Helm charts and so on, but still no silver bullet. The only
semi-effective countermeasure we found is an internal Helm boilerplate of some
sort to base all the projects on. That helps because our projects are close to
one another and use "preselected" technologies. But yeah, I feel the pain on
that one too.
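
One cheap way to bootstrap such a boilerplate (a sketch; the chart name is made up) is to start from Helm's own scaffold and trim it down to the handful of values each new service actually varies on:

```shell
helm create service-boilerplate   # generates Chart.yaml, values.yaml, templates/
```

with a pared-back values.yaml like:

```yaml
# values.yaml (sketch) -- the only knobs a new microservice should need to touch
image:
  repository: registry.example.com/CHANGEME
  tag: latest
replicaCount: 1
service:
  port: 8080
```

New services then copy the chart and edit three or four lines instead of an hour of manual setup.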

------
leandot
My recommendation is to avoid introducing Helm at the beginning; it adds
another layer of complexity that feels too abstract, and k8s YAML is confusing
enough for new teams. Helm also makes deployments depend on local
configuration, which can result in confusing, non-reproducible situations.
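
For a new team, the raw-YAML starting point is a single Deployment manifest like the following (names and image are placeholders), applied with `kubectl apply -f deployment.yaml` — every field is explicit and checked into git, so there is no local templating state to diverge:

```yaml
# deployment.yaml (sketch)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: registry.example.com/myapp:1.0.0
          ports:
            - containerPort: 8080
```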

------
quickthrower2
Probably that there is a lot to learn and it feels like a full-time role, so
looking after a small cluster, say 2 hours/month, I feel like I'm hacking to
solve problems and not getting the deep understanding a full-time sysadmin
would. Using a cloud-provided solution plus Terraform and CI helps a lot,
though. I have a clone test cluster to play around with before I deploy to
production.

I went from never having run docker build to maintaining the Docker/k8s VMs
running the nodes and terraforming the cloud load balancer, so quite a learning
curve. It took me ages to realise that to configure the nginx ingress I needed
to work with the Docker image and not try to set it up via the Terraform k8s
config. That was a painful lesson: not having those mental models! Doing a free
online course on k8s helped a lot; after that I reached critical mass of
knowledge and it got a bit easier.
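
For anyone hitting the same wall: the nginx ingress controller reads most of its nginx-level settings from a ConfigMap it watches inside the cluster (plus per-route annotations), not from whatever tool provisioned the cluster. A hedged sketch, assuming the community ingress-nginx controller with the ConfigMap name and namespace from its classic install manifests:

```yaml
# ConfigMap consumed by the ingress-nginx controller
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
data:
  proxy-body-size: "16m"
  use-forwarded-headers: "true"
```

Terraform can still own this resource, but it has to create the ConfigMap the controller expects rather than configure nginx directly.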

~~~
kodebrew
I found the same - I had to actually launch my own cluster and build my own
terraform scripts from scratch to really understand how it works. Basically
this:

https://github.com/kelseyhightower/kubernetes-the-hard-way

Really helped me, especially going through it multiple times.

------
p0d
Grumpy old sysadmin says that datacenters used to cause downtime; now it's
dodgy deployments and Kubernetes abusing DNS, as well as requiring more
certificates than a small ISP.

------
alexnewman
Teaching developers who already knew linux and ssh that learning kubectl was
worth the investment.

~~~
kodebrew
I think we had this issue as well. I kind of hoped turning kubectl over to the
team would be a lightbulb moment and everyone would just get it. It really
wasn't. We did a lot of training, but I think we mostly ended up writing our
own tooling wrapping everything.
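
As a flavour of what that wrapping can look like (entirely hypothetical script and env-var names; the point is hiding namespaces and flags behind verbs the team already thinks in):

```shell
#!/bin/sh
# ktool -- tiny kubectl wrapper: ktool logs|shell|status <service>
NS="${TEAM_NAMESPACE:-default}"
case "$1" in
  logs)   kubectl logs "deploy/$2" -n "$NS" --tail=100 -f ;;
  shell)  kubectl exec -it "deploy/$2" -n "$NS" -- sh ;;
  status) kubectl rollout status "deploy/$2" -n "$NS" ;;
  *)      echo "usage: ktool logs|shell|status <service>" >&2; exit 1 ;;
esac
```

Addressing pods via `deploy/<name>` lets kubectl pick a pod itself, which removes the most common stumbling block (copy-pasting generated pod names) for people new to the tool.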

