
Thoughts on Kubernetes - grey-area
https://blog.nelhage.com/post/kubernetes/
======
Veratyr
I too played around with k8s, but my issues were even more basic: it's darn
near impossible to deploy it on a platform that deviates from their
expectations.

The docs seem to expect that:

- If you're not letting managed services do most of the work for you, you're
running on one of CoreOS, Ubuntu, or a RHEL derivative

- You know all about "CNI" and how it magically makes everything work

- You know that the default "security" is none at all

Deployment is just so much harder than it should be. Fundamentally (as I
discovered far too late in the process), Kubernetes is composed of roughly
the following services: kube-apiserver, kubelet, kube-proxy, kube-scheduler,
and kube-controller-manager. The other dependencies are a CA infrastructure
for certificate-based authentication, etcd, a container runtime (rkt or
Docker), and CNI.

The steps for setting up a simple installation should be able to fit on a
single page. If it weren't for a bug crippling my cluster, I would have done
it myself (it seemed to be running but a bug with cadvisor's disk space
detection messed it up to the point where it was useless).

To get a handle on what I mean, have a look at the docs for Ceph, another
reasonably complex distributed system. Here are the manual installation docs:
[http://docs.ceph.com/docs/master/install/manual-
deployment/](http://docs.ceph.com/docs/master/install/manual-deployment/).
It's a list of commands to run and a basic idea of what the config files
should look like. Following that guide actually results in a running Ceph
cluster. Now have a look at the manual installation docs for Kubernetes:
[https://kubernetes.io/docs/getting-started-
guides/scratch/](https://kubernetes.io/docs/getting-started-guides/scratch/).
It's a bunch of links to other parts of the docs, pretty much no practical
guidance on how to _actually_ set up the cluster and the few commands that
have been suggested are mostly outdated (for example "--configure-cbr0"
doesn't exist on kubelet anymore). Following it _can_ result in a working
cluster but only with _a lot_ of additional work and study (it doesn't give
you anything on networking, which is essential).

~~~
gebi
You'd use either the Ansible or Salt recipes to install k8s. Everything is
nicely documented there.

There are a few very nasty pitfalls on the way to your k8s install in
production though...

We deploy kubernetes on hundreds of nodes in multiple locations, and I have
to say I've never ever looked at any install docs. ACK, they are basically
useless, but overall I find the documentation quite good (except that it's
not up to date with the code in many places).

What's severely missing is the overall picture!

But let's not stop there: if you want to have k8s running properly in
production, you practically have to build all the containers yourself,
otherwise some shitty container from Docker Hub will bring your nodes down
one after another (hint: read-only containers).

------
majewsky
> I was unable to find a way to force kubernetes to do a “no-op” redeploy just
> to get it to re-pull a tag.

A deployment redeploys (i.e. creates a new ReplicaSet and rolling-updates to
it) whenever its YAML changes. So the common way to do a "no-op" redeploy
(assuming `imagePullPolicy: Always` on the containers) is to add an annotation
to the PodSpec that has no actual meaning, but which you can modify to trigger
the creation of and migration to a new ReplicaSet. For example, you could put
in a timestamp, or simply increment a counter every time you redeploy.
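
Concretely, something like the following works; the deployment name and the
annotation key are arbitrary examples:

    kubectl patch deployment myapp -p \
      '{"spec":{"template":{"metadata":{"annotations":{"redeploy-date":"2017-02-21T09:45:00Z"}}}}}'

Because the PodSpec changed, Kubernetes creates a new ReplicaSet and rolls
over to it, and with `imagePullPolicy: Always` every new pod re-pulls the tag.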

I would discourage usage of unchanging image tags, however. When you have
immutable image versions, you will have a much easier time rolling back to a
previous release if the new release breaks. At $work, we use human-readable
timestamps most of the time, e.g. `201702210945`, that signify when the image
was built.
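
With immutable tags, rolling forward and back is then just (deployment,
container and image names are examples):

    # deploy a specific build
    kubectl set image deployment/myapp myapp=registry.example.com/myapp:201702210945
    # roll back to the previous ReplicaSet if it breaks
    kubectl rollout undo deployment/myapp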

> The final question I don’t have a satisfying answer for is how to deploy
> configuration data into my docker images.

Did the author look at
[https://github.com/kubernetes/helm](https://github.com/kubernetes/helm)?
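
For what it's worth, Helm templates the manifests and lets you feed
configuration values in at deploy time; a hedged sketch with a made-up chart
and value name (Helm 2 syntax):

    # install a local chart, overriding one config value
    helm install ./mychart --name myapp --set config.logLevel=debug
    # change the value and roll the release forward
    helm upgrade myapp ./mychart --set config.logLevel=info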

> Kubernetes secrets are stored in plaintext in etcd, which is fine for many
> applications but would probably scare me at a certain scale.

Yeah, that's indeed a problem. The best you can do right now is wrap access to
Kubernetes and etcd in as many layers of firewall and two-factored SSH and
whatnot as you feel are necessary.
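
To see why this matters: the "encoding" on a secret is base64, not
encryption, so anyone with API (or etcd) read access gets the value back.
Names and values below are made up:

    kubectl create secret generic db-creds --from-literal=password=hunter2
    kubectl get secret db-creds -o jsonpath='{.data.password}' | base64 -d
    # prints: hunter2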

NOTE: Answer was updated multiple times while I read through the submission.

------
tzury
I can relate to that.

As we run > 500 instances on AWS and GCP combined, we gave K8S a try to see
if we could get a shorter+simpler+cheaper way to run our production.

K8S on GKE (Google Container Engine) turned out not to be a good fit for our
needs at the moment. Instead, we found ways to get more efficient with the
platform's provisioning API and gained everything we wanted from K8S in a
far more mature environment.

------
madarcho
> "However, I have no idea how to build such trees of images using
> Dockerfiles."

Wait, isn't the layering one of the whole points of Docker? You build the
base image, then use its tag in the other three Dockerfiles, which then
build on top of it.
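
A minimal sketch of that pattern (image names are placeholders):

    # base/Dockerfile -- the shared layer, built and tagged first:
    #   docker build -t myorg/base:1.0 base/
    FROM debian:jessie
    RUN apt-get update && apt-get install -y python

    # service-a/Dockerfile -- builds on the tagged base
    FROM myorg/base:1.0
    COPY . /app
    CMD ["python", "/app/main.py"]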

The author does then mention issues with tagging, and I can relate to that
more. Our team ended up with a kludgy mess of tags, and we all had some form
of a `docker rmi` command aliased for quick use every now and then. The build
server had a tough time of its own.

------
sheeshkebab
k8s is a pain on AWS. It's much nicer on GCP, but it looks like even there
it's got issues.

Still, it's great as an open source project - it's fun to finally have a
nice distributed computing platform that works anywhere. Docker Swarm is
great too.

~~~
tuananh
I'm just a developer and I found it very easy to deploy a k8s cluster with
kops.
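
For anyone curious, the happy path with kops looks roughly like this (cluster
name, state bucket and zone are placeholders):

    export KOPS_STATE_STORE=s3://my-kops-state-bucket
    kops create cluster --name=k8s.example.com --zones=us-west-2a
    kops update cluster k8s.example.com --yes   # actually create the AWS resources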

