
A story about a Kubernetes migration - marcelo_lebre
https://medium.com/unbabel/unbabel-migrated-to-kubernetes-and-you-wont-believe-what-happened-next-b39f082def1c
======
zedpm
It sounds like they really wanted to switch to K8s and rationalized it. The
cons of their existing solution are minor and easily addressed with correct
use of Ansible, and the massive complexity of K8s is understated.

As an example, they suggest that there's a heavy cognitive load associated
with having devs run some Ansible playbooks, and then argue that to avoid
that, they just have to introduce an entirely new toolchain via workshops and
tutorials. Right.

~~~
halbritt
Regardless of your skepticism, the benefits are real.

Scaling applications in k8s, updating, and keeping configs consistent are a
great deal easier for me than using Ansible or any other config management
tool.
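
To make that concrete, here's a rough sketch of the kind of manifest involved (the app name, image, and port are made up); scaling is a one-line change to replicas (or a kubectl scale call), and config rides along in the same API:

    # Hypothetical Deployment; names and image are placeholders.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-app
    spec:
      replicas: 3              # bump this, or: kubectl scale deploy/my-app --replicas=5
      selector:
        matchLabels:
          app: my-app
      template:
        metadata:
          labels:
            app: my-app
        spec:
          containers:
            - name: my-app
              image: registry.example.com/my-app:1.2.3
              ports:
                - containerPort: 8080
              envFrom:
                - configMapRef:
                    name: my-app-config   # config kept consistent through the same API

Updating is the same motion: change the image tag, apply, and the Deployment handles the rolling update.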

In the end, it's a single platform that one can build tooling against, which lets an
organization abstract away the infrastructure. My team has done exactly that (on top
of k8s). As a result, a developer can spin up a new environment with the click of a
button, deploy whatever code they like, scale the environment, etc., with little to no
training. Those capabilities were a tremendous accelerator for my organization.

Sure, you can build something similar with Ansible on AWS, but then you're married to
AWS, and you have to worry about sizing and the cost of idle instances. In my
experience, it's just a great deal more overhead.

~~~
falcolas
With ECS running on Fargate, idle instances don't exist. Throw in service
autoscaling, and you have a simple scaling solution with no K8s cluster
management required.
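
Roughly the moving parts, as a CloudFormation-style sketch (the names, cluster, task definition, and networking are placeholders, and I'm going from memory on the exact properties, so check the docs):

    # Hypothetical: a Fargate service plus target-tracking autoscaling.
    Service:
      Type: AWS::ECS::Service
      Properties:
        Cluster: my-cluster
        TaskDefinition: my-task-def        # defined elsewhere
        LaunchType: FARGATE
        DesiredCount: 2
        NetworkConfiguration:
          AwsvpcConfiguration:
            Subnets: [subnet-aaaa, subnet-bbbb]
            SecurityGroups: [sg-cccc]

    ScalableTarget:
      Type: AWS::ApplicationAutoScaling::ScalableTarget
      Properties:
        ServiceNamespace: ecs
        ScalableDimension: ecs:service:DesiredCount
        ResourceId: service/my-cluster/my-service
        MinCapacity: 2
        MaxCapacity: 10

    ScalingPolicy:
      Type: AWS::ApplicationAutoScaling::ScalingPolicy
      Properties:
        PolicyName: cpu-target-tracking
        PolicyType: TargetTrackingScaling
        ScalingTargetId: !Ref ScalableTarget
        TargetTrackingScalingPolicyConfiguration:
          PredefinedMetricSpecification:
            PredefinedMetricType: ECSServiceAverageCPUUtilization
          TargetValue: 60.0

No control plane to run, and you only pay for the task CPU/memory you request.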

~~~
halbritt
Given the adoption of Kubernetes, would you honestly recommend that someone
seriously consider ECS?

~~~
zedpm
Yeah, I would recommend it in certain circumstances. EKS has a $150/month base cost
for the control plane, so for small environments it's too expensive. For groups with
existing Docker and Docker Compose experience but no Kubernetes experience, it's
fairly easy to get things working on ECS. If you don't have a whole ops team with the
time to build out all the tooling needed to make k8s easy for devs, then again, k8s
probably isn't the right answer.

Kubernetes is great, but you're not being honest with yourself if you can't
acknowledge the difficulty in going from 0 to production-ready. There is a ton
of complexity and lots of grief on the path to a fully functional k8s
environment.

~~~
halbritt
That last part, I wholly agree with. The learning curve is steep and
difficult.

------
mosselman
I am setting up a Swarm deployment of one of my apps as an experiment, and I must say
the learning curve is hardly there. I tried Kubernetes, but I found that most
resources that try to explain how it works focus too much on GitHub-size deployments.
I just want 2 instances of my app, a database, and Traefik with Let's Encrypt. Does
anyone know of a proper resource for the 'just a tad more than dokku' size?

~~~
shawabawa3
Setting up a Kubernetes cluster itself is probably the biggest hurdle. Also, bear in
mind that if it's just for a single service, the resource overhead of Kubernetes may
be significant, possibly even more than 50%.

I'd strongly recommend using a hosted k8s - either GKE, EKS, or (I believe)
DigitalOcean has just released one.

If you want to use an existing VPS just to test it out, see the docs here:
[https://kubernetes.io/docs/setup/independent/create-cluster-...](https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/)

Once you have the cluster running, kompose[1] might be a nice tool if you're used to
docker-compose. However, I'd say just use it as a guideline - you'll probably want to
rewrite most of what it generates at one point or another.

[1]
[https://github.com/kubernetes/kompose](https://github.com/kubernetes/kompose)
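
For the "2 instances of my app plus a database" case upthread, the compose file you feed it can stay tiny (a sketch with made-up names; kompose convert should turn deploy.replicas into the Deployment's replica count, but verify what it emits):

    # Hypothetical docker-compose.yml as kompose input.
    version: "3"
    services:
      app:
        image: registry.example.com/my-app:latest
        ports:
          - "8080:8080"
        deploy:
          replicas: 2
        environment:
          DATABASE_URL: postgres://app:secret@db:5432/app
      db:
        image: postgres:11
        volumes:
          - db-data:/var/lib/postgresql/data
    volumes:
      db-data: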

~~~
PedroArvela
If you take the managed Kubernetes route, I really recommend GKE; everything is taken
care of for you.

On the other hand, EKS offers what I'd call a "managed Kubernetes master"; everything
else is still pretty manual.

------
tekkk
I hope the original title of the story was intended as sarcasm: "Unbabel migrated to
Kubernetes and you won’t believe what happened next!"

So they managed to consolidate their infrastructure around Kubernetes and Google
Cloud, which made managing their servers easier and faster? I wonder how much actual
money they saved, but I guess it will pay off for them in the long run.

I've been dabbling with Kubernetes for some time now, and good God, it can be a _bit_
complicated. The time required to become well-versed in Kubernetes is a hefty
investment, and not one that's right for every organization. There are lots of small
things that can drive up your blood pressure while you figure them out. Were it
simpler, I would be much more inclined to use it, but for now it sits in the "learning
for funsies" category. I feel the people who developed k8s are more of the theoretical
sort and not the regular-joe-dummy kind like me.

~~~
falcolas
Saying that Kubernetes is a bit complicated seems like saying that water can
be a bit wet.

Even their documentation can't keep up. And with a release cycle of 3 months and a
deprecation cycle of 6 months, you need a team dedicated to keeping up with the K8s
state of the art; much of the knowledge you picked up a year ago is at best stale, and
at worst wrong.

Sure, it makes setting up a set of containers and keeping them running simple. But
that's never really been that hard.

To paraphrase an article from a few weeks ago:

"We made microservices to address the problems with monoliths."

"We made containers to address the problems with microservices."

"We made Kubernetes to address the problems with containers."

~~~
eeZah7Ux
"Now we have a distributed monolith that requires 2x less developers to build
and 5x more system engineers to deploy"

~~~
falcolas
Well, to be fair, all of those developers probably found themselves filling a
systems engineer role "because the product developers are best equipped to
handle the running and support of their own applications".

------
TeeWEE
For me, Kubernetes is also a breeze. There is some learning curve because we adopted
Helm, Tiller, Grafana, and Prometheus right from the start. But the kubectl command is
easy to work with, and the k8s yaml files are really a breath of fresh air compared to
Ansible playbooks.

We're not in production yet, but we're moving soon.

~~~
shawabawa3
> k8s yaml files are really a breath of fresh air compared to Ansible
> playbooks

Hah. I'm a huge kubernetes fan but not sure I can agree here.

k8s yaml files are the most verbose and spammy things imaginable.

granted, ansible playbooks _can_ be horrific, but i'd say that's more down to
the authors of the playbook than ansible itself.

~~~
PedroArvela
This is definitely a fair point.

Kubernetes resource definitions are verbose, but you can expect them to always be
about that verbose and nothing more.

Ansible playbooks, on the other hand, really depend on the author; they can be either
works of art or abominations.
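
For what it's worth, a disciplined playbook covering the same ground as a basic Deployment can stay small; a sketch, assuming the docker_container module and made-up names:

    # Hypothetical playbook: run one app container on a host group.
    - hosts: app_servers
      become: true
      tasks:
        - name: Run my-app container
          docker_container:
            name: my-app
            image: registry.example.com/my-app:1.2.3
            state: started
            restart_policy: always
            published_ports:
              - "8080:8080"
            env:
              DATABASE_URL: "{{ database_url }}"

The catch is exactly the point above: nothing forces the next author to keep it that tidy.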

~~~
geerlingguy
I imagine that as Kubernetes becomes more popular there will be a lot more of these
abominations around... a similar thing has happened in popular programming languages:
as they are more widely adopted, the early adopters who cared about quality and
correctness become a smaller share, and new devs who do 'all the wrong things' become
much more prevalent.

It's more of an issue with your organization's (or, in some cases, personal) process
if you allow abysmal code to get checked into your codebase :) Even Ansible has
easy-to-integrate linting and testing tools.

------
zerogvt
It seems that k8s has won the deployment race by and large. I see a lot of
success stories around (I'm hearing nice things from the DevOps teams in my
organization as well). Yet I'm curious to hear a few cases where things did
not pan out quite right.

Note: The 5-15s DNS problem seems a pretty serious one. Weird that it didn't
get more publicity (and a proper fix).

~~~
zimbatm
There are a lot of things that can go wrong with K8s, but there is always a way to fix
them. For example, a common mistake is to forget to allocate limits on pods, which
then brings the worker node to capacity. I think the failure scenario is soft; it just
costs more engineering time: figuring out how to upgrade the cluster to the new
version, finding out why the network overlay isn't performing as expected, debugging
the external resource that isn't being allocated properly, configuring RBAC properly,
playing with various resource deployment strategies, tuning how pods are moved during
a node auto-scaling event... The nice thing is that at the end it gives a unified API
for all of these things; it forces some consistency in the infrastructure.
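
To make the limits example concrete, the fix is a few lines on each container spec (the numbers here are placeholders to tune per workload):

    # Hypothetical container spec fragment; without this, one pod can starve the node.
    containers:
      - name: my-app
        image: registry.example.com/my-app:1.2.3
        resources:
          requests:
            cpu: 100m        # what the scheduler reserves on the node
            memory: 128Mi
          limits:
            cpu: 500m        # hard ceiling enforced at runtime
            memory: 256Mi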

My personal rule of thumb: unless the client specifically needs auto-scaling, or has
more than 100 services to run and a 5-person DevOps team, just use Terraform.

For a small number of servers, a better strategy is to have a base image with Docker
and monitoring, and to use Terraform to deploy the infrastructure. CI can then use
docker-compose to deploy the containers onto the hosts directly. This approach is much
more stable and doesn't require learning as many things as K8s. It can be run by a
one-person DevOps team without breaking a sweat.

------
geekuillaume
I've been working with Kubernetes recently, and the learning curve is quite steep. I
hope the team will improve kubectl to make it more user-friendly (error messages are
hard for beginners to understand).

A lot of cloud providers now have a way to easily deploy and manage a k8s cluster on
their servers, but I can't find a tool that helps with deploying a basic service,
something like dokku but on Kubernetes.

[http://dokku.viewdocs.io/dokku/](http://dokku.viewdocs.io/dokku/)

~~~
halbritt
Have you looked at helm?

------
beat
If you find Kubernetes a headache at first, consider looking at OpenShift.
It's Red Hat's wrapper for Kubernetes, and does make some things easier.

~~~
Carpetsmoker
I'm not sure if we've got enough wrappers yet. I think we want several more!

------
sulam
"The amount of instances multiplying was also clogging up our DevOps team."

Let's pause a moment and appreciate that if you have a DevOps team, you're not
doing DevOps.

------
Fishkins
Since they mention it a couple of times in the article, how do other folks handle auth
for their k8s dashboards? I'm trying to figure out the best approach right now.

~~~
colek42
You could build an authorization proxy that creates a token with the Kube API server
and sets the Authorization header. This probably exists already, but a project I
worked on, [https://github.com/boxboat/okta-nginx](https://github.com/boxboat/okta-nginx),
might be a good starting point.

------
OJFord
A story about a _migration to_ Kubernetes.

(As it is, I expected it to be about migrating k8s versions. (Still much better than
OP though...))

------
user5994461
_Facepalm_. This entire blog post reads like they never figured out how to deploy to
more than one server with Ansible, even though Ansible is made for exactly that.

