
Convergence to Kubernetes - kiyanwang
https://medium.com/@pingles/convergence-to-kubernetes-137ffa7ea2bc
======
hardwaresofton
Can anyone who used Docker Swarm/Mesos/Nomad and then switched to Kubernetes
comment on anything that was done better by Swarm/Mesos/Nomad?

I invested in Kubernetes early and always meant to give the others a try (so I
could at least know the differences), but never got a chance to.

~~~
minieggs
I would love to see a comparison between Docker Swarm and Kubernetes. From
talking to peeps I've gathered that Kubernetes is better for a large number of
nodes? I self-host all my side projects with Docker Swarm and it's been so
good I haven't needed to look into other container management solutions (but
I've only got eight nodes).

~~~
piva00
At the moment I'd say that Kubernetes is only worth the effort if you have a
bunch of idle capacity in your nodes and at least some tens of machines.

The setup can get a bit complex quite early and won't be worth the effort to
manage 8 nodes. When you begin to scale to around 20 nodes running a bunch of
different workloads (batch jobs, web services, etc.), can avoid provisioning
on the application side, and so on, then k8s begins to shine and pay back the
investment.

~~~
brianwawok
Why should node count matter?

Two clicks to get 1-1000 nodes on GKE. The work is to learn the YAML syntax
and the way to deploy to GKE... but most apps need you to learn something
about how they will be deployed (be it how to use Ansible to deploy, how to
set up on k8s, or how to use serverless). And you need to do this whether you
have 1 or 1000 nodes, so you may as well just do it once...
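For what it's worth, the YAML in question is small for a basic app. A minimal Deployment manifest looks something like this (the app name and image are made up for illustration, not from the thread):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                  # hypothetical app name
spec:
  replicas: 3                # desired number of pods
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: gcr.io/my-project/web:1.0   # hypothetical image
        ports:
        - containerPort: 8080
```

`kubectl apply -f deployment.yaml` pushes it to the cluster, and the same file works on GKE, on-prem, or anywhere else.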

------
ordinaryperson
Dumb question from someone who doesn't use Kubernetes (or Docker) in
production: don't routine security updates mean you're constantly rebuilding
and redeploying these images? And if so, how is that more efficient than just
using Puppet / Chef / Ansible and a 'real' server?

~~~
lkrubner
Docker is a dangerous gamble and you can get more of an automated build
system, with less devops effort, from Terraform and Packer. Avoid containers
and stick with real servers “baked” by Packer:

[http://www.smashcompany.com/technology/docker-is-a-dangerous...](http://www.smashcompany.com/technology/docker-is-a-dangerous-gamble-which-we-will-regret)

~~~
MPSimmons
There's no reason you can't build containers in an automated, routine fashion
and use those to run your applications and services. You don't have to run
containers like joeblow/randomservice - start with Alpine from the Alpine
maintainers (or CentOS or whatever) and write a custom Dockerfile to build
your stuff.
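A custom Dockerfile along those lines can be tiny. A sketch (the binary path, package, and service name are illustrative, not from the thread):

```dockerfile
# Start from the official Alpine base image, as the comment suggests
FROM alpine:3.8

# Install only what the service actually needs (package is illustrative)
RUN apk add --no-cache ca-certificates

# Copy a binary built outside the image (hypothetical path and name)
COPY ./bin/myservice /usr/local/bin/myservice

# Don't run as root
USER nobody

ENTRYPOINT ["/usr/local/bin/myservice"]
```

Rebuilding this on a schedule (or on every base-image update) is exactly the "automated, routine fashion" the comment describes.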

------
101km
"The result was a system composed of many wavefronts of change: some systems
were automated with Puppet, some with Terraform, some used ECS and others used
straight EC2.

In 2012 we were proud to have an architecture that could evolve so frequently,
letting us experiment continually, discovering what worked and doing more of
it.

In 2017, however, we finally recognised that things had changed.

AWS is significantly more complex today than when we started using it. It
provides an incredible amount of choice and power but not without cost. Any
team that interacts with EC2 today must now navigate decisions on VPCs,
networking and many, many more."

Of course, _this time_, it is different.

~~~
mmt
I do certainly wonder if the ever-increasing levels of complexity in the
layers of abstraction will backfire in some way soon.

It seems the trend has accelerated recently.

~~~
bonesss
Luckily there's an easy fix for that: adding more layers of abstraction :D

~~~
mmt
I'll bet you have an "easy" fix for Social Security, too :)

Joking aside, I certainly understand the benefits of abstraction. As someone
always points out in any discussion about ORMs, for example, abstractions are
leaky. Whenever one has to learn about the inner workings of what the
abstraction is hiding, some of that ease evaporates.

------
polskibus
Is there a recommended way to handle database migrations in kubernetes? Is
there a best practice or a tool for that?

~~~
MPSimmons
What do you mean when you say 'database migration'?

In my mind, there could be a few things:

1) Migrating the database from server A to server B where the server is on
Kubernetes

A1) Don't do this. Don't run a (traditional) database server in Kubernetes.
Sure, you can do this - there is volume support for all kinds of things - but
everything I've ever heard and read has told me that containers aren't a good
fit for this type of long-term service that gets refreshed infrequently.

2) Migrating the database from server A to server B where you have to tell all
of the clients what the database is

A2) This should probably be done via service discovery or even just by a short
TTL on a CNAME in DNS.

3) Something else?
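The CNAME approach in A2 can even be modeled inside Kubernetes itself: an ExternalName Service gives clients a stable in-cluster name while the record behind it changes. A sketch, with a hypothetical hostname:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: db             # clients always connect to "db"
spec:
  type: ExternalName
  # Resolves as a CNAME; point it at the new server during a migration
  externalName: db-new.example.com   # hypothetical database host
```

Updating `externalName` repoints every client without redeploying anything.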

~~~
subway
Database migrations typically refer to schema/ddl changes to an application
along side deployment of a new version.

 _Everything I've ever heard and read has told me that containers aren't a
good fit for this type of long-term service that gets refreshed infrequently._

That was a safe rule of thumb 5 years ago. Since then, support for scheduling
a persistent volume alongside your long-running container has become a
bog-standard, boring feature.
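For the schema-migration sense of the term, one common pattern is a Kubernetes Job run before the Deployment is updated. A sketch (the image, Job name, and command are hypothetical; the command shown is Django's, substitute Flyway, Liquibase, etc.):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: myapp-migrate-v42        # hypothetical; version the name per release
spec:
  backoffLimit: 2                # retry a couple of times, then give up
  template:
    spec:
      restartPolicy: Never       # don't loop forever on a broken migration
      containers:
      - name: migrate
        image: myapp:v42         # hypothetical app image with the migration tool
        command: ["./manage.py", "migrate"]
```

Because a Job runs to completion exactly once (modulo retries), the rollout can wait on it before the new app version starts serving.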

------
eecc
The steady-state architecture reminds me of this HN story about the Lava Layer
anti-pattern:

[https://news.ycombinator.com/item?id=8772641](https://news.ycombinator.com/item?id=8772641)

------
ben0x539
Heads up, this article is from the future, apparently:

> In late 2017 all teams ran all their own AWS infrastructure. [...] In a
> little over a year that’s changed for all teams.

------
ajross
> We have close to 30 teams that run some or all of their workloads on our
> clusters. Approximately 70% of all HTTP traffic we serve is generated from
> applications within our Kubernetes clusters.

Sounds big. But then per wikipedia:

> uSwitch.com [...] allows consumers to compare prices for a range of energy,
> personal finance, insurance and communications services.

And:

> On 30 April 2015, the property website firm Zoopla agreed to purchase
> uSwitch from LDC for £160 million

So... a low bandwidth business (we're hardly talking Netflix here!) doing
maybe, what, $10M in revenue annually and not growing fast enough to justify
venture investment or IPO funding (they were a private acquisition!)...

Seriously, I'm sure they like it. But do they really, truly need Kubernetes?
This really sounds like the kind of scale that can be achieved with 2-3
hand-managed servers, or maybe twice that number of AWS boxen.

~~~
marcc
Should any business that has SLAs and needs to be reliable ever rely on “2-3
hand-managed servers”? No.

Kubernetes isn’t only about scale. It also provides rolling upgrades and
rollbacks. And failover. And DNS based service discovery. And there’s more.
You can find solutions to these without Kubernetes but a lot of Kubernetes use
is to get these, not simply for scale issues.
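The rolling-upgrade piece, for instance, is just a few lines of a Deployment spec (fragment only; the values are illustrative defaults, not from the thread):

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # never drop below desired capacity during a roll
      maxSurge: 1         # add one extra pod at a time while replacing
```

And a rollback is a single command: `kubectl rollout undo deployment/myapp`.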

~~~
bonesss
> And there’s more

Being able to reliably create your entire infrastructure on another platform
in 'minutes', for example. Not to mention applications architected from the
ground up around cloud-friendly and scale-friendly primitives...

2-3 hand-managed servers are great, but they will absolutely warp your
application and slowly accrue configuration cruft. That's not terrible, but
for many Real World issues, portability and the freedom to fire up wholly
valid test environments are game-changers. Even the acquisition stories are
nicer.

