
Kubernetes Is in Hospice - fanf2
https://www.linkedin.com/pulse/kubernetes-hospice-ian-eyberg
======
cfors
Yes, Kubernetes is a surefire way to shoot yourself in the foot. To be able to
use it effectively, you essentially need a dedicated team that knows the ins
and outs of a rather complex API as well as deep networking, Linux and
distributed system knowledge.

However this article takes a bunch of pot shots at Kubernetes that I don't
necessarily think are correct. First off, not everyone needs Kubernetes [0].

Second, migrating a company with as much software as Google off of Borg is not
a trivial process. They even failed once before when they tried to build its
supposed successor, Omega.

Third, the argument about "open-sourceable" is a strawman argument that
doesn't make sense.

As for the attack on container security, I don't understand the argument that
was necessarily made there but at this point I'm not sure it deserved a
rebuttal because all software is inherently insecure.

> You are absolutely deluded, if not stupid, if you think that a worldwide
> collection of software engineers who can't write operating systems or
> applications without security holes, can then turn around and suddenly write
> virtualization layers without security holes.

\- Theo de Raadt (OpenBSD founder)

At the end of the day, yes Kubernetes is complicated but it is not for a
company running 25 VPS servers. It's for companies to replace legacy,
expensive datacenter virtualization they have paid a lot of money in licensing
fees for.

[0] https://blog.jessfraz.com/post/you-might-not-need-k8s/

~~~
merb
actually kubernetes also works at the small scale. if you have more than 1
server you need one or more of the following:

\- automation
\- configuration management
\- scheduling
\- load balancing
\- provisioning

you can then either build it yourself, probably with the help of
ansible/puppet/..., or you can just use kubespray+k8s or any other "managed"
k8s thingy.
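for the "build it yourself" route, the provisioning piece of that list can be
nothing more than a short ansible playbook. this is just a sketch (the host
group, package name and service name are illustrative, not from any real
setup) that installs and starts a container runtime on each host:

```yaml
# Hypothetical playbook: install and enable Docker on every host in a
# "workers" inventory group. All names here are illustrative.
- hosts: workers
  become: true
  tasks:
    - name: Install container runtime
      apt:
        name: docker.io
        state: present
        update_cache: true

    - name: Enable and start the runtime
      systemd:
        name: docker
        state: started
        enabled: true
```

scheduling, load balancing etc. would each need their own piece on top of
this, which is where something like kubespray starts paying for itself.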

of course if you try to manage more than 1 server you at least need some
understanding of networking (but probably not as deep as you think), an
understanding of namespaces or virtualisation (unless you think it's a good
idea to run 1000 tools on one host without at least limiting their
cpu/memory/io usage), and you probably need knowledge of linux and at least a
little understanding of how distributed leader election works. But no matter
what, if you run more than one system you need to have somebody who is capable
of that.

~~~
_dps
I like k8s but this is way too optimistic.

I regularly run long-standing 20+ instance clusters using boto, pip, and a few
hundred lines of python scripts. People have been doing some version of this
for ~15 years.

K8s adds a lot of conceptual overhead on top of "just write a sysadmin
script". It is absolutely not necessary to have an orchestration system as
soon as you're past one instance.
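To give a sense of scale: the core of a script like that is tiny. This is not
my actual code, just a toy sketch of the shape, with invented AMI IDs, tag
names and helper names; the actual boto3 `run_instances` call site is omitted.

```python
# Sketch of the "few hundred lines of sysadmin script" approach:
# build the kwargs for boto3's ec2.run_instances, and decide which
# instances a rolling update should replace. Pure logic, no AWS calls.

def run_instances_params(ami, count, instance_type="t3.medium", cluster="demo"):
    """Build the keyword arguments for ec2.run_instances (call omitted)."""
    return {
        "ImageId": ami,
        "MinCount": count,
        "MaxCount": count,
        "InstanceType": instance_type,
        "TagSpecifications": [{
            "ResourceType": "instance",
            "Tags": [{"Key": "cluster", "Value": cluster}],
        }],
    }

def stale_instances(instances, current_ami):
    """Pick instances whose AMI no longer matches the desired one,
    i.e. the ones a rolling update should terminate and replace."""
    return [i["InstanceId"] for i in instances
            if i["ImageId"] != current_ami]
```

Everything else is glue: poll `describe_instances`, replace the stale ones a
few at a time, wait for health checks. Boring, but very debuggable.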

~~~
merb
It depends. Of course on AWS stuff is a little bit easier, especially
zero-downtime deployments. But that's not the case in on-premise environments.

------
neetdeth
Twitter screenshots? _"Container celebrities"_? What the hell is this?

It sounds like a bunch of people who based their technology decisions on
social media buzz instead of well-defined requirements are now running into
operational problems and looking for the next trend to chase.

------
robbyt
CEO of a company that makes a competitive technology uses hyperbole to express
why he thinks kubernetes is bad.

...And since when is LinkedIn a blogging platform??

------
tracker1
For me, it's primarily a relatively simple way to utilize a few servers to
scale test/qc deployments for applications. We're developing applications that
get deployed with different configurations for differing clients. Getting that
variety deployed in one-off configurations has been painful to say the least,
and testing has been difficult.

Currently setting up a pilot for one smaller app to get deployed for each
configuration profile to a small cluster. This will allow for better test
targeting for integration tests as well as QA and Demo environments.

If it works, we may adapt the tooling for client deployments and application
maintenance.

It's not a matter of it being the best tool; it seems to be a good enough tool
with enough weight behind it. It's not deployed publicly and we aren't sharing
hosts with untrusted third parties.

------
JMTQp8lwXL
So, Kubernetes has bugs. The author has pointed that out. But is the
fundamental architecture or concepts of Kubernetes wrong? I am not convinced
by this article. All technology has bugs. I've been putting simple dockerized
NodeJS servers in Kubernetes and it works great for me. That isn't the most
complex use case of course, but for folks with simpler use cases (e.g.,
startups), I find it to be a great fit.
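For that simple use case the whole thing is one short manifest. Something like
this hypothetical Deployment (the image name, port and replica count are made
up for illustration) is roughly all you need per service:

```yaml
# Illustrative manifest: three replicas of a containerized Node.js
# server, rolled out and restarted by Kubernetes' standard machinery.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-server
spec:
  replicas: 3
  selector:
    matchLabels:
      app: node-server
  template:
    metadata:
      labels:
        app: node-server
    spec:
      containers:
        - name: node-server
          image: registry.example.com/node-server:1.0.0
          ports:
            - containerPort: 3000
```

Rolling updates, restarts on crash, and scaling up replicas all come for free
from that one file, which is the part I'd otherwise have to script myself.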

------
brew-hacker
To be honest... This article sounds like someone who has lost a bet on
Kubernetes and is trying to just flame the technology. If the technology is
truly doomed then I would accept that. Unfortunately, I would have to disagree
on all counts... Kubernetes lets folks avoid developing their own scheduler
and the abstractions that would be necessary anyway to run distributed systems
in their own environments. Now if
there were problems around security or bugs in Kubernetes proper -- I would
absolutely advise you to be the engineer that you are and contribute back!
Instead of trolling Kubernetes I would recommend solving the problems that
others can actually benefit from.

------
lazyant
> Google's "containers" are most definitely not "docker containers".

erm what now?

~~~
habitue
They're using the same underlying kernel features, like namespaces and
cgroups, but the docker spec is a very particular choice for how to specify a
container. Google was doing containerization long before docker existed, so
they are undoubtedly not using the exact same formulation of containers in
Borg.
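You can see those shared primitives directly on any Linux box: every process,
containerized or not, lives in a set of kernel namespaces exposed under
/proc/&lt;pid&gt;/ns. A quick Linux-only sketch:

```python
# List the kernel namespaces of a process by reading the symlinks in
# /proc/<pid>/ns. Docker, Borg-style containers, and plain host
# processes all sit on these same primitives; only the packaging differs.
import os

def namespaces(pid="self"):
    """Map namespace name -> identifier, e.g. 'pid:[4026531836]'."""
    ns_dir = f"/proc/{pid}/ns"
    return {name: os.readlink(os.path.join(ns_dir, name))
            for name in sorted(os.listdir(ns_dir))}

if __name__ == "__main__":
    # Inside a docker container these identifiers differ from the
    # host's; that difference *is* the "container".
    for name, ident in namespaces().items():
        print(f"{name}: {ident}")
```

The docker image format and runtime spec are one opinionated layer on top of
those identifiers; Borg's layer predates it.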

~~~
lazyant
When you use GKE the containers are CoreOS by default, hence my confusion

------
joeblow9999
you rarely need to build your own container, and that's the only reason you
might need kubernetes.

otherwise all the cloud platforms offer simple, robust, reliable auto-
containerization for you. aws elastic beanstalk is one example. openshift will
even do it for you using kubernetes under the covers. pivotal cloud foundry
has their own non-kubernetes, non-docker approach and it works great.

