
Is K8s Too Complicated? - signa11
http://jmoiron.net/blog/is-k8s-too-complicated/
======
hacknat
Right tool for the right job. Is K8s too complicated? For some use cases it
is.

They probably should do a better job of discouraging certain use cases, but
calling their elevator pitch “bullshit” is hyperbolic.

There are exceptions to every rule, but a good rule of thumb is cluster size.
If you’re managing fewer than 25 servers, then K8s is _probably_ overkill. As
you start to creep north of 40 servers, K8s really starts to shine. The other
place K8s really shines is dynamic load. I manage anywhere from 600-1000 16GB
VMs; I can’t imagine doing it without K8s.

If cluster size isn’t a good rule of thumb, then application architecture
probably is. If no one person in your company has a complete mental model of
the application architecture, or such a model is impossible because the
architecture is so complex, then again container orchestration might be a good
way to go.

Final point: If you’re struggling with K8s swallow your pride and buy a
managed solution, then learn as you go.

~~~
pstadler
Even though I fully agree with you, running a little 3-node cluster just for
fun is amazing. Thanks to Rook and an Nginx ingress controller with kube-lego,
I’m able to deploy applications leveraging distributed storage and get TLS-
secured endpoints without a single SSH session. This, from my point of view, is
absolutely powerful.
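
For the curious, the application-side half of that setup is roughly the sketch
below, assuming a Service named myapp already exists; the hostname, names and
secret are placeholders, and kube-lego picks the Ingress up via the tls-acme
annotation and provisions the certificate into the named secret:

    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: myapp                            # placeholder name
      annotations:
        kubernetes.io/ingress.class: "nginx"
        kubernetes.io/tls-acme: "true"       # tells kube-lego to manage the cert
    spec:
      tls:
      - hosts:
        - myapp.example.com                  # placeholder hostname
        secretName: myapp-tls                # kube-lego stores the cert here
      rules:
      - host: myapp.example.com
        http:
          paths:
          - path: /
            backend:
              serviceName: myapp             # pre-existing Service on port 80
              servicePort: 80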

Shameless plug, I’ve been working on a project explaining how to run small
scale clusters in the cloud for more than a year: [https://github.com/hobby-
kube/guide](https://github.com/hobby-kube/guide)

~~~
ad_hominem
You may want to update your tutorial to use cert-manager because kube-lego is
in maintenance-only mode.

~~~
Filligree
You mean, it's stable. That sounds like a good reason to use it, not to avoid
it.

~~~
ad_hominem
Not really; I had an issue that silently broke renewals. When I found the open
issue that corresponded to it, the maintainers were herding people to cert-
manager. There are a lot of issues where they are doing that (besides doing so
very obviously at the top of the README, with warning symbols).

At the very least it's eventually going to break on newer versions of
Kubernetes. Maintenance-only != LTS.

~~~
eric_h
Maintenance-only usually means deprecated. There are some areas in the
software world where maintenance-only can last a long time and can be de facto
LTS. The Kubernetes ecosystem is not one of them.

------
tarr11
If you follow that link on Twitter, there are a lot of reasonable answers to
his (rhetorical?) question - "In a single tweet — can you name a technical
benefit you and your team have gained by switching to Kubernetes?"

"Sensible configuration, fast deployments, awesome community, and flexible
control plane...among others"

"single root of truth for configuration"

"predictable deploys"

"Standardized orchestration, which makes talent easier to find."

"the capability to deploy on more than one platform, more than one cloud."

"Not called in at midnight when an entire node segment went down due to
hardware failure."

"The ability to deploy hundreds of services within minutes."

~~~
matthewmacleod
I would argue that those were benefits of switching to _any_ containerised
infrastructure with an orchestrator, though. You could probably obtain most of
those benefits with Nomad or Mesos.

~~~
rco8786
Maybe. But the question wasn’t about the advantages of k8s over other
containerization systems.

~~~
matthewmacleod
I know, you’re right of course.

It’s kinda like all those “we rewrote our Ruby app in Go and it’s 1000x faster
so Go is awesome” articles. Containerisation is good for those things, but
it’s still possible that Kubernetes is an excessively complex solution to the
problem!

~~~
rco8786
Fair enough. To your original comment, I was part of a large transition from
bare metal servers and custom rolled CI, deployment, monitoring, etc over to
using Mesos and the end result was infinitely more pleasurable and productive
to work with as an engineer.

------
anoncoward1234
Ok, I'll bite.

I posted this in another thread, but check this out:
[https://stackoverflow.com/questions/50195896/how-do-i-get-
on...](https://stackoverflow.com/questions/50195896/how-do-i-get-one-pod-to-
network-to-another-pod-in-kubernetes-simple). That's the amount of crap I
waded through trying to rubber-ducky myself into figuring out how to get two
pods to talk to each other. In the end, I copied a solution my friend had
gotten, and it's still not great. I'd love to be able to use Ingress or Calico
or Fabric or _something_ to get path routing to work in Kubernetes, but
unfortunately all the examples I've seen online suffer from too much
specificity. Which is the Kubernetes problem - it can do everything, so trying
to get it to do the _one_ thing you want is hard.

Here is the kubernetes cluster I ended up building. If anyone has any ideas on
how to add path routing let me know -
[https://github.com/patientplatypus/KubernetesMultiPodCommuni...](https://github.com/patientplatypus/KubernetesMultiPodCommunication)

~~~
jchw
I think part of the problem is that I can't immediately understand what is
actually being done. You say you want your React frontend to talk to a
Node.js backend. But that's not really a pod-to-pod communication issue; both
frontend and backend will be communicating with the user's browser, outside
the cluster.

Secondly, you deployed an Nginx ingress controller. You don't need to deploy
more than one of these in your whole cluster, so you can go ahead and separate
this from your program's deployment manifests. Typically, cluster add-ons are
installed by running kubectl apply -f with a raw GitHub URL, or, if you want
to be much cleaner, using Helm (basically a package manager: it installs with
one command, and then you can use it to install things into your Kubernetes
cluster easily, such as an Nginx ingress controller).
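
To make that concrete, here is a rough sketch of both routes plus a minimal
path-routing Ingress of the kind the parent is asking about. The manifest URL
is a placeholder, the chart invocation is the Helm-v2-era stable chart, and
the frontend/backend Service names are hypothetical:

    # cluster add-on, installed once: either apply a published manifest...
    kubectl apply -f https://example.com/nginx-ingress-controller.yaml  # placeholder URL
    # ...or install the same controller from the Helm stable charts
    helm install stable/nginx-ingress --name ingress

    # per application: an Ingress that does path routing to two Services
    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: app-routes
      annotations:
        kubernetes.io/ingress.class: "nginx"
    spec:
      rules:
      - http:
          paths:
          - path: /api
            backend:
              serviceName: backend       # hypothetical Node.js Service
              servicePort: 80
          - path: /
            backend:
              serviceName: frontend      # hypothetical React Service
              servicePort: 80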

If you're wondering why the process is such a mess, it's probably just because
Ingress is still new. In the future, support for more environments will
probably come by default, without needing to install third party controllers.
Already, in Google Cloud, GKE clusters come ready with an Ingress Controller
that creates a Google Cloud load balancer.

As a side note, I found that the nginx ingress controller was not working by
default in my cluster. I noticed some errors in the logs and had to change a
parameter with Helm. Don't recall what it was, unfortunately.

~~~
anoncoward1234
The problem with adding the Ingress controller via Helm (and with a lot of
other Kubernetes abstractions) is that it spits out _a lot_ of code that is
then difficult or impossible to reason about. `Helm Ingress
--whateversyntaxdefault` spits out 1000+ lines of Ingress controller code that
is essentially two deployments with a health check and auto spin-up, but it's
complicated. Can I use this in production, or is there a security hole in
there? What if the ports the health check is using overlap with other ports I
have assigned somewhere else? What if something equally silly?

Maybe Kubernetes is new so that's why it's so wild west, but it really feels
like a pile of bandaids right now.

~~~
jchw
I have read through the nginx ingress controller code in Helm before deploying
it into production.

What you're saying is pretty much the result of my biggest gripe with
Kubernetes, though it's one I don't have a lot of ideas of how to fix; there's
too much damn boilerplate. 1000 lines of YAML to store maybe 100 relevant
lines.

That being said, can you trust that there is not a security vulnerability when
you deploy e.g. NGINX alone? Your answer should not be yes. Even if you read
through every single line of configuration and understand it, it doesn't mean
something isn't wrong. Google "nginx php vulnerability" for an example of what
I mean: innocent, simple configuration was wrong.

I read the Helm chart for nginx ingress because I wanted to understand what it
was doing. But did I _have_ to? Not really. I trust that the Helm charts
stable folder is going to contain an application that roughly works as
described, and that I can simply pass configuration in. If I want to be very
secure, I'm going to have to dig way, way deeper than just the Kubernetes
manifests, unfortunately. There's got to be some code configuring Nginx in the
background, and that's not even part of the Helm chart.

~~~
markbnj
> What you're saying is pretty much the result of my biggest gripe with
> Kubernetes, though it's one I don't have a lot of ideas of how to fix;
> there's too much damn boilerplate. 1000 lines of YAML to store maybe 100
> relevant lines.

I think that's more a helm issue than a k8s issue. I've been using helm in
production for over a year and k8s for almost three years. Prior to adopting
helm we rolled our own yaml templates and had scripts to update them with
deploy-time values. We wanted to get on the "standard k8s package manager"
train so we moved everything to helm. As a template engine it's just fine:
takes values and sticks them in the right places, which is obv not rocket
science. The issues come from its attempt to be a "package manager" and
provide stable charts that you can just download and install and hey presto
you have a thing. As a contributor to the stable chart repo I get the idea,
but in practice what you end up doing is replacing a simple declarative config
with tons of conditionally rendered yaml, plug-in snippets and really horrible
naming, all of which is intended to provide an api to that original, fairly
simple declarative config. Add to that the statefulness of tiller and having
to adopt and manage a whole new abstraction in the form of "releases." At this
point I'm longing to go back to a simpler system that just lets us manage our
templates, and may try ksonnet at some point soon.
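
To illustrate the trade-off with a contrived fragment (not taken from any
particular chart): the same image line, first as plain declarative config and
then as it tends to look once it has become a chart's "api":

    # plain declarative config: one value, stated once
    image: nginx:1.13

    # the same thing after it becomes a chart template
    image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
    {{- if .Values.image.pullPolicy }}
    imagePullPolicy: {{ .Values.image.pullPolicy }}
    {{- end }}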

~~~
AndyNemmity
The stable chart thing is so weird. Internally we use some abstractions, but I
look at stable charts and it requires so much time just to understand all of
what's going on. Everything is a variable pointing to values, and you can't
reason about any of it.

It seems like the hope is, just ignore it all, and the docs are good, and just
follow them, but I don't live in any kind of world I can do that.

And the commits, and the direction of all of them, seem to move toward more
and more impossible-to-read conditionally rendered symbols.

I've had such a challenge understanding and using Helm well. Small gotchas
everywhere that can just eat up tons of time. This doesn't feel like the end
state to me.

~~~
markbnj
> It seems like the hope is, just ignore it all, and the docs are good, and
> just follow them, but I don't live in any kind of world I can do that.

Yep, agreed, we've used very few charts from stable, and in some cases where
we have, we needed to fork and change them, which is its own special form of
suck. The one I contributed was relatively straightforward: a deployment,
service and a configMap to parameterize and mount the conf file in the
container at start. Even so I found it a challenge to structure the yaml in
such a way that the configuration could expose the full flexibility of the
binary, and in the end I didn't come anywhere near that goal. You take
something like a chart for elasticsearch or redis and it's just so much more
complicated than that.

~~~
AndyNemmity
Right, I'm in particular working on charts for ELK, and it's just a mess. I
just took down all my data (in staging, so all good) due to a PVC issue. The
charts won't update without deleting them when particular parts of the chart
change, but if you delete them, you lose your PVC data.

So I find the note in an issue somewhere stating that this is... intentional?...
and that of course you need some annotation that will change it.

Let alone the number of things like X-Pack, plugins, the fact that Java caches
DNS so endpoints don't work on Logstash, on and on.

It seems like everyone is saying operators are going to be the magical way to
solve this, but if anything it seems like one set of codified values that
doesn't address any of the complexity.

~~~
markbnj
You're using a statefulset? Here's a tip: you can delete a statefulset without
deleting the pods with `kubectl delete statefulset mystatefulset
--cascade=false`. The pods will remain running, but will no longer be managed
by a controller. You can then alter and recreate the statefulset and as long
as the selector still selects those pods the new statefulset will adopt them.
If you then need to update the pods you can delete them one at a time without
disturbing the persistent volume claims, and the controller will recreate
them.
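
Spelled out as a sequence (the manifest filename and pod ordinal are
placeholders; the kubectl commands themselves are standard):

    # orphan the pods: the statefulset goes away, the pods keep running
    kubectl delete statefulset mystatefulset --cascade=false

    # edit the spec, then recreate it; as long as the selector still
    # matches, the new statefulset adopts the running pods
    kubectl apply -f mystatefulset.yaml

    # if the pods themselves need the new spec, delete them one at a
    # time; the PVCs are untouched and the controller recreates each pod
    kubectl delete pod mystatefulset-0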

------
matthewmacleod
I suppose we have to assume that we should use the right tool for the right
job, and all that. And I'm sure that the Kubernetes folk know what they're
doing. But I definitely think that it's too complicated without a cutting-edge
Kubernetes expert in place to manage it. And even then, it's just a _building
block_ for a larger system.

I've tried maybe half a dozen times to get started for relatively small
workloads – let's say between 5 to 50 servers. There seems to be a lack of any
good documentation on how to get started – sure, I can spin up a couple of
pods on a cluster or whatever, but then actually taking that to an easy-to-
deploy, load-balanced, publicly-accessible infrastructure seems to be much
harder.

It's maybe a lack of some kind of official, blessed version of a "this is how
you should build a modern infrastructure from container to deployment" guide.
There are so many different moving parts that it's hard to figure out what the
optimal strategy is, and I've not found much to make it easier.

Maybe there's actually a niche for a nice and simple "infrastructure in a box"
kind of product. I'd love something like Heroku that I could run on my own
bare metal, or on AWS – the various solutions I've tried were all lacking.

In the end I've fallen back to using the Hashicorp stack of Nomad and Consul.
This seems to work in a much simpler way that I can actually wrap my brain
around – I get a nice cluster running balanced across machines, deploying and
scaling my applications as required, and it was super easy to set up with good
documentation.

I've been nervous about this approach because of how Kubernetes-mad the
industry seems to have gone – so maybe it'll be worth another look when there
are some more comprehensive solutions in place that make it easier to get
started!

~~~
pm90
I’ve worked as a developer on 2 major managed k8s providers. Using k8s is easy
but operating it is not. The biggest reason is that every cloud provider is
using different underlying technologies to deploy k8s clusters. See for
example all the different kinds of ingress controllers.

~~~
geerlingguy
And it doesn’t help that docs sometimes mention how you do things assuming
you’re using Google Cloud, other times with examples for Google, AWS, Azure,
or bare metal, other times minikube or some other smaller scale wrapper, and
other times it’s not clear what kind of platform the commands should ‘just
work’ on as-written.

I had a ‘fun’ time figuring out the basics of things like ingress controllers
in a bare metal setup because of this mishmash of docs.

------
sytse
The OP linked to a set of tweets from Joe Beda (co-creator of Kubernetes) that
read "When you create a complex deployment system with Jenkins, Bash,
Puppet/Chef/Salt/Ansible, AWS, Terraform, etc. you end up with a unique brand
of complexity that _you_ are comfortable with. It grew organically so it
doesn't feel complex."

Kubernetes is a convention-over-configuration framework for operations. By
doing that, it allows you to easily build on its abstractions.

At GitLab, the Kubernetes abstractions allowed us to make Auto DevOps, which
automatically builds, runs tests, checks performance, diagnoses security, and
provisions a review app. We could have made that with GitLab CI and bash, but
it would be much harder for people to understand, maintain, and extend.

Having an abstraction for a deployment is something that doesn't even come by
default with Jenkins as far as I know. Now that we have that as an industry we
can finally build the next step of tools with the usability of Heroku but the
control, price, flexibility, and extensibility of self-hosted software.

------
cygned
I started with k8s at the beginning of this year, and from my point of view,
the documentation is not good - and a major pain point when trying to get
started.

Each part on its own is good and well written, but it lacks the overall
picture and does not connect the pieces well enough. For example, the schema
definitions for all the configuration files are not linked from the official
docs (at least I wasn’t able to find them). The description of how to get
started with an on-premise setup is scattered over multiple pages from
multiple tools.

So from my point of view, things could be improved for beginners.

~~~
tomsthumb
The Kubernetes: Up and Running book gives a better big-picture view than the
documentation online does. It does a thorough job of developing motivation and
context for using a broad swath of the system in a cohesive way.

~~~
cygned
I actually own that book and I definitely recommend it!

------
stapled_socks
Having worked with Docker Swarm previously, Kubernetes seems like it added 5
layers of abstraction and at the same time made everything feel more low-level.

I think Kubernetes has all the building blocks to be great but desperately
needs a simplified flow/model and UI for developers who just want to run their
apps.

~~~
haolez
I prefer to work with Swarm as well, but I bump into stability issues now and
then. It’s probably not on par with Kubernetes.

------
KaiserPro
K8s is a beast, and it has a fairly specific workflow, which for most people
is not a good fit.

For example, you need to understand that certain workloads don't fit well on
the same box (DBs and anything IO/memory sensitive, for example).

Then there is the "default" network setup where each node is _statically_
assigned a /24 (because macvtap + dhcp is "unreliable" and inelastic,
apparently).

Now, I've heard a lot of talk about how you either use k8s or ssh to manage a
fleet of machines. That's pretty much a false comparison.

K8s provides two things: a mechanism for shared state (i.e. I am service x and
I can be found on IPs y & z) and a scheduler that places containers on hosts
(and manages health checks).

If your setup has a simple config scheme (using a simple shared-state
mechanism, like a DB, or a filesystem, or DNS), or you have no issues with
creating highly automated deployments using tools like CF, chef, ansible,
cloudformation, $other, then k8s has vanishing returns (it sure as hell
doesn't scale to the 50,000 node count, because it's so chatty).

Basically it's a poor man's mainframe, where all the guarantees of a process
running correctly regardless of what is happening to the hardware have been
taken away.

~~~
rossdavidh
Wow. I guess I am not the first person to look at K8s and think, "they re-
invented the mainframe".

------
noonespecial
The best tools work at multiple levels and operator skills. Imagine a tool
that can do everything from pound in a nail to launch a spacecraft. In an
ideal situation, a carpenter can grab it and pound some nails and a rocket
scientist can launch a Mars mission with it.

In the worst case, the tool forces you to learn how to launch spacecraft
before you can pound nails.

In my limited experience (having twice now made attempts to get up to speed
and actually accomplish something with K8s) it felt an awful lot like learning
rocketry to pound nails. I may have had an entirely different experience if I
had come looking to do a moon shot.

------
dsnuh
I feel like managed Kubernetes solutions are a completely different beast from
on-prem physical deployments, and that when people talk about how Kubernetes
isn't complicated, they are probably running a webapp on GKE or Minikube on a
DO droplet, or something similar.

Running Kubernetes on your own private, on-prem infrastructure - integrating
services that live outside the cluster, exposing your cluster services,
rolling your own storage providers, adding a non-supported LoadBalancer
provider, managing the networking policies, etc., etc. - can quickly become an
incredibly messy and complex endeavor.

------
013a
"In a single tweet — can you name a technical benefit you and your team have
gained by switching to Kubernetes?"

"The missing step in a comprehensive OSS devops strategy, from code (git)
build (Docker) test (Docker) to deploy (Kube), which enables push-button CICD
like no one has gotten right since Heroku."

~~~
barnabee
If they'd got it as right as Heroku, then deployment (or CD configuration)
would be a one-liner once a server is set up, and scaling etc. would be
configurable via a very simple web console or CLI.

Unless I missed something, last time we tried k8s (admittedly ~6 months ago)
it was woefully far from that goal.

~~~
bavell
Although it didn't come out of the box, my deploys are now one-liners. The
command builds, tests and packages my app into a docker image, then bumps the
version number on the k8s deployment, which triggers a rolling update of
production pods. I'm very happy with my current deployment flow, mostly
because it works reliably and I can forget about what it's doing under the
hood.
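
A rough sketch of that flow with placeholder image and deployment names (not
the exact command, just the shape of it); the kubectl step at the end is what
triggers the rolling update:

    # build, test and publish the image
    docker build -t registry.example.com/myapp:1.2.3 .
    docker run --rm registry.example.com/myapp:1.2.3 ./run-tests.sh  # placeholder test command
    docker push registry.example.com/myapp:1.2.3

    # bump the image on the Deployment; k8s rolls the production pods over
    kubectl set image deployment/myapp myapp=registry.example.com/myapp:1.2.3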

------
lalp1
The hidden cost of K8S is that you have to shift a lot of things: configuration
management, secrets, CI/CD, monitoring, users' habits...

But it's worth it :)

------
perlgeek
Suppose I'm an application developer who is only interested in infrastructure
because there needs to be some to run my stuff.

In this scenario, would I actually learn Kubernetes, or would it make more
sense to go straight to a PaaS solution? Like OpenShift (which uses k8s in the
backend, I believe), or Cloud Foundry, Stackato, etc.

I always get the impression that k8s has a lot of good ideas, but doesn't
provide everything out of the box for actually deploying complex applications.
How true is that?

~~~
jacques_chester
My advice is to go directly to a PaaS. I work for Pivotal R&D in and around
Cloud Foundry, so that's my personal horse. But I'd rather that you used _any_
PaaS -- Cloud Foundry, OpenShift, Rancher -- than roll your own.

Building a platform is _hard_. It's really really hard. Kubernetes
commoditises _some_ of the hard bits. The community around it will
progressively commoditise other aspects in time.

But PaaSes already exist, already work and either already base themselves on
Kubernetes or have a roadmap to doing so.

To repeat myself: building PaaSes is hard. Hard hard hard. Collectively,
Pivotal, IBM, SAP and SUSE have allocated hundreds of engineers in dozens of
teams to work on Cloud Foundry. We've been at it non-stop for nearly 5 years.
Pivotal spends quite literally _millions of dollars per year_ testing the
everliving daylights out of every part of it [0][1]. (Shout out to Concourse
here)

I fully expect Red Hat can say the same for OpenShift.

Really. Use a PaaS.

[0] [https://content.pivotal.io/blog/250k-containers-in-
productio...](https://content.pivotal.io/blog/250k-containers-in-production-a-
real-test-for-the-real-world)

[1] [https://content.pivotal.io/blog/you-deserve-a-
continuously-i...](https://content.pivotal.io/blog/you-deserve-a-continuously-
integrated-platform-here-s-why-it-matters)

~~~
ec109685
Building PaaS abstractions on top of Kubernetes is an order of magnitude
easier than doing what you guys did with Cloud Foundry. Building something
that can scale to 250k Containers is monumentally hard, but with K8s, it is
taken care of for you: [https://kubernetes.io/docs/admin/cluster-
large/](https://kubernetes.io/docs/admin/cluster-large/)

If you are a large enough organization, it is quite feasible to set up
Kubernetes, choose an ingress solution and then build templated configurations
that generate K8s yaml files and run your deployments with Jenkins. I am not
saying it is easy, but you don’t need any expertise with bin-packing
algorithms and control loops, and it really is in the sweet spot of “devops”
engineers.
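
One very small interpretation of "templated configurations" under those
assumptions, with envsubst standing in for whatever template engine the org
picks and placeholder filenames:

    # rendered and applied from a Jenkins job, for example
    export IMAGE_TAG="$(git rev-parse --short HEAD)"
    envsubst < deployment.yaml.tpl > deployment.yaml   # fills in ${IMAGE_TAG} etc.
    kubectl apply -f deployment.yaml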

~~~
jacques_chester
> _Building PaaS abstractions on top of Kubernetes is an order of magnitude
> easier than doing what you guys did with Cloud Foundry_

It's worth noting Kubernetes didn't exist when Cloud Foundry started. Neither
did Docker. The reason Cloud Foundry built two generations of container
orchestration technology (DEA/Warden and Diego/Garden) is that it was partly
inspired by direct experiences of Borg, as Kubernetes was. Folks had seen the
future and decided to introduce everyone else to it.

The point here is not whether sufficiently large organisations _are able_ to
build their own PaaSes. They absolutely can. Pivotal's customer roster is full
of companies whose engineering organisations absolutely dwarf our own.

The question is: _should_ you build your own? This is not a new question.
Should I build my own OS? My own language? My own database? My own web
framework? My own network protocol? My own logging system? My own ORM?

The general answer is: no, not really. It's not the most effective use of your
time, even if it's something you'd be perfectly able to achieve.

~~~
ec109685
I know Kubernetes wasn’t around when Cloud Foundry was started. That wasn’t my
point. Some of your argument was that building Cloud Foundry was hard (and I
agree!), therefore you need a vendor’s PaaS. That isn’t true.

If an engineering organization takes Kubernetes and adds their own tooling
around it to turn it into a PaaS for their org, that isn’t in the same league
as building their own Database or what you did with Cloud Foundry originally.

~~~
jacques_chester
I brought up the history as a lot of people don't know it.

I think we've made our respective points.

And hey, for folks building tooling around Kubernetes, I'm pretty much obliged
to mention PKS and Concourse :)

------
memeis
Couldn't agree more with - "Like a lot of other tech that has ostensibly come
out of google, it will likely have at least one major source of complexity
that 95% of people do not need and will not want. "

We really need to understand our applications, their architecture, and then
the right way to build and run them. I've talked with many people focused on
k8s and losing sight of what they are trying to build and why.

------
Animats
If you can't even spell out the name, it's too complicated.

------
bradhe
I’ve been a Mesos/Marathon user for the past few years. I love it for its
simplicity. Install the agent, point it at Zookeeper, and you’re done.

I’ve got a catalog of Puppet manifests for managing the cluster that I hope to
publish one day soon.

------
tejohnso
"I suspect that a significant source of programmers flooding into management
is the decreasing wavelength of full scale re-education on how to accomplish
the exact same thing slightly differently with diminshing returns on tangible
benefits."

Wow. I'm curious to see how others are interpreting this sentence.

I've got:

I suspect many developers opt to move on to management largely because of
frustration with having to learn new abstractions with increasing frequency
and decreasing benefit.

~~~
NTDF9
I agree. Most of software engineering has turned into resume driven
development, and engineers can only keep up at it for some time.

Kubernetes did not have to be so complicated. Docker swarm just proved it. If
it takes 10 engineers to really program and manage your kubernetes instances,
what problem did it solve again?

------
jchw
>In a single tweet — can you name a technical benefit you and your team have
gained by switching to Kubernetes?

\- Continuous delivery made easier

\- Code-as-infra you can deploy anywhere (Pi, GCP, AWS...)

\- Ability to pack VMs tighter, viewing your cluster as a pool of resources.

Kubernetes may seem too complicated if you miss the point. It's throwing the
baby out with the bathwater, but it's doing so with purpose. Kubernetes didn't
become popular by accident.

The benefit is hard to explain for the same reason that it's hard to learn:
it's a complicated piece of tooling that solves a problem most people don't
understand. That problem is scheduling. Most people view systems
administration as the practice of managing machines that run programs. Google
flipped the script: they began managing programs that run on clusters,
probably about as early as any other company, if not earlier.

The key insight here is that with Kubernetes, you are free from the days of
SSHing into machines, apt-get installing some random things, git cloning
stuff, and setting up some git hook for deployments. No matter how much more
advanced your process is, whether you have God's gift of a Chef script, or you
have the greatest Terraform setup in the world, you're still managing boxes
first, and applications second, in the traditional model. You have to repeat
the same song and dance every time.

To be fair, Kubernetes is not the only platform to provide this sort of
freedom. Obviously, it's based on Google's famous Borg, and Docker Swarm also
exists in this realm, as well as Apache Mesos. I think Kubernetes is winning
because it picked the right abstractions and the right features to be part of
itself. Docker Swarm did not care enough about the networking issues that came
with clustering until recently. Specifically, one of the first problems
becomes "What if I need multiple applications that need to be exposed on port
80?" _Kubernetes IMMEDIATELY decided that networking was important, providing
pods and services with their own IP addresses first thing, including
LoadBalancer support early on._ In my opinion, pods and services are the sole
reason why Kubernetes crushed everything else, and now that other solutions
are catching up and implementing better abstractions, and other solutions for
networking are appearing, the problem now is the massive headstart Kubernetes
had. Kubernetes let you forget about managing ports the same way you forget
about managing machines. Docker Swarm wasn't offering that.
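
A minimal illustration of that abstraction (names are placeholders): two
Services can both be "port 80" because each gets its own cluster IP, so
nothing has to fight over the node's ports.

    apiVersion: v1
    kind: Service
    metadata:
      name: frontend             # placeholder
    spec:
      selector:
        app: frontend
      ports:
      - port: 80                 # each Service has its own IP, so both can use 80
        targetPort: 8080
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: api                  # placeholder
    spec:
      type: LoadBalancer         # optionally get an external IP as well
      selector:
        app: api
      ports:
      - port: 80
        targetPort: 8080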

Yeah, it took a few paragraphs to explain why it makes sense, but once you
"get" it, it's hard to unget.

That does not mean that Kubernetes is not too complicated. It's probably way
too complicated for most of us, and a lighter solution with similar properties
would be fine. But that doesn't mean the complexity is all a waste; it's just
not useful to all of us.

------
nimbius
disclosure: I'm an engine mechanic with a lot of interest in Linux as a hobby.

Kubernetes seems like a great alternative to stuff like OpenStack, which seems
to require an entire datacenter to get going properly, but I feel like the
hype (k8s? really?) is outliving the reality.

You're also bumping up against a problem where, on smaller scales, it just
seems easier to use something else. Maybe not "agile" and all that nonsense,
but certainly easier.

~~~
fear91
OpenStack is used mostly for the layer beneath Kubernetes. We use it to
provision the VMs/networking for the Kubernetes cluster itself.

Kubernetes might not make sense for your small scale projects but it's great
when used for enterprise scale microservices. It makes deployments easier,
faster and more secure (with things like network policies, namespacing and
RBAC).

~~~
AndyNemmity
We run Kubernetes under OpenStack, which can deploy hosted Kubernetes. Lots of
variations.

------
shinzui
It depends on your context. Having moved from ECS on AWS to K8s on GCP I found
kubernetes simpler despite its vast number of concepts. It has the correct
abstractions for a microservice architecture, and it made us write less
boilerplate code to manage and deploy services.

------
lowbloodsugar
Too complicated for whom?

We just gave up on Mesos. It has been impossible to hire a team to support
Mesos, which, in many ways, is simpler than k8s while delivering greater
scale[1]. But it's been me and one other guy building the whole thing when we
planned on having ten. We're planning on replacing it with something else, and
New Guy wants that something else to be k8s. Or wanted. Then we spent an hour
or so asking "So, how will you handle failure mode (that happened) X?" or
"What about the teams that need Y?" and now ECS is looking real good to this
guy.

At the literal end of the day, when a bug causes masters to lose quorum at
2am, do you want to be fixing that, or do you want an AWS or Azure team to be
fixing that? When you have a hundred apps happily running but the coordination
framework is down, so that if you lose an agent, or get lots of traffic, the
system that would add or replace capacity is _down_, how fun is that? Well,
we didn't have an outage, because all the apps were up and running, but thank
goodness we didn't lose an agent or get a spike in traffic. You live in
constant terror, because it never gets so bad (site down), but you have lots
of near-death experiences where you race against time to fix whatever fun way
the system failed. God forbid you upgraded to the release (of Marathon) that
has a failure mode of "shut everything down" (_we_ didn't, because I'm
"overly cautious", but others did).

Now, for some companies that have the need for a huge cluster, and the cash
and reputation to be able to hire for it, Mesos or k8s seem like a
great idea. But for everyone else, use ECS and have a team at AWS keep that
thing running.

[1] I suspect it's probably easier to hire people for k8s, and at the same
time more difficult to hire people who can actually do the work. k8s looks
simpler than mesos, but is far more complicated. Mesos has a smaller
following, but those I've met know what they're doing and respect the problem.
And Mesos didn't get everything right, but it's "righter" than k8s IMHO.
There's a need for a Mesos 2. =)

------
sandGorgon
K8s comes with the inherent assumption that you are using cloud infra with a
lot of the setup done right.

Most importantly: ingress. Getting ingress working on k8s is still a big issue.

I'm not sure about all of these statements that networking in k8s is superior
to Swarm. For the longest time, it was a huge mess to configure weave vs
flannel vs calico. Arguably that was because of the third-party implementations
themselves, but then I would argue that comparing the superiority of k8s to
Docker Swarm is an apples-to-oranges comparison... since Swarm's networking was
always built in and opinionated.

For a long time, it worked much better than k8s.

------
stackzero
There are certainly many moving parts, but there's nothing quite like running
kubectl create -f <somefile> and sitting back to watch your entire application
stack spin up.

------
doxcf434
A better question would be: how much simpler does k8s make things compared to
all the Rube Goldberg scripts admins have written over the years? A lot.

------
peterwwillis
K8s does a relatively good job at tackling a very large problem space, but it
never tried to tackle the _other_ problem space: learning and using
complicated things quickly and easily.

The fact that there is no tool to walk a human through building commonly
reproduced configuration files is proof that humans were an afterthought.

------
Steeeve
Yes. There wouldn't be so many different vendor solutions if it wasn't. I
wouldn't say that the complexity isn't warranted, and it obviously can be
wrangled in, and once you know it, it's not all that bad.

Conversely, it wouldn't be as popular as it is if the value it provided wasn't
clear.

------
jkmcf
I’ve always sorta felt they solved the problem backwards, focusing on the
infrastructure and not on running applications. Now we have all these
competitors/confusion around orchestrating running an app. They could have
just copied Heroku’s CLI as a base starting point to ease adoption.

------
ksajadi
For a system that needs to take care of a great deal of different use cases on
a wide variety of infrastructure, k8s is surprisingly simple in context.

However, I also think that for a great deal of “every day” use cases, solutions
like GKE, EKS, Rancher, Cloud 66, ... are good enough.

------
brown9-2
Interesting that the article never mentions the word “container” (outside of a
quoted tweet).

If you don’t want a cluster management system that can schedule your many
containers over a pool of many nodes, then yes Kubernetes is not aiming to
solve your problems.

------
thdxr
Yes it's too complicated. But I think it's a low level tool that most people
will use via an abstraction built on top. We use Rancher which makes using
kubernetes a breeze.

------
lifeisstillgood
Side note - I stumbled across an attempt to rebuild K8 from scratch in Python
(it looked like a learning attempt) - but I forgot to bookmark it - anyone
seen something similar?

~~~
justinsaccount
Are you sure it was k8s, and not maybe docker?

[https://github.com/tonybaloney/mocker](https://github.com/tonybaloney/mocker)

------
megaman22
Yes, it's madness, unless your system is actually that complicated.

And I'm not sure designing a system to the strengths of k8s is wise either.

------
andyidsinga
this: $WORK

Maybe the most concise statement I've seen on the meaning of employment in our
times... really reminds me of PG's refragmentation essay
[http://paulgraham.com/re.html](http://paulgraham.com/re.html)

------
jaequery
Is there a link to the tweet where he asked "what did you benefit from using
K8?"

~~~
jaequery
ok it's
[https://twitter.com/krisnova/status/994242908315312129](https://twitter.com/krisnova/status/994242908315312129)

------
simonebrunozzi
Am I the only one who thinks it should be K8S, with a capital "S"?

~~~
dankohn1
It's a numeronym. From the horse's mouth:
[https://twitter.com/dankohn1/status/982013434555523072](https://twitter.com/dankohn1/status/982013434555523072)

------
jaequery
what are some of the downsides of K8? any horror stories?

~~~
fear91
You need to hire people who are competent with a new, quickly evolving
technology. That means it's hard to hire someone good and it's expensive.

The upside is that you can replace multiple "normal" ops people with just a
few K8 folks, but that also requires a bit of a culture change in how you do
development.

~~~
AndyNemmity
I started a year ago on a team, and there are so many different technologies,
things to know, gotchas everywhere, that even competent people can struggle
outside of taking a single domain and owning it.

It is not remotely easy to actually use and understand all parts of a massive
system, at least ours.

------
Yetanfou
Might be, but one thing it certainly is not: worthy of yet another acronym.
Call the thing by its name, kubernetes. T2t w3d be so m2h e4r to r2d.

