
Why is Kubernetes getting so popular? - a7b3fa
https://stackoverflow.blog/2020/05/29/why-kubernetes-getting-so-popular/
======
zelly
Main benefits of Kubernetes:

• Lets companies brag about having # many production services at any given
time

• Company saves money by not having to hire Linux sysadmins

• Company saves money by not having to pay for managed cloud products if they
don't want to

• Declarative, version controlled, git-blameable deployments

• Treating cloud providers like cattle not pets

It's going to eat the world (already has?).
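That declarative/git-blameable point is the core of it: the whole deployment is one file you can diff and blame. A minimal sketch (the name and registry are made up):

```yaml
# deployment.yaml -- the entire "how this runs in prod" spec, versioned in git
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                  # hypothetical service name
spec:
  replicas: 3                # scaling is a one-line diff
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: registry.example.com/web:1.4.2  # pinned, blameable version
          ports:
            - containerPort: 8080
```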

I was skeptical about Kubernetes but I now understand why it's popular. The
alternatives are all based on kludgy shell/Python scripts or proprietary cloud
products.

It's easy to get frustrated with it because it's ridiculously complex and
introduces a whole glossary of jargon and a whole new mental model. This isn't
Linux anymore. This is, for all intents and purposes, a new operating system.
But the interface to this OS is a bunch of <strike>punchcards</strike> YAML
files that you send off to a black box and hope it works.

You're using a text editor but it's not programming. It's only YAML because
it's not cool to use GUIs for system administration anymore (e.g. Windows
Server, cPanel). It feels like configuring a build system or filling out taxes
--absolute drudgery that hopefully gets automated one day.

The alternative to K8s isn't your personal collection of fragile shell
scripts. The real alternative is not doing the whole microservices thing and
just deploying a single statically linked, optimized C++ server that can serve
10k requests per second from a toaster--but we're not ready to have that
discussion.

~~~
pwdisswordfish2
I am ready! NetBSD is running on the toaster. I think haproxy can do 10K
req/s. tcpserver on the backends. I only write robust shell scripts, short and
portable.

As a spectator, not a tech worker who uses these popular solutions, I would
say there seems to be a great affinity in the tech industry for
anything that is (relatively) complex. Either that, or the only solutions
people today can come up with are complex ones. The more features and
complexity, the more something is constantly changing, the more a new solution
gains "traction". If anyone reading has examples that counter this idea,
please feel free to share them.

I think if a hobbyist were to "[deploy] a single statically linked, optimized
[C++] server that can serve 10k requests per second from a toaster" it would
be like a tree falling in the forest. For one, because it is too simple, it
lacks the complexity that attracts the tech worker crowd; and second, because
it is not being used by a well-known tech company and not being worked on by
large numbers of people, it would not be newsworthy.

~~~
tutfbhuf
I can see your point for small hobby projects. But enterprise web development
in C++ is no fun at all. For example: "Google says that about 70% of all
serious security bugs in the Chrome codebase are related to memory management
and safety." [https://www.zdnet.com/article/chrome-70-of-all-security-
bugs...](https://www.zdnet.com/article/chrome-70-of-all-security-bugs-are-
memory-safety-issues/)

Developer time for fixing these bugs is in most cases more expensive than
throwing more hardware at your software written in a garbage-collected
language.

~~~
Perseids
True, but the alternative to C++ with that reasoning is Rust or Go (depending
on your liking), not Ruby. And with both of these you can step around a lot of
deployment issues, because a single server can be sufficient for quite high
loads. Avoid distributed systems as much as you can:
[https://thume.ca/2020/05/17/pipes-kill-
productivity/](https://thume.ca/2020/05/17/pipes-kill-productivity/)

~~~
fxtentacle
It depends.

If you want to be a successful indie company, avoid cloud and distributed like
the plague.

If you want to advance up the big corp career ladder, use Kubernetes with as
many tiny instances and micro-services as you can.

"Oversaw deployment of 200 services on 1000 virtual servers" sounds way better
than "started 1 monolithic high-performance server". But the resulting SaaS
product might very well be the same.

~~~
waheoo
I just tell people I use a mono repo to house all my single file
microservices.

Php under apache.

~~~
fxtentacle
Great description!

I run a monolithic ensemble that abstracts away the concept of multiple
processes to deliver a unified API.

In short, it's multithreaded.

------
bradgessler
I’ll take a shot.

k8s is popular because Docker solved a real problem and Compose didn’t move
fast enough to solve the orchestration problem. It’s a second order effect; the
important thing is Docker’s popularity.

Before Docker there were a lot of different solutions for software developers
to package up their web applications to run on a server. Docker kind of solved
that problem: ops teams could theoretically take anything and run it on a
server if it was packaged up inside of a Docker image.

When you give a mouse a cookie, it asks for a glass of milk.

Fast forward a bit and the people using Docker wanted a way to orchestrate
several containers across a bunch of different machines. The big appeal of
Docker is that everything could be described in a simple text file. k8s tried
to continue that trend with a yml file, but it turns out managing
dependencies, software defined networking, and how a cluster should behave at
various states isn’t the greatest fit for that format.

Fast forward even more into a world where everybody thinks they need k8s and
simply cargo cult it for a simple Wordpress blog and you’ve got the perfect
storm for resenting the complexity of k8s.

I do miss the days of ‘cap deploy’ for Rails apps.

~~~
foxhill
> Docker solved a real problem

> everybody thinks they need k8s and simply cargo cult it for a simple
> Wordpress blog

docker _also_ has this problem though. there are probably 6 people in the
world that need to run one program built with gcc 4.7.1 linked against libc
2.18 and another built with clang 7 and libstdc++ at the same time on the same
machine.

and yes, docker "provides benefits" other than package/binary/library
isolation, but it's _really_ not doing anything other than wrapping cgroups
and namespacing from the kernel - something for which you don't need docker to
do (see [https://github.com/p8952/bocker](https://github.com/p8952/bocker)).

docker solved the wrong problem, and poorly, imo: the packaging of
dependencies required to run an app.

and now we live in a world where there are a trillion instances of musl libc
(of varying versions) deployed :)

sorry, this doesn't have much to do with k8s, i just really dislike docker, it
seems.

~~~
alephu5
I am a big fan of using namespaces via docker, in particular for development.
If I want to test my backend component I can expose a single port and then
hook it up to the database, redis, nginx etc. via docker networks. You don't
need to worry about port clashes and it's easy to "factory reset".

In production this model is quite a good way to guarantee your internal
components aren't directly exposed too.
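(A minimal sketch of that kind of dev setup, with made-up service names -- the default compose network keeps everything off the host except the one published port:)

```yaml
# docker-compose.yml -- backend plus its dependencies on a private network
version: "3.8"
services:
  backend:
    build: .
    ports:
      - "8000:8000"               # the single port exposed to the host
    depends_on: [db, redis]
  db:
    image: postgres:12
    environment:
      POSTGRES_PASSWORD: dev-only # fine locally, not for production
  redis:
    image: redis:5
```

And `docker-compose down -v` is the "factory reset".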

~~~
foxhill
that's sort of my point though - namespacing is a great feature that allows
for more independent & isolated testing and execution, there is no doubt.
docker provides none of it.

i would argue that relying on docker hiding public visibility of your internal
components is akin to using a mobile phone as a door-stop - it'll probably
work but there are more appropriate (and auditable) tools for the job.

------
silviogutierrez
For me, and many others: infrastructure as code.

Kubernetes is _very_ complex and took a _long_ time to learn properly. And
there have been fires along the way. I plan to write extensively on my blog
about it.

But at the end of the day: having my entire application stack as YAML files,
fully reproducible [1] is invaluable. Even cron jobs.
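(To illustrate the cron jobs bit -- a hedged sketch of what one of those YAML files looks like; the job name and command are invented, and on newer clusters the apiVersion is batch/v1:)

```yaml
# cronjob.yaml -- the "crontab entry" lives in the same repo as everything else
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: clearsessions            # hypothetical nightly Django task
spec:
  schedule: "0 3 * * *"          # plain old cron syntax
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: clearsessions
              image: registry.example.com/app:latest
              command: ["python", "manage.py", "clearsessions"]
```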

Note: I don't use micro services, service meshes, or any fancy stuff. Just a
plain ol' Django monolith.

Maybe there's room for a simpler IAC solution out there. Swarm looked
promising then fizzled. But right now the leader is k8s[2] and for that alone
it's worth it.

[1] Combined with Terraform

[2] There are other proprietary solutions. But k8s is vendor agnostic. I can
and _have_ repointed my entire infrastructure with minimal fuss.

~~~
bosswipe
According to the article you are wrong about "infrastructure as code".
Kubernetes is infrastructure as data, specifically YAML files. Puppet and Chef
are infrastructure as code.

Edit: not sure why the down votes, I was just trying to point out what seems
like a big distinction that the article is trying to make.

~~~
silviogutierrez
Maybe? I'm too lazy to formally verify if the YAML files k8s accepts are
Turing complete. With kustomize they might very well be.

How about "infrastructure-as-some-sort-of-text-file-versioned-in-my-
repository". It's a mouthful, but maybe it'll catch on.

~~~
geofft
They don't do loops or recursion. They don't even do iterative steps in the
way that Ansible YAML has plays/tasks.

Yes, higher-level tools like Kustomize or Jsonnet or whatever else you use for
templating the files are Turing-complete - but that's at the level of you on
your machine generating input to Kubernetes, not at the level of Kubernetes
itself. That's a valuable distinction - it means you can't have a Kubernetes
manifest get halfway through and fail the way that you can have an Ansible
playbook get halfway through and fail; there's no "halfway." If something
fails halfway through your Jsonnet, it fails in template expansion without
actually doing anything to your infrastructure.

(You can, of course, have it run out of resources or hit quota issues partway
through deploying some manifest, but there's no ordering constraint - it won't
refuse to run the "rest" of the "steps" because an "earlier step" failed,
there's no such thing. You can address the issue, and Kubernetes will resume
trying to shape reality to match your manifest just as if some hardware failed
at runtime and you were recovering, or whatever.)
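(Concretely, the split looks like this: the Turing-complete layer runs on your laptop and emits flat data. A small made-up example using long-stable Kustomize fields -- if expansion fails, nothing is sent to the cluster at all:)

```yaml
# kustomization.yaml -- expanded locally by `kustomize build`;
# the API server only ever sees the plain manifests this emits
namePrefix: staging-
commonLabels:
  env: staging
resources:
  - deployment.yaml
  - service.yaml
```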

------
tristor
The simple answer is that Kubernetes isn't really any of the things it's been
described as. What it /is/, though, is an operating system for the Cloud. It's
a set of universal abstraction layers that can sit on top of and work with any
IaaS provider and allows you to build and deploy applications using
infrastructure-as-code concepts through a standardized and approachable API.

Most companies who were late on the Cloud hype cycle (which is quite a lot of
F100s) got to see second-hand how using all the nice SaaS/PaaS offerings from
major cloud providers puts you over a barrel and don't have any interest in
being the next victim, and it's coming at the same time that these very same
companies are looking to eliminate expensive commercially licensed proprietary
software and revamp their ancient monolithic applications into modern
microservices. The culmination of these factors is a major facet of the
growth of Kubernetes in the Enterprise.

It's not just hype, it has a very specific purpose which it serves in these
organizations with easily demonstrated ROI, and it works. There /are/ a lot of
organizations jumping on the bandwagon and cargo-culting because they don't
know any better, but there are definitely use cases where Kubernetes shines.

~~~
ransom1538
Just running docker-compose on load balanced machines is pretty close to
having all k8s features (that would give you an endpoint, scaling, running
pods [containers], heartbeats and nodes [VMs]). If you run Kubernetes on GCP
you will see it's just a wrapper of GCP VMs, load balancers, instance groups
and disks. E.g. GCP k8s autoscaling for the nodes isn't any better than just
simple GCP load balancers and instance groups (it literally is the same
thing). k8s's best feature (only?): specify yaml files to declare the setup.
That is great! But you make edits to this 4 times a year -- that is a ton of
complexity for those 4 git commits.
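(Roughly the setup being described -- one compose file deployed identically to each load-balanced VM, with restart/healthcheck standing in for what a kubelet does; the image name is made up:)

```yaml
# docker-compose.yml, identical on every VM behind the cloud load balancer
version: "3.8"
services:
  api:                           # the "pod"
    image: registry.example.com/api:2.1
    restart: always              # crude stand-in for pod restarts
    healthcheck:                 # the "heartbeat"
      test: ["CMD", "curl", "-f", "http://localhost:8080/healthz"]
      interval: 30s
    ports:
      - "8080:8080"              # the load balancer targets this port
```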

~~~
crymer11
Off the top of my head, deployments and sidecars are both missing, which are
incredibly useful.

~~~
viraptor
Change docker-compose to load balanced dokku nodes. Gives you the deployments
and sidecars.

~~~
jedieaston
Dokku is magical. It blows my gob whenever I use it. It’s the best parts of
Docker and Heroku together, and I can actually control everything that goes
with my app.

~~~
Aeolun
I find that Dokku requires me to do too much on the machine itself, as well
as forcing me into a certain model.

I currently use exoframe with docker-compose files, and it's fantastic.

~~~
viraptor
Haven't seen this one - it looks really good, thanks. I also like they
integrated traefik in the solution.

------
clutchdude
It's not because of the networking stack.

I've yet to meet anyone who can easily explain how the CNI, services,
ingresses and pod network spaces all work together.

Everything is so interlinked and complicated that you need to understand vast
swathes of kubernetes before you can attach any sort of complexity to the
networking side.

I contrast that to its scheduling and resourcing components which are
relatively easy to explain and obvious.

Even storage is starting to move to overcomplication with CSI.

I half jokingly think K8s adoption is driven by consultants and cloud
providers hoping to ensure a lock-in with the mechanics of actually deploying
workloads on K8s.

~~~
mrweasel
Assuming that, like us, you spent the last 10-12 years deploying IPv6 and are
currently running servers on IPv6-only networks, the Kubernetes/Docker network
stack is just plain broken. It can be done, but you need to start thinking
about stuff like BGP.

Kubernetes should have been IPv6 only, with optional IPv4 ingress controllers.
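(In newer Kubernetes releases, with dual-stack networking enabled on the cluster, you can at least ask for this explicitly -- a hedged sketch using the stable 1.20+ field names:)

```yaml
# service.yaml -- requires a cluster provisioned with dual-stack networking
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  ipFamilyPolicy: SingleStack
  ipFamilies:
    - IPv6                 # v6-only; put an IPv4 ingress in front if you must
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
```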

~~~
geggam
You mean you don't like 3+ layers of NAT via iptables?

~~~
dijit
That's already happening anyway.

~~~
pojzon
But mostly you are not responsible for those components or are using hardware
solutions which are 1000 times more efficient/performant?

------
acd
Devops/arch here, I think Kubernetes solves deployment in a standardized way
and we get fresh clean state with every app deploy. Plus it restarts
applications/pods that crash.

That said I think Kubernetes may be on its journey to the Plateau of
Productivity on the tech Hype cycle. Networking in Kubernetes is complicated.
abstraction has a point if you are a company at Google scale. Most shops are
not Google scale and do not need that level of scalability. The network
abstraction has its price in complexity when doing diagnostics.

You could solve networking differently than in Kubernetes with IPv6. There is
no need for complicated IPv4 NAT schemes. You could use native IPv6 addresses
that are reachable directly from the internet. Since you have so many IPv6
addresses you do not need routers/NATs.

Anyhow in a few years' time some might be using something simpler, like an
open source Heroku. If you could bin-pack the services/intercommunication on
the same nodes there would be speed gains from not having to do network hops,
going straight to local memory. Or something like a standardized serverless
open source function runner.

[https://en.wikipedia.org/wiki/KISS_principle](https://en.wikipedia.org/wiki/KISS_principle)
[https://en.wikipedia.org/wiki/Hype_cycle](https://en.wikipedia.org/wiki/Hype_cycle)

~~~
takeda
This is a good point, I was wondering why IPv6 is being avoided so hard.

There are many arguments that IPv6 didn't solve too many IPv4 pain points,
but if it solved anything, it is definitely this.

------
Bob_LaBLahh
Kubernetes is getting more popular because:

1) It solves _many_ different universal, infrastructure-level problems.

2) More people are using containers. K8s helps you to manage containers.

3) It's vendor agnostic. It's easy to relocate a k8s application to a
different cluster.

4) People see that it's growing in popularity.

5) It's open source.

6) It helps famous companies run large-scale systems.

7) People think that it looks good on a resume and they want to work at a
well known company.

8) Once you've mastered K8s, it's easy to use on problems big and small.
(Note, I'm not talking about installing and administrating the cluster. I'm
talking about being a cluster user.)

9) It's controversial, which means that people keep talking about it. This
gives K8s mind share.

I'm not saying K8s doesn't have issues or downsides.

1) It's a pain to install and manage on your own.

2) It's a lot to learn--especially if you don't think you're gonna use most
of its features.

3) While the documentation has improved a lot, it's still weak and
directionless in places.

I think K8s is growing more popular because its pros strongly outweigh its
cons.

(Note I tried to be unbiased on the subject, but I am a K8s fan--so much so
that I wrote a video course on the subject: [https://www.true-
kubernetes.com/](https://www.true-kubernetes.com/). So, take my opinions with
a grain of salt.)

------
heipei
My question is: Why is only k8s so popular when there are better alternatives
for a large swath of users? I believe the answer is "Manufactured Hype". k8s
is from a purely architectural standpoint the way to go, even for smaller
setups, but the concrete project is still complex enough that it requires
dozens of different setup tools and will keep hordes of consultants as well as
many hosted solutions from Google/AWS/etc in business for some time to come,
so there's a vested interest in continuing to push it. Everyone wins, users
get a solid tool (even if it's not the best for the job) and cloud providers
retain their unique selling point over people setting up their own servers.

I still believe 90% of users would be better served by Nomad. And if someone
says "developers want to use the most widely used tech", then I'm here to call
bullshit, because the concepts between workload schedulers and orchestrators
like k8s and nomad are easy enough to carry over from one side to the other.
Learning either even if you end up using the other one is not a waste of time.
Heck, I started out using CoreOS with fleetctl and even that taught me many
valuable lessons.

~~~
jsmith12673
I got a bit disillusioned with k8s and looked at Nomad as an alternative.

As a relatively noob sysadmin, I liked it a lot. Easy to deploy and easy to
maintain. We've got a lot of mixed rented hardware + cloud VPS, and having one
layer to unify them all seemed great.

Unfortunately I had a hard time convincing the org to give it a serious shot. At
the crux of it, it wasn't clear what 'production ready' Nomad should look
like. It seemed like Nomad is useless without Consul, and you really should
use Vault to do the PKI for all of it.

It's a bit frustrating how so many of the HashiCorp products are 'in for a
penny, in for a pound' type deals. I know there are _technically_ ways to use
Nomad without Consul, but it didn't seem like the happy path, and the
community support was non-existent.

Please tell me why I'm wrong lol, I really wanted to love Nomad. We are
running a mix of everything and it's a nightmare

~~~
chucky_z
Nomad + Consul is the happy path. Adding Vault into the mix is nice, but not
required.

Consul by itself is the game-changer. Even in k8s it's a game-changer. It
solves so many questions in an elegant way.

"How do I find and reach the things running in (orchestrator) with (unknown
ip/random port) from (legacy)?" being the most important. You run 5 servers,
and a relatively lightweight client on everything (which isn't even outright
required, but it sure is useful!), and you get a _lot_ with that.

Consul provides multiple interfaces and ingress points to find everything. It
also is super easy to operate, and has a pretty big community.

If you absolutely cannot have Consul, Nomad is still a really good batch job
engine, and makes a very great "distributed cron," which is more extensible,
scalable, and easy to use than something like Jenkins for the same task.

My team is pretty small (was 4 people, now 6) and we manage one of the world's
largest Nomad and Consul clusters (there are some truly staggeringly large
users of Vault so I won't make that claim). Even when shit really hits the
fan, everything is designed in a way that stuff mostly works; and there's
enough operator friendly entry points that we can always figure out the
problem.

~~~
jsmith12673
Interesting, thanks for sharing!

I'm assuming your team is using Vault for PKI, but is there a similarly happy
path for issuing certs without Vault?

I started off just using `openssl` but it all felt very janky, and I didn't
really have any idea how CRLs should be set up.

~~~
chucky_z
Vault is great for just a PKI, even if you aren't using it for anything else.
There are some tools that _just_ do PKI, but Vault works a real treat at it.
Any Terraform backend that supports encryption + Terraform + Vault gives you
such an amazing workflow. We use a mix of short and long certs, with different
roles based on what's getting a cert.

For now, we have CRLs disabled on all short-lived backends, enabled on long-
lived backends and we're actually looking at disabling storing short-lived
certs in the storage system at all, and just cranking the TTL down to really
truly short. We've tested it as low as 30m, but a more real-world max-ttl is 1
week, with individual apps setting it as low as they can handle. For reference
we run more than 10 PKI backends, and adding one (or a bunch) more is just a
little terraform snippet for us.

The way it works via HashiCorp template land is that you just plop

    {{ with secret "name-of-pki/issue/name-of-role" "common_name=my.allowed.fqdn" "ttl=24h" }}
    {{ .Data.certificate }}
    {{ end }}

into your Nomad template stanza, or use consul-template directly as a binary,
or use vault agent with its template capability. You can get the CA chain if
required the same way, just hitting a different PKI endpoint.

Also, as of Vault 1.4, Vault's internal raft backend is now production ready,
making it a snap to run.

Try running through a few of the Vault quick-start guides, and replicating
them in Terraform as much as possible. There's a few things TF does not handle
gracefully last I checked (initial bootstrap), but you can get around that by
using a null_resource or just handling that outside Terraform.

------
sp332
I'd say it's down to two things. First is the sheer amount of work they're
putting into standardization. They just ripped out some pretty deep internal
dependencies to create a new storage interface. They have an actual standards
body overseen by the Linux Foundation. So I agree with the blog post there.

The second reason is also about standards, but using them more assertively.
Docker had way more attention and activity until 2016 when Kubernetes
published the Container Runtime Interface. By limiting the Docker features
they would use, they leveled the playing field between Docker and other
runtimes, making Docker much less exciting. Now, new isolation features are
implemented down at the runc level and new management features tend to target
Kubernetes because it works just as well with any CRI-compliant runtime.
Developing for Docker feels like being locked in.

~~~
MuffinFlavored
> By limiting the Docker features they would use, they leveled the playing
> field between Docker and other runtimes, making Docker much less exciting.

Isn't the most popular k8s case to deploy Docker images still though?

~~~
moduspwnens14
It's confusing, but Docker images (and image registries) are also an open
standard that Docker implements [1].

A lot of the Kubernetes "cool kids" just run containerd instead of Docker.
Docker itself also runs containerd, so when you're using Kubernetes with
Docker, Kubernetes has to basically instruct Docker to set up the containers
the same way it would if it were just talking to containerd directly. From a
technical perspective, you're adding moving parts for no benefit.

If you use containerd in your cluster, you can then use Docker to build and
push your images (from your own or a build machine), but pull and run them on
your Kubernetes clusters without Docker.

[1]
[https://en.wikipedia.org/wiki/Open_Container_Initiative](https://en.wikipedia.org/wiki/Open_Container_Initiative)

------
ridruejo
It makes a bit more sense if you see Kubernetes as the new Linux: a common
foundation that the industry agrees on, and that you can build other
abstractions on top of. In particular Kubernetes is the Linux Kernel, while we
are in the early days of discovering what the "Linux distro" equivalent is,
which will make it much more friendly / usable to a wider audience.

~~~
moduspwnens14
This is exactly how we see it at my company.

Likewise, Linux is also a confusing mess of different parts and nonsensical
abstractions when you first approach it. It does take some time to understand
how to use it, and in particular how to do effective troubleshooting when
things aren't working the way you expect.

But I 100% agree--I think it's the new Linux. In 5-10 years, it'll be the "go
to", if not sooner.

------
aprdm
In my humble opinion because there is so much money and marketing behind it.
If you attend the OSS summit, all the cloud players are sending evangelists
and the whole conference ends up being about Kubernetes.

Then a lot of people drink the koolaid and apply it everywhere / feel they're
behind if they aren't in Kubernetes.

We _are not_ in Kubernetes and have multiple datacenters with thousands of
VMs/containers. We are doing just fine with the boring consul/systemd/ansible
set up we have. We also have some things running in containers, but not much.

Funnily enough at the OSS summit I had a couple of chats with people in the
big companies (AWS, Netflix, etc.) and they themselves have the majority of
their workflows in boring VMs. Just like us.

~~~
takeda
There is quite a bit of latency added whenever you use containers.

IMO containers are great for stateless apps that don't require many
resources, where having a dedicated machine for them would be a waste.

------
holografix
Kubernetes is an almost necessary tech when you operate your own cloud and
that’s where it came from: Google.

The smart people at Google knew that by quickly packaging their own internal
tech and releasing it as open source they’d help people move from the
incumbent AWS.

Helping customers switch IaaS hurts them both (lock-in is better), but it
hurts AWS way more. Proof? They made it free to run the necessary compute
behind the K8s control plane -- until recently, that is.

Are there benefits to running your biz’ web app using constructs made for a
“cloud”? Sure there are; that’s why people are moving to K8s. There are real
business benefits, given a certain number of necessary moving parts. LinkedIn
had such a headache with this that they created Kafka.

I suspect most organisations’ Architects and IT peeps push for K8s as a moat
for their skills and to beef up their resumé. They know full well that the
value is not there for the biz’ but there’s something in it for them.

------
cutler
I remember when Docker and K8 emerged and YAGNI kept everything in
perspective. Unless you had a fleet of hundreds of servers to manage and spin-
up at a moment's notice you just used Chef, Puppet or Ansible. Now nothing's
too small for this ridiculously over-engineered technology. Got a WordPress
blog? You're doing it wrong if you don't put it in a Docker container and
launch it with K8. Same with a lot of Rails projects for which Capistrano was
more than adequate. Just gotta scratch that itch until you've no more skin
left.

------
yllus
To draw anecdotally from my own experiences, it's for two reasons:

1. It's simple to get started with, but complex enough to tweak to your needs
in respect to simplicity of deployment, scaling and resource definition.

2. It's appealingly cloud-agnostic just at the time when multiple cloud
providers are all becoming viable and competitive.

I think it's more #2 than #1; as always, timing is everything.

~~~
zmmmmm
Yeah, I think people are overthinking it. The real reason is that if you do a
superficial investigation you will quickly come back with the impression that
k8s is near universally supported across cloud vendors and gives an appearance
of providing a portable solution where otherwise the only alternative would be
vendor lock-in. It makes it a no-brainer for anybody starting out with a new
cloud deployment.

------
nasmorn
I host about a dozen rails apps of different vintage and started switching
from Dokku to Digital Ocean Kubernetes. I had a basic app deployed with load
balancer and hosted DB in about 6 hours. Services like the nginx ingress are
very powerful and it all feels really solid. I never understood Dokku
internals either so them being vastly simpler is no help for me. I figured for
better or worse kubernetes is here to stay and on DO it is easier than doing
anything on AWS really. I have used AWS for about 5 years and have inherited
things like terraformed ECS clusters and Beanstalk apps. I know way more about
AWS but I feel you need to know so much that unless you only do ops you cannot
really keep up.

~~~
koeng
I found deploying databases with Dokku to be really intuitive. CockroachDB is
great, but still a lot more steps than dokku postgres:create <db>. The whole
certificates thing is quite confusing. Otherwise, k3s on-prem is great

------
dustym
I like to say (lovingly) that Kubernetes takes complex things and simplifies
them in complex ways.

~~~
battery_cowboy
It just hides the complexity in some yml files instead of in a deploy script
or a sysadmin's head.

~~~
happytoexplain
You say that like it's a bad thing. A declarative model is infinitely better
for representing complex systems than scripts and mind space. The challenge is
actually being able to get to that point.

~~~
geggam
Until it breaks.

~~~
happytoexplain
No, even then it's still better. A broken declarative model is better than a
broken script.

~~~
battery_cowboy
An sh script is a series of things that almost any developer who uses Linux
can understand: command line statements. We use them all the time. Suppose the
complicated declarative model you've made doesn't work one day and the person
who originally wrote it is gone? Even if you have someone to debug it who
knows the k8s languages: usually you can't just use the yml files alone, you
need terraform or something, plus maybe some other services and "sidecar"
containers that do other things for you. With a sh script, you just have the
script with a bunch of commands that you can understand and look up easily, in
a linear order, to figure out the problem. You might not understand every
command, but you can run each one until you get to the error, then focus in on
that area. With k8s, you need to figure out a huge series of intermixed deps
and networks and services just to start, then find the one that is failing (if
that is the one failing and it's not just being masked by another failed
service that you didn't know about).

------
closeparen
What does the Kubernetes configuration format offer over configuration
management systems like Ansible, Salt, Puppet, Chef, etc?

~~~
p_l
Having extensively used Chef and K8s, the difference is that they try to deal
with chaos in an unmanaged way (Puppet is the closest to "managed"), but when
dealing with wild chaos you lack many ways of enforcing order. Plus they
don't really do multi-server computation of resources.

What k8s brings to the table is a level of standardization. It's the
difference between bringing some level of robotics to manual loading and
unloading of classic cargo ships, vs. the fully automated containerized ports.

With k8s, you get structure: you can wrap an individual program's
idiosyncrasies into a container that exposes a standard interface. This
standard interface then lets you easily drop it onto a server, with various
topologies, resources, networking etc. handled through common interfaces.
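
To make that concrete, a minimal Deployment is roughly what "wrapping
idiosyncrasies behind a standard interface" looks like in practice; the app
name, image, and port below are all hypothetical placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - name: example-app
        image: registry.example.com/example-app:1.0
        ports:
        - containerPort: 8080   # the one interface k8s needs to know about
        resources:
          limits:
            memory: 256Mi      # idiosyncrasies stay inside the container
```

Everything behind that interface (language runtime, native deps, init
quirks) is the container's problem, not the cluster's.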

I had said that for a long time, but recently I got to understand just how
much work k8s can "take away" when I foolishly said "eh, it's only one server,
I will run this the classic way." Then I spent 5 days on something that could
have been handled within an hour on k8s, because k8s virtualized away HTTP
reverse proxies, persistent storage, and load balancing in general.

Now I'm thinking of deploying k8s at home, not to learn, but because I know
it's easier for me to deploy nextcloud, or an ebook catalog, or whatever,
using k8s than by setting up a more classical configuration management system
and dealing with the inevitable drift over time.

~~~
naringas
> Now I'm thinking of deploying k8s at home, not to learn, but because I know
> it's easier for me to deploy nextcloud, or an ebook catalog

can't you do that just with containers?

~~~
aequitas
But what do you use to manage those containers and surrounding infra
(networking, proxies, etc)? I've been down the route of using Puppet for
managing Docker containers on existing systems, Ansible, Terraform,
Nomad/Consul. But in the end it all is just tying different solutions together
to make it work. Kubernetes (in the form of K3s or another lightweight
implementation) just works for me, even in a single-server setup. I barely
have to worry about the OS layer: I just flash K3s to a disk and only have to
talk to the Kubernetes API to apply declarative configurations. The only
things I sometimes still need the OS layer for are networking, firewall, or
hardening of the base OS. But that configuration is mostly static anyway, and
I'm sure I will find some operators to manage even that through the
Kubernetes API as IaC if I really need to.

~~~
bisby
I used to have a bunch of bash scripts for bootstrapping my docker containers.
At one point I even made init scripts, but that was never fully successful.

And then one day I decided to set up kubernetes as a learning experiment.
There is definitely some learning curve about making sure I understood what
deployment, or replicaset or service or pod or ingress was, and how to
properly set them up for my environment. But now that I have that, adding a
new app to my cluster and making it accessible is super low effort. I have
previous yaml files to base my new app's config on.
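
For what it's worth, the "making it accessible" part usually boils down to a
Service plus an Ingress along these lines (names and hostname are
placeholders, using the current networking.k8s.io/v1 API):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp          # matches the Deployment's pod labels
  ports:
  - port: 80
    targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
spec:
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp
            port:
              number: 80
```

New apps mostly mean copying this pair and changing the names, which is why
having previous YAML files to base things on makes it so low-effort.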

It feels like the only reason not to use it would be learning curve and
initial setup... but after I overcame the curve, it's been a much better
experience than trying to orchestrate containers by hand.

Perhaps this is all doable without kubernetes, and there is a learning curve,
but it's far from the complicated nightmare beast everyone makes it out to be
(from the user side, maybe from the implementation details side)

------
nelsonenzo
as a sys-admin, I like k8s because it solves sys-admin problems in a
standardized way. Things like, safe rolling deploys, consolidated logging,
liveness and readiness probes, etc. And yes, also because it's repeatable. It
takes all the boring tasks of my job and lets me focus on more meaningful
work, like dashboards and monitoring.

~~~
honkycat
Yep, same here. Once you learn it, it is a standardized consistent API and
becomes a huge force multiplier

~~~
p_l
k8s is a lever to scale sysadmins power, not scale services to huge numbers.
:)

------
mancini0
Let's use Bazel, and Bazel's rules_k8s, to build/containerize/test/deploy only
the microservices of my monorepo that changed.

Let's use Istio's "istioctl manifest apply" to deploy a service mesh to my
cluster that allows me to pull auth logic / service discovery / load balancing
/ tracing out of my code and let Istio handle it.

Let's configure my app's infrastructure (Kafka (Strimzi), Yugabyte/Cockroach,
etc.) as yaml files. Being able to describe my kafka config (foo topic has 3
partitions, etc) in yaml is priceless.
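
A Strimzi topic definition along these lines is presumably what's meant here
(the cluster and topic names are placeholders, and the exact apiVersion
depends on the Strimzi release):

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: foo
  labels:
    strimzi.io/cluster: my-cluster   # the Kafka cluster this topic belongs to
spec:
  partitions: 3
  replicas: 2
```

The Strimzi topic operator watches these resources and reconciles the actual
Kafka topics to match, so the YAML stays the source of truth.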

Let's move my entire application and its infrastructure to another cloud
provider by running a single bazel command.

k8s is the common denominator that makes all this possible.

~~~
MuffinFlavored
> k8s is the common denominator that makes all this possible.

can't... terraform make all of that possible?

~~~
p_l
Terraform explicitly doesn't want to deal with deployment of stuff that is
inside VMs etc. and tries to tell you to use managed services or cloud-config
yamls as the solution.

You can write your own providers, and you can use the provisioner support,
but TF doesn't like it and it shows.

------
gatvol
K8s is great - if you are solving infrastructure at a certain scale. That
scale being a Bank, Insurance Company or mature digital company. If you're not
in that class then it's largely overkill/overcomplex IMO when you can simply
use Terraform plus a managed Docker host like ECS and attach cloud-native
managed services.

Again, the cross-cloud portability is a non-starter unless you're really at
scale.

~~~
p_l
Hard disagree.

What k8s really scales is the developer/operator power. Yes, it is complex,
but pretty much all of it is _necessary_ complexity. At small enough scale
with enough time, you can dig a hole with your fingers - but a proper tool
will do wonders to how much digging you can do. And a lot of that complexity
is present even when you do everything the "old" way, it's just invisible
toil.

And a lot of the calculus changes when 'managed services' stop being cost
effective or aren't an option at all, or you just want to be able to migrate
elsewhere (that can be at low scale too, because of being price conscious).

~~~
gatvol
We have a mature TF module library and can roll out complex, well configured
infra in a matter of hours, reliably. That said it's platform specific.

Sure, managed service costs are certainly a thing, but to my point, that only
really starts to become an issue at significant scale, assuming you're well
configured.

~~~
p_l
Or when you're small bootstrapped company.

The cost metrics that make "it's cheaper to use managed service than pay the
cost of extra engineer to specialize in infrastructure" aren't universal. In
fact, I usually have to work from the opposite direction, where hiring a
senior Ops specialist who can wrangle everything from shelving the physical
hardware to network-booting a k8s cluster on-premises can be cheaper than
Heroku/AWS/etc.

------
hyperbovine
Because Google made it. Same thing with Tensorflow. And, fun fact, both are
massively overhyped and a real PITA to learn and use. But Google uses it, so
hey.

~~~
t_sawyer
This just isn’t true. I’ve never used Tensorflow but Kubernetes is great.

We moved to it from docker swarm because docker swarm still has a lot of
glitches with its overlay network. Rolling upgrades would leave stale network
entries, and it's impossible to reproduce: sometimes it happens, sometimes it
doesn't.

With a managed solution, Kubeadm, or RKE it’s not hard to deploy anymore. All
our infrastructure is in code, is immutable, and if you’re careful can be
deployed into any kubernetes cluster.

Just like Docker has been great for easily deploying open source products,
kubernetes is great for doing the same thing when you need to deploy
horizontally. It’s easy for OSS to provide a docker image, a docker compose
file for single node deploy, and Kubernetes yaml for a horizontal deploy.

------
jfrankamp
I'll add my use case: we use hosted kubernetes to deploy all of our branches
of all of our projects as fully functional application _stacks_, extremely
similarly to how they will eventually run in production. Want to try something
and show it to someone in the product owner level? Ok there will be a kube
nginx-ingress backed environment up in build-time + a few minutes.

The environments advertise themselves via that same modified ingress's default
backend. We stick a tiny bit of deploy yaml in our projects, the deployments
kube tagging gives us all the details we need to provide diffs, last build
time, links to git repos, web sites etc for the particular environment. The
yaml demonstrates conclusively how an app could or should be run, regardless
of os or software choice, so when we hand it to ops folks there is a basis for
them to run from.

------
namelosw
There's nothing too novel about Kubernetes; similar patterns could be seen in
Erlang many years ago, though at different abstraction levels.

However, because enterprise ops prior to Kubernetes are both costly and
brittle, Kubernetes just works for enterprises.

We had a huge PowerShell codebase and it was a nightmare to maintain. In the
meantime, it was nowhere near as robust as Kubernetes.

It's just as simple as that: sure, Kubernetes seems complex, but most
enterprise stuff is even worse. At the same time, despite being costly, the
quality is usually pretty crappy because those scripts are written under
delivery pressure.

------
cbushko
You can tell by the volume of comments how interested in the topic the
community is.

I've noticed that there are a lot of replies such as "it is overhyped" and "I
can just run a VM".

Kubernetes may not be for you, as your use case may not match what it does and
solves. Kubernetes provides a standard way of running your applications. It is
complex but logical. Yaml sucks but it is simple and logical. I prefer to use
terraform for kubernetes but it is the same thing, simple and logical. You
cannot say the same with puppet, chef, ansible etc. All of those configuration
tools are a big mess of different setups and scripts. I can go to any company
and understand how their system works quite quickly. It makes searching for
answers easy too because it is standard.

When you are running several services and there is an outage, it is a godsend.
You can instantly view the status of things, how they are configured and when
they changed. That is POWERFUL.

It takes a while to understand how all of the resources fit together but that
is the same case with any type of deployment system and/or operating system.

p.s. I am not running that huge of a system, maybe about 5k containers total
between dev, staging and prod. Maybe 500k requests a day. Running a couple
kubernetes clusters is significantly nicer than running things in ECS.

------
Fiahil
TIL about kudo, The Kubernetes Universal Declarative Operator. We've been
doing the exact same things in a custom go CLI for 2 years.

The kubernetes ecosystem is really amazing and full of invaluable resources.
It's vast and complex, but well thought out. Getting to know all the ins and
outs of the project is time-consuming. So many things to learn and so little
time to practice...

~~~
hartem_
I work on KUDO team. Would love to hear what you think about it. All devs hang
out in #kudo channel on Kubernetes community slack, please don’t hesitate to
join and say hi.

------
eyberg
There's a silent majority of people that don't use k8s (or containers) - hell
there is a significant portion of servers that don't even use linux. I find
the majority of engineers my age (mid 30s) think it is nothing more than
straight marketing. Between said marketing-fueled VC dollars and "every
company is a software company" there's a very good reason why k8s has taken
off, but I'd ask the following:

Why should it have?

Many people I talk with will complain about security, performance and
complexity of k8s (and containers in general). Non-practicing engineers (read:
directors/vps-eng) will complain about the associated cost with administering
their k8s clusters both in terms of cloud cost and devops personnel cost.

Someone earlier mentioned it was the new wordpress - I don't think that's an
unfair comparison, although I would challenge the complexity/cost of it.

~~~
harpratap
You don't necessarily HAVE to use K8s to get the advantages of it. Use
something like Knative and you're good to go. Google has Cloud Run, and Azure
will soon come up with a similar abstraction on top of Kubernetes.

------
pyrophane
Honestly, at least with GKE, hosting applications on managed k8s is not that
complicated, to the point that I don't think it is a poor choice even for
small teams who might not need all the bells and whistles of k8s. That is, so
long as that small team is already on board with CI and containers.

------
biggestlou
Kubernetes got popular because it was the first system that came along that
provided a CRUD API for resources of all kinds, including custom resources
(CRDs), and was immediately compatible with public artifact hubs like
DockerHub and Google Container Registry. The second one is the real kicker
here, and I think is why Kubernetes "won" and Mesos et al did not. With Mesos
et al you had to set up your own artifact storage. As powerful as Mesos was,
there was no MesosHub.

Longer term, I think the contribution of Kubernetes will be getting us used to
a resource/API-driven approach to infrastructure that abstracts away cloud
providers, hardware, etc. But it will probably be superseded in the coming
years by something that honors similar API "contracts." Probably written in
Rust _troll_

------
bvandewalle
I'm using Kubernetes extensively in my day-to-day work, and once you get it up
and running and learn the different abstractions, it becomes a single API to
manage your container, storage, and network ingress needs, making it easy to
take a container and get it up and running in the cloud with an IP address
and a DNS name configured in a couple of API calls (or defined as YAMLs).
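
As a sketch of that "IP address and DNS in a couple of API calls" point: a
LoadBalancer Service gets a cloud IP assigned automatically, and if the
external-dns controller happens to be installed, the annotation below would
also create the DNS record. The app name and hostname are placeholders:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp
  annotations:
    # only acted on if external-dns is running in the cluster
    external-dns.alpha.kubernetes.io/hostname: myapp.example.com
spec:
  type: LoadBalancer   # the cloud provider provisions an external IP
  selector:
    app: myapp
  ports:
  - port: 80
    targetPort: 8080
```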

That being said, I will also be the first one to recognize that PLENTY of
workloads are not made to run on Kubernetes. Sometimes it is way more
efficient to spawn an EC2/GCE instance and run a single docker container on
it. It really depends on your use-case.

If I had to run a relatively simple app in prod I would never use Kubernetes
to start with. Kubernetes starts to pay itself off once you have a critical
mass of services on it.

~~~
harpratap
One could argue if you have a tiny set of services you are better off using a
managed offering like AWS Lambda or Cloud Run

~~~
garethmcc
There are organisations with 1000's of services on Serverless seeing enormous
benefits in reduced management overhead and reduced costs compared to the
Kubernetes solution they previously ran.

~~~
bvandewalle
My issue with serverless though is that you need to refactor your code to make
it work specifically for it. If you don't start to think serverless on day one
it gets more and more difficult to convert to it down the road.

------
hinkley
Kubernetes, I think, exists in the 'hedgehog' at the middle of the diagram of
various drives and fears.

There is some tech so simple that you just learn it and start using it, others
that you know you can pick up when the time is right.

And software you would be happy to invest time in... as long as someone is
paying you to do it, software you fear might keep you from getting a job if
you don't invest in it.

There is software so simple it might be right (it isn't) and software so
complicated that it must be important if people are using it/working on it.

So it's not that Kubernetes is good, it's just that it makes people neurotic
enough to jump on the bandwagon. Been a few of those in my career. A few have
stuck, most have not.

------
suchitpuri
Kubernetes is insurance for companies against getting locked in proprietary
technologies.

It also promotes immutable infrastructure and hence increases portability.
While some things, like load balancers and ingress, are controlled by the
cloud provider, almost everything else can be seamlessly migrated to another
cloud provider or on-prem.

It makes dev, test, staging, and prod environments consistent and also solves
a lot of pain points of managing infrastructure at scale, with autoscaling,
auto-healing, and more. Istio adds a lot more to Kubernetes and makes
supporting microservices even easier.

It's going to be an important piece in the hybrid world, as it brings a lot of
standardization and consistency to two disparate environments.

------
justicezyx
I mean, k8s draws on 12+ years of experience from thousands of high-caliber
engineers. It's like delivering modern cars to the Chinese market in the
1970s. Of course it will be popular...

------
INTPenis
I can only speak for myself as a relatively late adopter, right around early
2020.

I only consider that late because I've been reading the hype around k8s for
many years already.

Became a late adopter of containers just before k8s actually. Now I've
migrated most of my setups both privately and professionally to containers.
And setup my first k8s clusters both at work and in my homelab.

So my perspective is that containers are first and foremost an amazing way of
deploying software because all that complexity I did in ansible to deploy the
software has been moved to the container image.

The project itself now, be it Mastodon, Jitsi, Synapse to name a few, package
most of their product for me in automatic build pipelines. All I need to do is
run and configure it.

And therefore, moving on to k8s, it would stand to reason that some of those
services are able to be clustered. Where better to do such clustering than
k8s?

That's just an ops perspective. We also have devs where I work and with k8s
they're able to deploy anything from routes down to their services using
manifests in CD pipelines. What's not to like?

Only reason one might get disenchanted with k8s is if you expect it to be a
one-stop solution for your aging .net application. Not saying you can't deploy
that in k8s, I'm just using it as an example of something that might not be
microservice ready.

------
joana035
Kubernetes is getting popular because it is a no-brainer api to existing
things like firewall, virtual ip, process placement, etc.

It's basically running a big computer without even trying.

------
seph-reed
It's a developer tool made originally by google. Of course it's popular. Which
isn't to say it's bad, it's just not much of a question as to why it's
popular.

\-------

Kubernetes - kubernetes.io

Kubernetes is an open-source container-orchestration system for automating
application deployment, scaling, and management. It was originally designed by
Google, and is now maintained by the Cloud Native Computing Foundation.

Original author(s): Google

------
dblooman
If the question was, Why is kube getting so popular with developers, it might
get a different response. I wonder how many software developers come to
kubernetes through the templated/helm chart/canned approach made by their
DevOps team. Not that this isn't a fine approach, but I find it a different
conversation to, say, Serverless, where anyone can just jump in.

After spending 18 months working on bringing kubernetes(EKS) to production,
with dozens of services on it, the time was right to hand over migrating old
services to the software engineers who maintain them. Due to product demands,
but also some lack of advocacy, this didn't happen, with the DevOps folks
ultimately doing the migration and retaining all the kubernetes knowledge.

An unpopular opinion might be that Kubernetes is popular because it gives
DevOps teams new tech to play with, with long lead times for delivery given
its complexity. Kubernetes usually is a gateway to tracing, service meshes and
CRDs, which while you don't need at all to run Kubernetes, they will probably
end up in your cluster.

------
peterwwillis
Every person I know who wants to use k8s has never had to maintain it.

 _" Developers love it!"_ Yeah, I'd love someone to drive my car for me, too.
Doesn't mean it's a great idea to use technology so complex you have to hire a
driver (really several drivers) to use it.

If you already have 3 people working for you that (for example) understand
etcd's protocols or how to troubleshoot ingress issues or how to prevent (and
later fix) crash loops, maybe they can volunteer to babysit your cluster for
you, do all the custom integration into the custom APIs, keep it secure, etc.
But eventually they may get tired of it and you'll have to hire SMEs.

If you're self-hosting a "small" k8s cluster and didn't budget at least $500k
for it, you're making a mistake. There are far simpler solutions to just
_running a microservice_ that don't require lots of training and constant
maintenance.

Complexity isn't always bad, but unnecessary complexity always is.

------
kureikain
Before K8S, to run a service you needed:

\- Setup VM: its dependencies and toolchain. If you use a package that has a
native component, such as image processing, you even need to set up a
compiler on the VM

\- Deployment process

\- Load balancer

\- Systemd unit to auto-restart it, set memory limits, etc.

All of that is done in K8S. As long as you ship a Dockerfile, you're done.
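
For comparison, that last pre-k8s step (systemd unit to auto-restart the
service, set memory limits) might look something like this; the paths and
names are made up:

```ini
# /etc/systemd/system/myservice.service -- hypothetical pre-k8s unit
[Unit]
Description=My service
After=network.target

[Service]
ExecStart=/usr/local/bin/myservice
Restart=always        # k8s equivalent: restartPolicy + liveness probes
RestartSec=5
MemoryMax=256M        # k8s equivalent: resources.limits.memory

[Install]
WantedBy=multi-user.target
```

In k8s, the same intent lives in the pod spec, alongside the deployment and
load-balancing pieces, instead of being scattered across the VM.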

~~~
takeda
I feel like you're simplifying things unnecessarily, all of the things you
mentioned you still configure, except the configuration is now in YAML.

------
jariel
How big does your company get before you need to step away from a tiny handful
of very large EC2s?

If you have a 16-CPU EC2 for your business logic, one for your DB, and you're
smartly hosting your static content elsewhere or via Cloudflare ... I mean,
you need to be a 'big company' before going too far beyond that.

What gives? What are all these startups doing?

This is not a story about K8s; this is entirely something else. It's about
psychology, complexity, and our love of it, or rather our 'belief' that
complexity = productivity; that solving 'the hard infra problem' must
inherently be 'good for the company' because it 'feels difficult', and
therefore must be doing something powerful, or at least gaining some kind of
competitive advantage.

(Aside from the 'Docker is Useful and K8's follows' point which actually makes
sense a little bit ...)

------
lkrubner
I've tried to make the argument that Kubernetes introduces a level of
complexity that should make everyone think twice before diving into that eco-
system. I've tried to make this argument using both detailed, factual
arguments, and also by using humor and parody. I am confused why Kubernetes
has so much momentum, especially when you consider that most of the things we
want (isolation, security, dependency management, flexible network topologies)
can be gained much more simply with Terraform and Packer. With a mix of humor
and detailed factual analysis, my most recent attempt to make this argument is
here:

[http://www.smashcompany.com/technology/my-final-post-
regardi...](http://www.smashcompany.com/technology/my-final-post-regarding-
the-flaws-of-docker-kubernetes-and-their-eco-system)

------
gofreddygo
\- It's free

\- Most code running on k8s hasn't hit full production load yet.

\- Where it has worked well, it's been managed by devs that know what they are
doing.

\- It's something worth putting on a backend dev resume

\- Apparent cost savings ('we just need 1 VM instead of 5', 'we can auto scale
to infinity', 'we don't have to pay for AWS, we get it all on our own VMs').

Wait a few months and we will see a flurry of posts that read 'why we moved
away from kubernetes', 'top 5 reasons to not use kubernetes', 'How using
kubernetes fucked us, in the ass', 'You don't need kubernetes', 'Why I will
never work on a project that uses kubernetes', 'Hidden costs of kubernetes',
and so on.

C'mon, you know how this works. Just take the time and read the docs. They
are well written. (They just don't mention where k8s does not work well.)

~~~
gofreddygo
Exhibit 1 : Why Kubernetes is not part of our stack [1]

[1]:
[https://news.ycombinator.com/item?id=23460066](https://news.ycombinator.com/item?id=23460066)

------
StreamBright
Because developers are lazy.

I don't want to do memory management -> GC

I don't want to do packaging -> Docker

I don't want to do autoscaling -> Kubernetes

------
mmcnl
Kubernetes is great, but it is also very complex and almost an entirely new
paradigm to learn and understand. I feel like there's a huge void between no
Kubernetes and Kubernetes that isn't being filled yet. Dealing with and/or
managing Kubernetes is a task of its own; I have the feeling that container
orchestration doesn't have to be that complex.

Something like an easy to use (and operate!) multi-tenant docker-compose on
steroids with user management/RBAC and a built-in Docker image repository that
gets out of your way would be amazing for small teams / startups that don't
want to deal with the complexity of Kubernetes.

------
bg24
\- There are big names behind it.

\- It will replace VM orchestration platforms.

\- Fear of missing out.

Jokes aside, when you've lots of teams, all working on small pieces of a large
product and shipping on their own, iterating fast... you need a platform and
ecosystem on top to meet their requirements. As you reach planet-scale, you
need to NOT let your cost grow exponentially. Hence it is popular.

What if you're not planet-scale? Well, it will still help (attract talent,
design for scale, better ecosystem etc.). Hence it is popular.

If you're building a business however, focus on business and time-to-market,
definitely not the infra, i.e. kubernetes.

------
wadkar
I think kubernetes is to Infra what RoR was to Web. Not necessarily in terms
of architectural style of MVC, but more towards standardization of similar
enough problems that can be put into a mutually agreed convention.

~~~
harpratap
[https://www.youtube.com/watch?v=ZqQTEdHVaCw](https://www.youtube.com/watch?v=ZqQTEdHVaCw)

------
znpy
in my opinion it kinda sets a common lingo between development people and
operations people.

operations details are hidden from developers and development details (the
details of the workload) are hidden from the operations engineers.

------
iudqnolq
I've been completely perplexed by how I might repeatably and reliably set up
a single DigitalOcean (or similar) server.

I can't just blow away the instance, make a new one with their API, and run a
bash script to set it up because I need to persist some sqlite databases
between deploys.

Nix looks promising, but also seems to be a lot to learn. I think I'd rather
focus on my app than learn a whole new language and ecosystem and way of
thinking about dependencies.

I don't think my needs are insane here, I'm surprised there seems to be no
infrastructure as code project for tiny infrastructures.

~~~
t_sawyer
I’m a huge K8s advocate, but if all you’re looking for is a repeatable way to
spin up DO droplets, then use a user-data script.

User data is a bash script that can be automatically run when the machine
first spins up.

You could pass that script via digital oceans cli or even a tool like
terraform.
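
A minimal sketch of that approach; the package names, volume path, and the
doctl invocation below are illustrative assumptions, not a recipe:

```shell
#!/bin/sh
# cloud-init user data: runs once, on the droplet's first boot.
apt-get update
apt-get install -y nginx sqlite3

# attach the persistent block-storage volume that outlives the droplet,
# so the sqlite databases survive a rebuild
mkdir -p /mnt/data
mount -o defaults /dev/disk/by-id/scsi-0DO_Volume_appdata /mnt/data

systemctl enable --now nginx
```

Passed at creation time with something like `doctl compute droplet create web
--user-data-file setup.sh ...`, or via Terraform's `user_data` argument, so
blowing away and recreating the droplet stays a one-liner.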

------
javajosh
Easy: the perception of infinite overhead. The cloud itself (e.g. EC2) gives
you capacity-on-demand, but the glue to make all the nodes work together is
missing. And it's a really hard problem in general, because it's distributed
systems. K8s fills that demand, or seeks to. The alternative is to roll your
own, which is possible but expensive, error-prone, and difficult to hire for.
(My last company discovered this to their detriment after sinking a LOT of
time into building out a really complex Salt+Vagrant+AWS solution, and then
decided to migrate to k8s).

------
hootbootscoot
Beats me. It doesn't correspond to either virtual servers or hypervisors. It
certainly doesn't correspond to real hardware. Cloud OS my butt... "Hey, let's
take a zillion commodity cloud provider instances running on hypervisors, then
install Ubuntu, then run Kubernetes on them, then run docker containers on
them and fiddle about all day with yaml trying to make internal networking do
insecure things to imitate real world infrastructure"

Just use Ansible if you miss YAML, and you can actually deploy to real
hardware.

------
arein2
It has most of the features needed.

Everyone was trying to make a system that is simple and widely adopted, but
if you want it to be adopted, it's going to need a lot of features. Also,
Google worked some real magic in getting Kubernetes supported by all the
cloud providers.

It's a framework that will enable you to do what you want, while being the
standard.

You could write your script to do that in a simpler way, but most people
already know the standard and it's easier for everybody to understand
Kubernetes rather than your clever solution.

------
devin
I've seen people extolling the many benefits of Kubernetes. For those who are
all in, how does something like CDK compare?

------
gabordemooij
I served millions of users with a single low-end server. Kubernetes is just a
sign that people can't code anymore.

------
CuriousSkeptic
As a follow up question. I’ve been running on Azure Service Fabric for a
little over three years now and been quite happy with it so far.

But it doesn’t seem to generate quite the same buzz as kubernetes. Not even
within the azure/win/.net part of the world.

So has anyone here worked with both who could share some experience?

------
shp0ngle
I learned to like kubes, but... _why on earth is it YAML_ :(

yaml is such a horrible format that I would even prefer JSON...

------
clvx
In a side note if you were to invest your time in writing operators, would you
use kubebuilder or operator-sdk?

~~~
vkat
Both use controller-runtime underneath so there is not much difference between
the two. I personally have used both and prefer kubebuilder

------
lyjackal
I think kubernetes is great conceptually if you're running on the cloud, but
it's a very complicated domain with a lot of ecosystem churn. Things break a
lot if you're not careful. Upgrading dependencies is a constant pain.
Certainly a time sink.

------
andbot
This article adds nothing to the common knowledge that everyone with a
bachelor's degree in computer science is already aware of. Can anyone tell me
what I missed? Or what the reason is that this post is trending among people
whose background I cannot grasp?

------
pea
Do you guys think k8s is doing a job which previously the jvm did in
enterprise? i.e. if everything is on the jvm, building distributed systems
doesn't require a network of containers.

Can k8s success be explained partly due to the need for a more polyglot stack?

~~~
verdverm
How do you roll over a fleet of JVM applications with zero downtime and
maintain rollback revision history?

Is it as easy as two simple commands?
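
Assuming a standard Deployment (all names here are hypothetical), the two
commands presumably meant are:

```shell
# roll every replica forward with a zero-downtime rolling update
kubectl set image deployment/myapp myapp=registry.example.com/myapp:v2

# roll back to the previous revision if it goes wrong
kubectl rollout undo deployment/myapp
```

Revision history is kept automatically (bounded by the Deployment's
`revisionHistoryLimit`) and can be inspected with `kubectl rollout history
deployment/myapp`.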

------
alexbanks
I thought it was pretty insane yesterday when I read a YC-backed recruiting
company was using Kubernetes. Absolutely insane. It's become the new, hottest,
techiest thing that every company has to have even when they don't need it.

~~~
Bob_LaBLahh
It's perfectly sane if their team already knows how to use K8s, especially if
they use a hosted solution like GKE or Digitalocean K8s. (I'll admit that I'd
never want to manage my own k8s cluster.)

Once you know K8s, it's not very difficult to use. Plus, it provides solutions
to a lot of different infrastructure-level problems.

------
LoSboccacc
the main thing I like about it: configuration. It's trivial to split
integration configuration from application configuration from deployment
configuration, and it's trivial to version configurations.

it's not unique in what it does, but even with puppet and the like you always
had this or that exception because of networking, provider images, varying
selinux defaults, etc.

kubernetes on its own already covered most ground, but configmap and
endpoints really tie it together in a super convenient package
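
As a sketch of that split, application configuration can live in a ConfigMap
versioned separately from the image; the name and keys are placeholders:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: myapp-config
data:
  LOG_LEVEL: "info"
  FEATURE_X: "enabled"
```

A pod then pulls it in with `envFrom: [{configMapRef: {name: myapp-config}}]`
or mounts it as a volume, so the same image runs unchanged across
environments.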

it's not without pitfalls, like ms aks steal 2gb from each node so you have to
be aware of that and plan accordingly, but still.

~~~
cinquemb
> it's not without pitfalls, like ms aks steal 2gb from each node so you have
> to be aware of that and plan accordingly, but still.

This is what I hate a lot about things like k8s, docker, etc.: the memory
profile… it pretty much makes it a non-starter if you want to run it on
anything low-cost.

------
kgraves
As a manager, I've heard about 'kubernetes' in all my meetings; I had a look
at it and have always questioned the cost of managing it.

What is the cheapest way to setup a production kubernetes on a cloud provider?

~~~
csunbird
DigitalOcean has a managed Kubernetes service that does not cost anything
beyond the resources you use. The master node and management are free; you
only pay for the node pools and things like block storage (their version of
EBS) or load balancers.

~~~
frompdx
I have used DO for managed Kubernetes since it was available and I am very
happy with it.

------
Jestar342
One thing I've discovered when hiring is that if I'm not using things like
Kubernetes, I don't get as many candidates applying. I don't get candidates
of the same quality, either.

------
ashtonkem
My opinion is that Kubernetes is the common integration point. Tons of stuff
works with Kubernetes without having to know about each other, making
deployments much much easier.

------
holidayacct
Because Google is an advertising company, their search engine controls what
people believe, and they also have some good engineers, though those are
probably not well known. There is very little they couldn't advertise into
popularity. Whenever you see overcomplicated software or infrastructure, it's
always a way to waste executive function, create frustration, and add
unnecessary mental overhead. If the technology you're using isn't making it
easier to run your infrastructure from memory, reducing your use of executive
function, and decreasing frustration, then you should ignore it. Don't fall
for fashion trends.

~~~
verdverm
Please don't criticize, condemn, or complain if you don't have anything
constructive to add.

~~~
holidayacct
I'm not criticizing. I've actually used Kubernetes and read the source code.
It's a good tool; I just think it's too much mental overhead for most
companies, since they won't use most of what it provides. If you're working
on a large team with responsible parties who have clearly defined roles, it
is a great tool, but I've seen two-person projects with startup
infrastructure waste obscene amounts of time learning Kubernetes when they
could have just stood up something basic with configuration management to get
started and migrated to Kubernetes when it was reasonable to do so. People
need to start with a goal and then ask what tool meets the objectives of that
goal. In a lot of cases people complain about the tool they are using because
they start with Kubernetes and then try to figure out how they can use it on
the job.

~~~
verdverm
Your point is to do something simple because k8s is hard? 1) Even small-scale
dev teams and businesses still need non-simple software processes. 2)
Learning Kubernetes is easier now than learning the underlying cloud. It's
really about all the other things k8s provides. Maybe you haven't seen it
used in enough contexts yet to appreciate those other benefits?

------
nova22033
This is the wrong question. The question should be why are containers so
popular? If you're going to use containers, kubernetes makes it easier to do
so.

------
kerng
Not a microservices guru, but why are big companies (most famously Uber, who
was sort of spearheading it) starting to abandon this architecture?

------
pjmlp
Somehow k8s capabilities look like a description of WebSphere feature list,
just done with cooler technology for younger generations.

------
tamrix
Some developers just refuse to admit there are trends in development.

Kubernetes is popular because it's the new 'cool'.

------
semasad
I love reading that some tech we've been using at Nursoft.co for over a year
is "getting" popular; it feels good.

------
gabordemooij
Kubernetes is popular because developers want names on their CVs. A couple of
shell scripts will get you anywhere.

------
hinkley
Am I the only one who noticed that 'Innovation' was by far the shortest
section of that article?

------
shakil
Call me biased [1], but K8s will take over the world! Yes, you get containers
and microservices and all that good stuff, but now with Anthos [2] it's also
the best way to achieve multi-cloud and hybrid architectures. What's not to
like!

1\. I work for GCP

2\. [https://cloud.google.com/anthos/gke](https://cloud.google.com/anthos/gke)

~~~
spyspy
Is there any benefit of Anthos over deploying straight to GKE if you're
already bought into GCP? We've had this debate several times recently and
can't come up with a good answer.

~~~
shakil
If you are bought in to GCP and plan to stay there, then maybe not much. OTOH,
Anthos would allow you to do easier migrations from on-prem, support hybrid
workloads, or consolidate multi-cloud clusters including those running on say,
AWS [1] if you like.

1\. [https://cloud.google.com/blog/topics/anthos/multi-cloud-
feat...](https://cloud.google.com/blog/topics/anthos/multi-cloud-features-
make-anthos-on-aws-possible)

------
yalogin
Google created it but did they get any benefit from it? Did it help in getting
any business for GCP?

~~~
jonahbenton
Two different questions.

To the first: yes, enormously so. If you know your history, it is the Linux
to the Microsoft that is AWS, except backed by a business. (Google is maybe
Red Hat in that story, but the analogy is more inaccurate than accurate.)

To the second, not really. GCP is mostly turning into an ML play.

------
AzzieElbab
Because the demos are awesome and there is a lot of money to be made in
getting it beyond demos

------
JackRabbitSlim
I get the feeling K8s is the modern PHP: software that's easy to pick up and
use without complete understanding and still get something usable, even if
it's not efficient and results in lots of technical debt.

And like PHP, it will be criticised with the power of hindsight but will
continue to be used and power vast swaths of the internet.

~~~
iso-8859-1
But languages are easy; there is the whole field of PL theory to draw from.
If you're randomly throwing things together like Lerdorf was, there's a
missed opportunity.

But what is the universally regarded theory that k8s contradicts? I don't
think there is one.

~~~
p_l
In fact, I'd say that k8s is unusually heavily steeped in high-brow theories
from both the engineering and AI spaces, just not necessarily ones that enjoy
hype right now.

The apiserver's storage essentially works as the distributed blackboard in a
"Blackboard System", with every controller being an agent in such a system.
Meanwhile, the agents themselves approach their tasks from control theory;
the oft-used comparison is with PID controllers.
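
The control-theory comparison can be illustrated with a toy reconcile loop:
each controller repeatedly compares desired state against observed state and
emits a corrective action. This is a deliberately simplified sketch, not real
controller code:

```shell
# Toy "controller" step: given desired and observed replica counts,
# print the corrective action a real controller would take.
reconcile() {
  desired=$1
  observed=$2
  if [ "$observed" -lt "$desired" ]; then
    echo "scale up by $((desired - observed))"
  elif [ "$observed" -gt "$desired" ]; then
    echo "scale down by $((observed - desired))"
  else
    echo "in sync"
  fi
}

reconcile 3 1   # prints "scale up by 2"
reconcile 3 5   # prints "scale down by 2"
reconcile 3 3   # prints "in sync"
```

A real controller runs this comparison in a loop against the apiserver,
which is what makes the blackboard analogy apt: the shared store holds the
facts, and each agent reacts to the subset it cares about.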

------
claytongulick
I completely understand the use case for Kubernetes when you're dealing with
languages that require a lot of environment config, like Python.

I've never really thought it was that useful for (for example) nodejs, where
you can just npm install your whole environment and deps, and off you go.

~~~
frompdx
I have mostly used Kubernetes for Node.js apps and find it very useful for the
following reasons:

\- Automatic scaling of pods and cluster VMs to meet demand.

\- Flexible automated process monitoring via liveness/readiness probes.

\- Simple log streaming across horizontally scaled pods running the same
app/serving the same function using stern.

\- Easy and low cost metrics aggregation with Prometheus and Grafana.

\- Injecting secrets into services.

I'd imagine there are other tools that can offer the same, but I find it
convenient to have them all in the same place.
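
The probes in the list above are declared per container in the pod spec. A
minimal hypothetical example for a Node.js app (paths, ports, and image are
made up; assumes a working cluster):

```shell
# Liveness failure => Kubernetes restarts the container.
# Readiness failure => the pod is taken out of Service rotation.
cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-app
spec:
  replicas: 2
  selector:
    matchLabels: {app: node-app}
  template:
    metadata:
      labels: {app: node-app}
    spec:
      containers:
      - name: node-app
        image: registry.example.com/node-app:1.0
        ports: [{containerPort: 3000}]
        livenessProbe:
          httpGet: {path: /healthz, port: 3000}
          initialDelaySeconds: 5
          periodSeconds: 10
        readinessProbe:
          httpGet: {path: /ready, port: 3000}
          periodSeconds: 5
EOF
```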

------
thisisnotmy
Because it’s basically turning things off and on. At scale.

------
kakoni
So, anybody doing k8s on-prem? How is it going/working?

------
maxdo
If you're on microservices, it's a no-brainer. You'd need an army of DevOps
engineers with semi-custom scripts to maintain the same thing. It really
automates a lot of stuff. Helm + Kubernetes lets our company launch
microservices with no DevOps involved. You just provide the name of the
project, push to git, and GitLab CI picks it up and does the rest from the
template. Even junior developers on our team do that from day one. Isn't that
the future we dream about? If you have too much load, it autoscales the pod;
if a node is overloaded, it autoscales the node pool; if you have a memory
leak, it restarts the app so you can sleep at night. I can provide a million
examples of how it makes managing our 100+ microservices so much simpler. No
Linux kung fu, zero bash scripts, no SSH or interaction with the OS, not a
single DevOps role for a 15+ developer team.

Our cluster management is just a simple "add more CPU or memory to this node
pool", and sometimes changing a node pool name for the deployment of a
certain service. All done via a simple cloud management UI. For those who
call microservices fancy stuff: no, we are a startup with a fast delivery and
deploy cycle. We have tons of subprojects and integrations, and our main
languages are Node.js, Golang, and Python. Some of these are not good at
multi-threading, so there's no way to run them as a monolith; another is used
only where high performance is needed. So, all together, microservices +
Kubernetes + Helm + good CI + proper pub/sub gives our backend an extremely
simple, fast cycle of development and delivery and, importantly, flexibility
in terms of language/framework/version.

What is also good is the installation of services. With Helm I can install a
high-availability Redis setup for free in 5 minutes. The same level of setup
would cost you several thousand dollars of DevOps work, plus further
maintenance and updates. With k8s it's simply helm install stable/redis-ha.
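
Spelled out in full (the `stable` chart repository this refers to has since
been deprecated and archived, so treat this as a historical sketch):

```shell
# Add the (now archived) stable repo and install the HA Redis chart.
helm repo add stable https://charts.helm.sh/stable
helm repo update
helm install redis stable/redis-ha
```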

So yeah, I can totally understand that some simple projects don't need k8s. I
can understand that you can build something in Scala or Java slowly but with
high quality as a monolith. You don't need k8s for 3 services. I can
understand that some old-school DevOps folks don't want to learn new things
and complain about a tool that reduces the need for them. Otherwise, you
really need k8s.

~~~
p_l
I will happily use k8s for that big monolith.

Because soon, instead of one program on a dev server, there is a need to run
databases, log gathering, and then multiply all of the above to do parallel
testing in clean environments, etc. etc.

Just running supporting tools for a small project where there was insistence
on self-hosting open source tools instead of throwing money at Slack and the
like? K3s would have saved me weeks of work :|

------
fmakunbound
It gets more popular mostly because it's popular.

------
alec_kendall
This seems appropriate...
[https://microservices.io/](https://microservices.io/)

------
jonahbenton
Obligatory- the best introduction to Kubernetes, from conceptual perspective,
is Google's incredible Borg paper:

[https://static.googleusercontent.com/media/research.google.c...](https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/43438.pdf)

------
PanosJee
Religion

------
zelphirkalt
In this post it might only be an example, but I don't see anything that
necessitates the use of YAML. All of it could be put in a JSON file, which is
far less complex.

YAML should not even be needed for Kubernetes. Configuration should be
representable in a purely declarative way, instead of the YAML mess with all
kinds of references and stuff. Perhaps the configuration specification needs
to be reworked. Many projects using YAML feel to me like a configuration
trash can, where you just add more and more stuff you haven't thought
through.

I once tried moving an already containerized system to Kubernetes to test
how that would work. It was a nightmare. It was a few years ago, maybe 3
years ago. There was plenty of documentation, but it really sucked. I could
not find _any_ documentation of what can be put into that YAML configuration
file, or what its structure really is. I read tens of pages of documentation,
and none of it helped me find what I needed. Then, even setting everything up
to get Kubernetes running at all took way too much time and 3 people to
figure out, and was badly documented. It took multiple hours on at least 2
days. I still remember that the necessary steps were not listed on one single
page in any kind of overview; instead, a required step was hidden on another
documentation page that was not even mentioned in the list of steps to take.

Having finally set things up, I had a web interface in front of me where I
was supposed to be able to configure pods or something. Only, I could not
configure everything I had in my already containerized system via that web
interface. It seems that this web interface was only meant for the most basic
use cases, where one does not need to provide containers with much
configuration. My only remaining option was to upload a YAML file, which was
undocumented, as far as I could see back then. That's where I stopped. A
horrible experience, and I wish never to have it again.

There were also naming issues. There was something called "Helm". To me that
sounds like an Emacs package. But OK, I guess we have these naming issues
everywhere in software development. It still bugs me, though, as it feels
like Google pushes its naming of things into many people's minds, and sooner
or later most people will associate with Google things names which have
previously meant different things.

There were 1 or 2 layers of abstraction in Kubernetes which I found
completely useless for my use case and wished were not there, but of course I
had to deal with them, as the system is not flexible enough to let me have
only the layers I need. I just wanted to run my containers on multiple
machines, balancing the load and automatically restarting on crashes; you
know, all the nice things Erlang has offered for ages.

I feel like Kubernetes is the Erlang ecosystem for the poor or uneducated,
who've never heard of other ways, with features poorly copied.

If I really needed to bring a system to multiple servers and scale and load
balance, I'd rather look into something like Nomad. It seems much simpler,
also offers load balancing over multiple machines, and can run Docker
containers and normal applications as well. Plus, I was able to set it up in
less than an hour or so, with two servers in the system.

~~~
kinghajj
You absolutely can use just JSON with Kubernetes and not YAML. The K8s
backend services store configuration as JSON, and the API protocols use JSON.
There's even a K8s configuration management tool called ksonnet that uses an
extended, JSON-like language with full programmability, instead of the
template mess of Helm charts.
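
Concretely, `kubectl` accepts JSON manifests directly; the same object can be
written either way. A small sketch (the ConfigMap name is a placeholder;
assumes a working cluster):

```shell
# Apply a manifest written as plain JSON instead of YAML.
cat <<'EOF' | kubectl apply -f -
{
  "apiVersion": "v1",
  "kind": "ConfigMap",
  "metadata": { "name": "example-config" },
  "data": { "LOG_LEVEL": "info" }
}
EOF
```

You can also round-trip in the other direction with `kubectl get configmap
example-config -o json`.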

------
courtf
I honestly couldn't tell you.

What I can tell you is that the unbelievable bloat in the complexity of our
systems is going to bite us in the ass. I'll never forget when I joined a hip
fintech company, and the director of eng told us in orientation that we should
think of their cloud of services as a thousand points of light, out in space.
I knew my days were numbered at exactly that moment. This company had 200k
unique users, and they were spending a million dollars a month on CRUD.
Granted, banking is its own beast, but I had just come from a company of 10
people serving 3 million _daily_ users 10k requests a second for images drawn
on the fly by GPUs. Our hosting costs never exceeded 20k per month, and the
vast majority of that was cloudflare.

Deploying meant compiling a static binary and copying it to the 4-6 hardware
servers we ran in a couple racks, one rack on each side of the continent. We
were drunk by 11am most of the time.

Today, it's apparently much more impressive if you need to have a team of
earnest, bright-eyed Stanford grads constantly tweaking and fiddling with 100
knobs in order to keep systems running. Enter kubernetes.

~~~
hyperbovine
> I'll never forget when I joined a hip fintech company, and the director of
> eng told us in orientation that we should think of their cloud of services
> as a thousand points of light

Let's be real, if you are old enough to get that reference without Googling,
you probably would not have lasted that long at a hip fintech company anyways
:-P

~~~
falcolas
George Sr's term wasn't that long ago.

... was it?

~~~
jsjohnst
32 years ago, so I dunno, is that “long ago” to you?

------
hardwaresofton
tl;dr: Kubernetes is a good tool, but it has been marketed and evangelized to
where it is today; its meteoric rise is not organic.

I am a huge Kubernetes fan, and think that it is a good and necessary tool
with little accidental complexity (most concepts are there because you will
likely need them and/or they are a valid concern), but my position is that
the growth of Kubernetes has _not_ been organic -- it's been heavily promoted
and marketed and pushed to where it is today.

Let's compare with a project like Ansible: first released in 2012[0], with
the first AnsibleFest in 2016[0]. Ansible is a very useful abstraction/force
multiplier for doing ops. If a dedicated conference is a measure of
community/enthusiasm reaching a fever pitch, it took 4 years for Ansible to
reach critical mass. Kubernetes had its first KubeCon in 2015[1], ONE year
after its initial release in 2014[2]. Did it reach critical mass 4x quicker
than Ansible? Maybe, but I think the simpler explanation is that the people
who want Kubernetes to succeed know that creating buzz and the _appearance_
of widespread adoption and community is more important than it actually being
there, as it becomes a self-fulfilling prophecy. Once you have enough
onlookers, people motivated to work on open source (i.e. give away labor,
time, and energy for free) will come improve your project with you, serving
as an initial user base and your biggest promoters, all the while
strengthening your ecosystem.

Another interesting side to this is how thoroughly Kubernetes _seems_ to be
crushing its competition -- DC/OS (Mesos), Nomad, and the other competitors
are not fighting a functionality war, they're fighting a marketing war. DC/OS
and Nomad are not obviously worse in function, but they certainly don't
compare when you consider ecosystem size (perceived, if not actual) and
brand. It's a winner-take-most scenario, and tech companies are particularly
good at seizing this kind of opportunity. Of course, if you compare the
resources of the entities backing these projects, it's clear who was going to
win the marketing war.

In a world of free tiers as a good way to get people locked in, developer
evangelists who build essentially propaganda projects (no matter how cool
they are), and shrinking attention spans, Kubernetes is a good tool which has
marketed itself to greatness. In its wake there are efforts like the CNCF,
which I struggle to characterize because it's hard to differentiate their
efforts to standardize from an effort to bureaucratize. I'm almost certainly
blinded by my own cynicism, but most of this just doesn't feel organic. Big,
useful open source software becomes world-renowned after years/decades of
being convenient/useful/correct/etc., but Kubernetes (and other projects
given the CNCF gold star) seems to be trying to skip this process, or at
least bootstrap a reputation out of the gate.

DevOps traditionally moved much slower -- I can remember what seemed like an
age of "salt vs ansible vs chef", with all three technologies having had lots
of times to prove themselves useful. Even the switch to containers instead of
VM/user based process isolation took more time than Kubernetes has taken to
dominate the zeitgeist.

[0]:
[https://en.wikipedia.org/wiki/Ansible_(software)](https://en.wikipedia.org/wiki/Ansible_\(software\))

[1]: [http://www.voxuspr.com/2019/03/what-is-kubecon-its-past-
pres...](http://www.voxuspr.com/2019/03/what-is-kubecon-its-past-present-and-
future)

[2]:
[https://en.wikipedia.org/wiki/Kubernetes](https://en.wikipedia.org/wiki/Kubernetes)

------
foobar_
The arguments consistently used to market software:

1\. It's portable

2\. It's fast

3\. It's declarative

4\. It's fun / productive / easy

5\. It's safe / automatic

6\. It's an integrated framework

The opposites are also used to detract from competitors.

The idea of k8s is that it will be portable across all hosting providers and
Linux distributions, as opposed to developing shell scripts for Red Hat,
especially across multiple versions. I don't think it's easy or fun or fast.

------
battery_cowboy
Because everyone chases the newest, shiniest thing in tech, and it's not cool
or fun to make boring old stuff in C, then copy one binary and maybe a config
to the server.

~~~
mwcampbell
Even if one does have a single binary and config file that one can just copy
to a server and run, there's more to non-trivial deployments than that. For
example, how do you do a zero-downtime deployment where you copy over a new
binary, start it up, switch new requests over to the new version, but let the
old one keep running until either it finishes handling all requests that it
already received or a timeout is reached? One reason why Kubernetes is popular
is that it provides a standard, cross-vendor solution to this and other
problems.
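
That behaviour maps onto the Deployment's default RollingUpdate strategy plus
a termination grace period. A sketch with illustrative numbers (names and
image are hypothetical; assumes a working cluster):

```shell
cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 4
  selector:
    matchLabels: {app: web}
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # start one new pod before stopping an old one
      maxUnavailable: 0  # never drop below the desired replica count
  template:
    metadata:
      labels: {app: web}
    spec:
      terminationGracePeriodSeconds: 60  # old pods get 60s to drain
      containers:
      - name: web
        image: registry.example.com/web:v2
        ports: [{containerPort: 8080}]
EOF
```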

~~~
battery_cowboy
Most web applications don't need any of that. Also, I didn't say k8s was
useless, just that it's the new thing everyone wants (that they probably don't
need).

~~~
mwcampbell
I disagree about what most web applications need. It's not the 90s anymore.
Everyone expects zero downtime _and_ frequent updates.

