
The future of Kubernetes is virtual machines - MordodeMaru
http://tech.paulcz.net/blog/future-of-kubernetes-is-virtual-machines/
======
amscanne
An important element of Kubernetes is that it standardizes the infrastructure
control plane, and allows different pieces to be plugged in (for networking,
storage, etc.).

The "virtual kubelet" essentially throws that all that away and keeps
Kubernetes "in API only". For example, with virtual kubelets, scheduling is
meaningless and networking and storage are restricted to whatever the virtual
kubelet target supports (if useable at all).

Personally, I think the value proposition is tenuous -- you can create VMs
today; doing so via the Kubernetes API isn't suddenly revolutionary. Just as
throwing something in a "hardware virtualized" instance doesn't suddenly make
the whole system secure.

Containers and Kubernetes are compelling for a variety of reasons. Improving
them to handle multi-tenancy is a broad challenge, but I don't think the
answer is to reduce the standard to what we have today (a bunch of disparate
VMs).

~~~
jacques_chester
> _Personally, I think the value proposition is tenuous_

Multi-tenancy is a pretty compelling value proposition when you reach any kind
of scale. If you're in a regulated sector, it's non-negotiable.

Relying on the cluster as the security boundary is very effective ... and very
wasteful.

> _Containers and Kubernetes are compelling for a variety of reasons.
> Improving them to handle multi-tenancy is a broad challenge, but I don't
> think the answer is to reduce the standard to what we have today (a bunch of
> disparate VMs)._

I think the argument is that rather than the painful (and it will be _very_
painful) and probably incomplete quest to retrofit multi-tenancy into a
single-tenancy design, we can introduce multi-tenancy where it actually
matters: at the worker node.

At first glance it's confusing to go from "one master, many nodes" to "one
node pool, many masters". But it actually works better on every front.
Workload efficiency goes up. Security surface area between masters becomes
close to nil.

Very cheap VMs are the means to that end.

Disclosure: I work for Pivotal and this argument fits our basic doctrine of
how Kubernetes ought to be used.

~~~
davidopp__
I don't think multi-tenancy has been "retrofitted" onto Kubernetes. Kubernetes
was designed with multi-tenancy in mind from the very early releases --
namespaces, authn/authz (initially ABAC, later RBAC), ResourceQuota,
PodSecurityPolicy, etc. New features are added over time, such as
NetworkPolicy (which has been in Kubernetes for a year and a half, so perhaps
not "new" anymore!), EventRateLimit, and others, but always in a principled
way. And container isolation technologies like gVisor and Kata are being
integrated via a standard Kubernetes extension point (the Container Runtime
Interface), so I do not view this work as retrofitting.

Moreover, even today there are real public PaaSes that expose the Kubernetes
API served by a multi-tenant Kubernetes cluster to mutually untrusting end-
users, e.g. OpenShift Online and one of the Huawei cloud products (I forget
which one). Obviously Kubernetes multi-tenancy isn't going to be secure enough
today for everyone, especially folks who want an additional layer of isolation
on top of cgroups/namespaces/seccomp/AppArmor/etc., but there are a lot of
advantages to minimizing the number of clusters. (See my other comment in this
thread about the pattern we frequently see of separate clusters for dev/test
vs. staging vs. prod, possibly per region, but sharing each of those among
multiple users and/or applications.)

Disclosure: I work at Google on Kubernetes and GKE.

~~~
jacques_chester
> _I don't think multi-tenancy has been "retrofitted" onto Kubernetes.
> Kubernetes was designed with multi-tenancy in mind from the very early
> releases -- namespaces, authn/authz (initially ABAC, later RBAC),
> ResourceQuota, PodSecurityPolicy, etc._

My complaint is that these require assembly and are in many cases opt-in
(making RBAC opt-out was a massive leap forward).

Namespaces are the lynchpin, but are globally visible. In fact an enormous
amount of stuff tends to wind up visible in some fashion. And I have to go
through all the different mechanisms and set them up correctly, align them
correctly, to create a firmer multi-tenancy than the baseline.

Put another way, I am having to construct multi-tenancy inside multiple
resources at the root level, rather than having tenancy _as_ the root level
under which those multiple resources fall.
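To make the "requires assembly" point concrete, here's roughly what scoping a
single tenant looks like with the official Python client -- each mechanism is
a separate object you create and align by hand (the "tenant-a"/"alice" names
are made up for illustration, and this still omits ResourceQuota,
PodSecurityPolicy, and NetworkPolicy):

    from kubernetes import client, config

    config.load_kube_config()
    core = client.CoreV1Api()
    rbac = client.RbacAuthorizationV1Api()

    ns = "tenant-a"

    # The namespace itself -- the basic grouping, but its name is still
    # visible to anyone with cluster-scope list access.
    core.create_namespace(
        client.V1Namespace(metadata=client.V1ObjectMeta(name=ns)))

    # RBAC: grant the (hypothetical) user "alice" edit rights in this
    # namespace only. A separate Role and RoleBinding per tenant.
    rbac.create_namespaced_role(ns, client.V1Role(
        metadata=client.V1ObjectMeta(name="tenant-edit", namespace=ns),
        rules=[client.V1PolicyRule(
            api_groups=["", "apps"],
            resources=["pods", "services", "deployments"],
            verbs=["get", "list", "watch", "create", "update", "delete"])]))
    rbac.create_namespaced_role_binding(ns, client.V1RoleBinding(
        metadata=client.V1ObjectMeta(name="tenant-edit-alice", namespace=ns),
        subjects=[client.V1Subject(kind="User", name="alice",
                                   api_group="rbac.authorization.k8s.io")],
        role_ref=client.V1RoleRef(kind="Role", name="tenant-edit",
                                  api_group="rbac.authorization.k8s.io")))

Forget one of those pieces, or misalign the names, and the tenant boundary
quietly has a hole in it -- which is the complaint.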

> _there are a lot of advantages to minimizing the number of clusters._

The biggest is going to be utilisation. Combining workloads pools variance,
meaning you can safely run at a higher baseline load. But I think that can be
achieved more effectively with the virtual kubelet.
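As a back-of-the-envelope sketch of the pooling effect (assuming n
independent workloads, each with mean demand mu and standard deviation
sigma):

    E[X_1 + ... + X_n] = n * mu
    sd[X_1 + ... + X_n] = sqrt(n) * sigma

Provisioning for the mean plus k standard deviations then takes
n\*mu + k\*sqrt(n)\*sigma of capacity, so the safety headroom per workload
shrinks like 1/sqrt(n) as you pool more workloads together.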

~~~
davidopp__
> The biggest is going to be utilisation. Combining workloads pools variance,
> meaning you can safely run at a higher baseline load.

Utilization is arguably the biggest benefit (fewer nodes if you can share
nodes among users/workloads, fewer masters if you can share the control plane
among users/workloads), but I wouldn't underestimate the manageability
benefit of having fewer clusters to run. Also, for applications (or
application instances, e.g. in the case of a SaaS) that are short-lived, the
amount of time it takes to spin up a new cluster to serve that application
(instance) can cause a poor user experience; spinning up a new namespace and
pod(s) in an existing multi-tenant cluster is much faster.

> But I think that can be achieved more effectively with virtual kubelet .

I think it's hard to compare virtual kubelet to something like Kata
Containers, gVisor, or Firecracker. You can put almost anything at the other
end of a virtual kubelet, and as others have pointed out in this thread
virtual kubelet doesn't provide the full Kubelet API (and thus you can't use
the full Kubernetes API against it). At a minimum I think it's important to
specify what is backing the virtual kubelet, and what Kubernetes features you
need, in order to compare it with isolation technologies like the others I
mentioned.

Disclosure: I work at Google on Kubernetes and GKE.

~~~
rbanffy
One trick I've used before is to create resources and leave them unused until
they're allocated, at which point I create another one to top off the pool of
pre-created resources. A stopped cluster takes up disk space and nothing else,
and this is an easy solution to the user-experience issue.
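A minimal sketch of that pattern in Python (provision_cluster is a
hypothetical stand-in for whatever API actually creates and stops a cluster):

    import queue
    import threading
    import uuid

    POOL_SIZE = 3
    pool = queue.Queue()

    def provision_cluster():
        # Hypothetical stand-in: in reality this would call the cloud API to
        # create a cluster and then stop it (idle cost: disk only).
        return {"id": uuid.uuid4().hex, "state": "stopped"}

    def allocate_cluster():
        # Hand out a pre-created cluster immediately, then top the pool back
        # up in the background so the next request doesn't wait either.
        cluster = pool.get()
        threading.Thread(
            target=lambda: pool.put(provision_cluster())).start()
        cluster["state"] = "started"
        return cluster

    # Pre-fill the pool once at startup.
    for _ in range(POOL_SIZE):
        pool.put(provision_cluster())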

Of course, hardening multi-tenant clusters is also needed. Even if some use
cases require resource partitioning, there are others that don't, and keeping
one friend from stepping on another's toes is always a good idea.

------
mcguire
As someone who has been a sysadmin and system programmer for 25 years and a
crusty rat bastard for almost that long, I have to wonder how long it is going
to be until someone realizes that a piece of hardware doing a task is more
efficient than 27 layers of virtual machines and a long pair of tongs.

~~~
nimbius
amen. At first I figured containers were a good way to run code with difficult
dependencies and weird operating environments, but that still doesn't make
sense.

- systemd and uWSGI, for example, play well when run as a single user per WSGI
application under a single nginx/LB.

- php-fpm already handles a ton of overhead from PHP apps.

- Ansible deployments called from gitlab-ci can roll out apps just as well as
deploying from the registry.

Then I figured maybe they were on to something with the autoscaling
thing... but that seems like a meaningless feature. Every good project already
has metrics and forecasting... it would be absurd to think a final product
like imgur.com or Twitter does not know (down to the byte) how much storage
they'll need in 4 months and the potential drivers.

auto-scaling infrastructure just betrays the fact that most developers throw
resources at load problems instead of waiting for ops to figure out the actual
issue.

~~~
latchkey
I helped build an e-commerce marketplace, which is essentially a shopping cart
system that was used by third parties. We had no idea how to predict traffic
loads because we had no idea when one of our customers would run a successful
campaign. Autoscaling on google appengine was a lifesaver for us. Our very
small dev team focused on building features because we had zero devops and we
didn't have to carry a pager.

------
trjordan
This is, like all good HN articles, technically correct and practically
incorrect.

It is correct that containers leak, and people know this. Multi-cluster
strategies are real, and they shouldn't be. It should be OK to have one big
cluster[1]. Until Kubernetes fixes this, there will be some friction to adopt
it, based on real use cases like untrusted code and noisy neighbors.

It is incorrect because users (e.g. non-infrastructure engineers) don't know
or care what the precise definitions of containers and VMs are. The point of
"containers" is that I can define something that acts like an operating system
from the ground up, and it builds quickly and runs quickly in production.

Kubernetes doesn't win by forcing users to think about VMs. Kubernetes wins by
adopting a VM standard that can be built by Dockerfiles. Infra engineers will
love it.

But besides them? Nobody will care, because Docker for Mac will look the same.

[1] Maybe 1 cluster per region? There's a whole fascinating topic that starts
with the question "when building a PaaS, do you expose region placement to
devs?" The answer implies a ton of stuff about what exactly it's reasonable to
expect from a PaaS and how much infrastructure your average dev has to know.

~~~
georgebarnett
> It should be OK to have one big cluster

Assuming you're deploying your Kube cluster in the cloud, the cost of having
multiple clusters is greatly reduced. You don't have to allocate physical
machines or worry about utilisation as much -- you just pick a node size and
autoscale.

What that enables is thinking about other concerns when deciding how many
clusters -- and which locations for them -- are right for your team.

There are operational reasons why having multiple clusters is a good idea. At
the simplest level, making a config change and only risking a portion of the
infrastructure is an example.

~~~
davidopp__
A pattern we're seeing a lot of recently is one cluster per "stage" per
region, where a "stage" is something like dev/test, canary, and prod. (In some
cases only prod is replicated across multiple regions.) I think this may end
up being the "sweet spot" for Kubernetes multi-tenancy architecture. The
number of clusters isn't quite at the "Kubesprawl" level (I love that phrase
and am absolutely going to steal it) -- you can still treat them as pets. But
you get good isolation; you can limit access to the prod clusters to only the
small set of folks (and perhaps the CD system) authorized to push code there,
you can canary Kubernetes upgrades on the canary cluster(s), etc.

As an aside, something that's useful when thinking about Kubernetes multi-
tenancy is to understand the distinction between "control plane" multi-tenancy
and "data plane" multi-tenancy. Data plane multi-tenancy is about making it
safe to share a node (or network) among multiple untrusting users and/or
workloads. Examples of existing features for data plane multi-tenancy are
gVisor/Kata, PodSecurityPolicy, and NetworkPolicy. Control plane multi-tenancy
is about making it safe to share the cluster control plane among multiple
untrusting users and/or workloads. Examples of existing features for control
plane multi-tenancy are RBAC, ResourceQuota (particularly quota on number of
objects; quota on things like cpu and memory are arguably data plane), and the
EventRateLimit admission controller.
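One concrete place the distinction shows up is within a single ResourceQuota:
some keys bound data plane consumption, others bound control plane object
counts. A sketch with the Python client (the namespace name is illustrative):

    from kubernetes import client, config

    config.load_kube_config()
    core = client.CoreV1Api()

    core.create_namespaced_resource_quota("team-a", client.V1ResourceQuota(
        metadata=client.V1ObjectMeta(name="mixed-quota"),
        spec=client.V1ResourceQuotaSpec(hard={
            # Data plane: caps on node resources the tenant's pods consume.
            "requests.cpu": "8",
            "requests.memory": "16Gi",
            # Control plane: caps on API object counts, protecting etcd and
            # the apiserver from one tenant crowding out the others.
            "pods": "50",
            "configmaps": "100",
            "secrets": "100",
        })))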

There's active work in the Kubernetes community in both of these areas; if
you'd like to participate (or lurk), please join the kubernetes-wg-multi-
tenancy mailing list: [http://groups.google.com/forum/#!forum/kubernetes-wg-
multite...](http://groups.google.com/forum/#!forum/kubernetes-wg-multitenancy)

Also, I gave a talk at KubeCon EU earlier this year that gives a rough
overview of Kubernetes multi-tenancy, that might be of interest to some folks:
[https://kccnceu18.sched.com/event/Dqvb?iframe=no](https://kccnceu18.sched.com/event/Dqvb?iframe=no)
(links to the slides and YouTube video are near the bottom of the page)

Disclosure: I work at Google on Kubernetes and GKE.

~~~
georgebarnett
Your experience mirrors what I've seen.

Many teams use clusters for stages because they work on underlying cluster
components and need to ensure they work together and that upgrade processes
work (e.g. Terraform configs come to mind). There's no reason to separate
accounts because the cluster constructs aren't there for security.

Considering it more deeply (I haven't had to think about this for a while), I
think multi-tenancy would cover almost all of the use cases I've seen except
for platform dev, where people use clusters for separation when testing
cluster config-as-code changes.

------
jchw
"virtual machines" may be the wrong word. Containers is probably still the
right word. What we probably want, is actually isolated containers.

And hell, you can nearly get that today. Combining Docker with gVisor is a
potential solution to the soft tenancy problem as far as I can tell, and
Kubernetes supports using it.
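For the Docker half of that, gVisor registers as an alternate runtime
(runsc), so opting a container in is a one-flag change. A sketch with the
Docker Python SDK, assuming runsc is installed and registered in the daemon
config:

    import docker  # pip install docker

    client = docker.from_env()

    # Same image, same API, but syscalls are handled by gVisor's user-space
    # kernel instead of going straight to the host kernel. Assumes runsc is
    # registered as a runtime in /etc/docker/daemon.json.
    out = client.containers.run("alpine", ["uname", "-a"],
                                runtime="runsc", remove=True)
    print(out.decode())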

(And gVisor is by no stretch of the imagination a 'VM' -- it is, at best, a
tiny hypervisor, and maybe less than that.)

------
mattbillenstein
Dammit, most of you reading this do not need any of this shit! Build simple
things!

~~~
TuringTest
You don't need them until you're hired by a client with nation-wide deployment
requirements, and then you need them.

~~~
aprdm
Not necessarily. Needless to say, containers and Kubernetes are pretty new
technologies, and nation-wide deployment requirements existed before Docker /
k8s.

Most of the old big companies aren't using Docker or k8s for their core
services. They're all using legacy fat apps that are load-balanced on bare
metal or VMs.

~~~
TuringTest
I doubt a home-made solution to load balancing will be seen as _simpler_,
which is what's being argued by the comment I replied to.

The point isn't that you need Kubernetes specifically, it's that requiring a
system that scales well is not as uncommon as the OP puts it.

~~~
mattbillenstein
k8s is not the only avenue to building scalable systems -- scaling is mostly
about architecture, not how you run your software.

~~~
TuringTest
> k8s is not the only avenue to building scalable systems

I had already conceded that. What's your point?

------
013a
I really worry about cost in this future of leaning on something like
ACI/Fargate to actually run the containers.

An m5.large instance (2vcpu/8gb) costs $70/mo on-demand ($44/mo with a 1 year
reservation). A similar Fargate runtime costs $146/mo.

A b2ms Azure instance (2vcpu/8gb) costs $60/mo on-demand ($39/mo 1 year
reservation). Azure Container Instances at a similar provisioning level costs
$176/mo by my calculations.

That's not a small difference. That's, like, 3x.
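Running the quoted numbers, the multiple depends on which baseline you pick:

    # Ratios from the prices quoted above (managed containers vs VM, monthly).
    print(146 / 70, 146 / 44)   # AWS:   ~2.1x on-demand, ~3.3x vs 1yr reserved
    print(176 / 60, 176 / 39)   # Azure: ~2.9x on-demand, ~4.5x vs 1yr reserved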

Point being, I love virtual kubelet from the perspective of a scale-up just
trying to get a product out the door. But for established companies, I still
think the core idea of a container on a VM you control is going to rule.
Fortunately Kubernetes allows amazingly easy flexibility to switch, and that's
a reason why it might be the most important technology created in recent
history.

~~~
hedora
Also, the instance prices are already some crazy multiple of buying a machine,
racking it, and running dozens of VMs on it for 5 years.

I don’t know the prices for hyperconverged or converged infrastructure off the
top of my head, but doubt the amazon rates are even close to competitive
unless you’re at a tiny scale (or need a ton of tiny presences in different
regions).

------
orliesaurus
This feels like an infinite Russian doll problem, putting k8s inside a bunch
of VMs I mean... but I am glad that while everyone is still trying to learn
how to properly do k8s, these folks are thinking of the next thing.

~~~
dehrmann
And running Java services in the JVM. It's not real infrastructure unless you
have at least four layers of VMs.

------
old-gregg
Kubernetes has a real chance to succeed where OpenStack failed. Good people at
AWS have good reasons to be worried and will push us to a proprietary form of
"serverless".

~~~
lykr0n
OpenStack's failure is that it's an extremely complex system that looks to be
nearly impossible to deploy by yourself. You have to use a distro that does
everything for you -- their way.

Kubernetes is simple enough to set up yourself. It has well-documented
tooling and a solid do-it-yourself guide. OpenStack has none of that (that I
can find). You select Red Hat OpenStack (RDO) or Canonical OpenStack (MAAS),
and you have to use their all-in-one system to have a deployment -- and that
requires a narrow set of variables which every environment might not have.
Which is insane -- and will hinder adoption.

EDIT: Not 100% correct, see below comments.

~~~
serverascode
You don't have to use a distro. There are well-tested Puppet modules, which
you could make into your own "distro", as well as openstack-ansible and kolla,
and openstack-helm, which uses Helm to deploy OpenStack. There is also
StarlingX. There is no kubeadm-like system, however, though I'm not sure how
many people will really use kubeadm in prod.

Is it complex? Yes. Is it more complex than k8s? Probably. However, there are
multiple open source distros outside of RDO (which is not actually a distro
but packaging -- see TripleO for a distro-like solution based on RDO). MAAS
is not an OpenStack distro; it's a way to manage bare-metal nodes that
OpenStack is then deployed onto using other Canonical tools. That said,
selecting a way to deploy and manage OpenStack is complex, but the same is
true of k8s.

~~~
lykr0n
True. My only experience is with VMware OpenStack, and my quick googling
didn't turn up much info. I think Kubernetes will suffer the same fate
OpenStack is sliding into: growing complexity with promises of the world.
Time will tell.

You seem to know more than I do, so I've got to ask: why does openstack-helm
exist? Why would anyone want to deploy OpenStack on top of Kubernetes? Is it
so you can have the OpenStack API run in Kubernetes while it manages physical
boxes?

~~~
halbritt
One of the issues with OpenStack is managing the services required to run it.
I imagine that Helm could be used to make it easier to run these services in
Kubernetes.

~~~
lykr0n
True, but that adds another layer of complexity to an already complex system.
If I'm using the Ansible module, that's complex enough. Throwing in management
of Kubernetes and the additional cruft containers add -- it seems like a lot
of hassle for little gain.

~~~
halbritt
I see this argument relatively frequently. "Why use Kubernetes when I can
accomplish X with [puppet|chef|ansible]".

The answer is that k8s offers something fundamentally different and until the
person posing the question gets that distinction, the argument is relatively
pointless.

I'm not just hiding behind the argument that "you just don't get it... man".
Let me point out that you're right. You can manage the openstack control plane
perfectly well with your configuration management tool of choice and if you
have that process really dialed, then you'll have a difficult time improving
upon it with something like k8s.

------
stevenacreman
It's still containers. On a cloud provider, the Kubernetes workers are VMs
which orchestrate containers. With Kata Containers you're just spawning
containers inside micro-VMs.

~~~
jacques_chester
> _It's still containers._

The security profiles of containers and VMs, including kernel-based VMs, are
different. VMs still have a significant edge, because the attack surface is
smaller and doesn't have many competing missions.

~~~
cyphar
The attack surface of a container can be massively reduced with seccomp
profiles -- there was a paper a few years ago which found that the effective
attack surface of a hypervisor was about the same as the attack surface of a
locked-down seccomp profile of a container (and LXC/Docker/etc already have a
default whitelist profile which has in practice mitigated something like 90%
of kernel 0days).
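As a sketch of how cheap that is to apply, here's the mechanism via the
Docker Python SDK -- note this toy profile is deny-list style for brevity;
Docker's actual default profile is a whitelist, as mentioned above:

    import json
    import docker  # pip install docker

    # Toy profile: allow everything except a few syscalls this workload
    # should never need. (Docker's real default is the safer inverse: deny
    # by default, whitelist ~300 syscalls.)
    profile = {
        "defaultAction": "SCMP_ACT_ALLOW",
        "syscalls": [{
            "names": ["mount", "umount2", "reboot", "init_module",
                      "kexec_load", "ptrace"],
            "action": "SCMP_ACT_ERRNO",
        }],
    }

    client = docker.from_env()
    # The engine API takes the profile JSON inline; the docker CLI's
    # --security-opt seccomp=file.json just reads the file for you.
    out = client.containers.run(
        "alpine", ["echo", "seccomp works"],
        security_opt=["seccomp=" + json.dumps(profile)],
        remove=True)
    print(out.decode())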

And let's not forget the recent CPU exploits which found that VMs aren't very
separated after all.

The fact that Kubernetes disables this (and other) security features by
default should be seen as a flaw in Kubernetes. (Just as some of the flaws of
Docker should be seen as Docker flaws not containers-in-general flaws.)

~~~
jacques_chester
> _The attack surface of a container can be massively reduced with seccomp
> profiles_

Yes, though as capabilities are added to the kernel, the profiles have to be
updated.

That said, VM or no VM, this should be done no matter what.

> _And let's not forget the recent CPU exploits which found that VMs aren't
> very separated after all._

This is a nil-all draw in terms of the respective security postures, though.

------
state_less
While the use of namespaces seemed a bit discounted in the article, I think
part of the problem is that it's an underutilized and poorly conveyed feature,
but maybe that's just me? I'd like to see a better out-of-the-box vanilla K8S
user management interface in the dashboard. Maybe the default UX on cluster
creation should be a user/service account that isn't cluster admin and is
limited to one or more namespaces. There should be a better dashboard for
configuring and creating users, front and center when you first create a
cluster. It should take a bit of work to get something you can use to
authenticate with a cluster-admin-like role. This would help direct people
towards creating more users and isolated namespaces.

~~~
halbritt
My org is using namespaces for tenant isolation. Works quite well.

~~~
geerlingguy
Namespaces++. Unless you’re just running one app in a Kubernetes cluster
(where default == that app), namespaces solve a plethora of problems.

~~~
halbritt
Resource limits, network policy boundaries, etc.

~~~
smarterclayton
That’s what we designed them for :)

I do think we’ve not explored enough of the per namespace policy stuff though
- i’d like both podpreset and a reasonably simple scheduling policy
(toleration + node selector control to replace the annotation based system) to
make it in, as well as a simpler namespace initialization path so you can more
easily lock down the contents of a namespace without having to proxy the
create namespace API call.

~~~
halbritt
By PodPreset are you referring to something like PodDisruptionBudget?

Because it'd be neat to define that on a per-namespace basis.

~~~
smarterclayton
PodPreset went alpha with service catalog but hasn’t made it out to beta yet.
It makes certain forms of injection / rules easier (you must use a standard
log dir, you should use the provided HTTP_PROXY vars, etc).
[https://kubernetes.io/docs/concepts/workloads/pods/podpreset...](https://kubernetes.io/docs/concepts/workloads/pods/podpreset/)

Being able to limit a user to only being able to edit one pod preset or
scheduling policy (via rbac name access) would provide some useful flexibility
for splitting control between admin and namespace user.

~~~
halbritt
Okay, I get your point.

I'm still living in the world where operating a platform for the benefit of a
set of developers entails building and operating a set of services that
abstracts the details of the infrastructure sufficiently that these things
don't matter.

------
markbnj
I get the multi-tenancy argument, for certain use cases. I'm not sure I
understand the point about greater resource utilization. Presumably they mean
workloads can be scheduled more densely given a set of hardware resources, vs.
containers... but I'd think that containers would score better on that metric
than VMs. Can someone expound?

The best part of the piece for me was "kubesprawl," my new favorite word for
the week. We've seen it ourselves to some extent, but we are at least aware of
it and try to exert some pressure in the other direction. Beyond that I am not
particularly bothered by the idea of running lots of clusters for different
purposes.

------
emmelaich
I'm picking the eyes out of the article here, but I had to twitch at this ...

> _Linux containers were not built to be secure isolated sandboxes (like
> Solaris Zones or FreeBSD Jails). Instead they’re built upon a shared kernel
> model ..._

Solaris Zones and BSD Jails both use a shared kernel.

And I'd bet you they were far from perfect in security isolation.

Now it may be true that security wasn't Linux containers' prime reason for
being, but we have an existence proof that they can be made secure enough --
anyone can get a trial OpenShift container for the price of a login.

------
tybit
_As 2018 comes to a close it's time to drag out the hubris and make a bold
prediction. The future of Kubernetes is Virtual Machines, not Containers._

I’d say that’s less of a prediction than a matter of fact since it’s already
happened in 2018 for AWS and GCP.

This post is kind of irrelevant for consumers imo, in that the future of
Kubernetes is still the container interface, regardless of whether your vendor
decides to run it in a container or a VM.

~~~
halbritt
How has it already happened? It's true that EKS and GKE nodes are virtual
machines, but that's entirely beside the point of this article which seems to
make the point that the workloads running inside of Kubernetes will also be
virtual machines.

Interesting notion, but I don't see it. The reason for kubesprawl as it is
today is a result of the fact that today Kubernetes is hard to tune for
disparate workloads which leads most folks to just punt and stand up multiple
clusters.

That said, people are starting to figure it out and more tools like the
vertical pod autoscaler are coming. Eventually the more efficient choice will
be to run disparate workloads across the same set of hardware.

------
elsonrodriguez
The future of Kubernetes (for some interesting use cases), is virtual
machines. For the rest of us the virtual kubelet project represents a bridge
between Kubernetes and Serverless.

I'd much rather pay my cloud provider to run my k8s workloads (billed by pod
requests/limits) than pay for a control plane and three nodes just to run my
workloads.

------
ec109685
The nested Kubernetes idea is interesting (linked to from the post). However,
Amazon and Google aren't using nested control planes for their infrastructure.
Why is a single control plane good for them, but not for a Kubernetes
deployment?

------
Annatar
And people still wrangle with this... how can it be easier to struggle like
this than to learn how to use SmartOS and Triton? Kubernetes is a solution to
a severe OS virtualization deficiency in Linux, most notably orchestration.
You know, the problem which is non-existent in SmartOS with Triton and
large-scale configuration management with operating system packages. Every so
often this hits "Hacker News" and people would rather struggle than master
something new. But one cannot polish a turd.

~~~
jskaggz
At the risk of tarnishing my reputation amongst the hacker news
docker/kubernetes hypecycle elite, have an upvote. The I.T. industry in
general is funny. New technologies come and go like pop stars. Docker ==
ke$ha, Kubernetes == ice cube, triton is fred astaire. They all have their off
moments. I personally like my platform stable, performant, secure, and boring.
If I spent all of my time keeping up on the latest trends on how to spin up
machines, I'd have little time to work on actual product. Something good will
come out of this influx of cash, marketing, and cloud sales, eventually. Fits
and starts. /me goes back to coding and deploying on triton, while patiently
watching the docker/kubernetes show.

~~~
Annatar
That was a big risk you took. You have guts for standing up to the hype
machinery.

~~~
jskaggz
I'm still fighting the sneaking suspicion that putting kubernetes/etc out to
the general public and having such a fast release cycle was just a genius play
by the big cloud vendors to acquire customers (who will realize running this
stuff on premises isn't as cheap as they thought it was after doing the math
(all the math, security, training, operational expenses, personnel
training/expenses, moving from docker->moby->rkt->gvisor->firecracker->now vm
expenses, blah blah etc)). This current tech wave is kind of disheartening.
Everybody is focused on hosting...can we not get icecube to play a show for
folks that are pushing the envelope with technology as applied to the medical
field, or saving the environment, yo?

Triton on-prem is a snap. Boot the headnode from USB, boot the cluster nodes
from USB+PXE, and let's get to kicking ass, fighting the good fight focusing
on real groundbreaking applications.

*edit: I'm still a little butt-hurt after kubernetes being rammed down my throat in a large enterprise environment. Apologies to those that are fighting the good fight with kubernetes, I know you're out there, and big high 5 :)

------
xaduha
I'd be fine with that if they were using LKVM (from kvmtool) or that new
Firecracker. But as far as I can tell most projects in this area use QEMU
still, which is an outstanding piece of software, but doesn't scream micro-vm
to me.

------
k__
I had the impression DevOps ppl would cling to containers as the final form of
VMs?

But I'm mostly a high level front-end guy doing back-ends with serverless tech
only.

------
peterwwillis
I'm convinced that Kubernetes is very good for job security, and not much
else. Unless you're a managed services host, you should probably not be
running it.

Please, for the love of all that is holy, use a cloud services provider if you
need K8s-style service features. If you don't, then just cobble together your
infrastructure in the simplest way possible that uses DevOps principles,
methods and practices.

~~~
rantanplan
Could you elaborate a bit?

~~~
peterwwillis
K8s is a very complex system. And the more complex the requirements, the more
complex one needs to make K8s through additional software that isn't baked in.
Complex systems are costly to run, but more importantly, they're costly to
maintain due to the typical level of service required, the number of employees
needed to maintain it, and the amount of specialized knowledge required. The
system is also under constant maintenance due to its short release cycle.
Basically, you need to build an entire cloud services team just to keep it
running smoothly (not for "test labs", but for real production services). And
on top of all this, if you're running it on your own hardware, you don't even
get the benefit of reduced infrastructure costs.

Because this is not only hard to get right, but very costly, it is much
cheaper and easier to pay someone to do all this for you. It is almost
guaranteed that doing it yourself will not give you any significant advantage,
cost savings, or increased development velocity.

On top of this, most people don't even need k8s. K8s is a containerized
microservice mesh network. If you don't need containers and you aren't running
microservices, you may be trying to fit a square peg in a round hole. Even if
you did need k8s, the benefit may be small if you don't have complex
requirements.

Most people can get high-quality, reliable end results with simple, general-
purpose solutions using DevOps principles and tools. If you're not Google or
Facebook, you probably just need immutable infrastructure-as-code, monitoring,
logging, continuous integration/deployment, and maybe autoscaling. You don't
need an orchestration framework to deliver all that. And by going with less
complex implementations, it will be easier and more cost-effective to
maintain.

At the end of the day, if you need k8s, use it. But I really worry about most
people who hop on the k8s bandwagon because they see a lot of HN posts about
it, or because Google touts it.

------
CyanLite4
TLDR: Kubernetes is an anti-pattern.

