The "virtual kubelet" essentially throws that all that away and keeps Kubernetes "in API only". For example, with virtual kubelets, scheduling is meaningless and networking and storage are restricted to whatever the virtual kubelet target supports (if useable at all).
Personally, I think the value proposition is tenuous -- you can create VMs today; doing so via the Kubernetes API isn't suddenly revolutionary. Just like throwing something into a "hardware virtualized" instance doesn't suddenly make the whole system secure.
Containers and Kubernetes are compelling for a variety of reasons; improving them to handle multi-tenancy is a broad challenge, but I don't think the answer is to reduce the standard to what we have today (a bunch of disparate VMs).
Multi-tenancy is a pretty compelling value proposition when you reach any kind of scale. If you're in a regulated sector, it's non-negotiable.
Relying on the cluster as the security boundary is very effective ... and very wasteful.
> Containers and Kubernetes are compelling for a variety of reasons; improving them to handle multi-tenancy is a broad challenge, but I don't think the answer is to reduce the standard to what we have today (a bunch of disparate VMs).
I think the argument is that rather than the painful (and it will be very painful) and probably incomplete quest to retrofit multi-tenancy into a single-tenancy design, we can introduce multi-tenancy where it actually matters: at the worker node.
At first glance it's confusing to go from "one master, many nodes" to "one node pool, many masters". But it actually works better on every front. Workload efficiency goes up. Security surface area between masters becomes close to nil.
Very cheap VMs are the means to that end.
Disclosure: I work for Pivotal and this argument fits our basic doctrine of how Kubernetes ought to be used.
Moreover, even today there are real public PaaSes that expose the Kubernetes API served by a multi-tenant Kubernetes cluster to mutually untrusting end-users, e.g. OpenShift Online and one of the Huawei cloud products (I forget which one). Obviously Kubernetes multi-tenancy isn't going to be secure enough today for everyone, especially folks who want an additional layer of isolation on top of cgroups/namespaces/seccomp/AppArmor/etc., but there are a lot of advantages to minimizing the number of clusters. (See my other comment in this thread about the pattern we frequently see of separate clusters for dev/test vs. staging vs. prod, possibly per region, but sharing each of those among multiple users and/or applications.)
Disclosure: I work at Google on Kubernetes and GKE.
I've definitely had conversations with some of the project originators where it was clear the security boundary was intended to be cluster-level in early versions.
Some of the security weaknesses in earlier versions (e.g. no AuthN on the kubelet, cluster-admin grade service tokens, etc.) make that clear.
Now it's obvious that secure hard multi-tenancy is a goal going forward (and I'll be very interested to see what the 3rd party audit throws up in that regard), but it is a retrofit.
My complaint is that these require assembly and are in many cases opt-in (making RBAC opt-out was a massive leap forward).
Namespaces are the lynchpin, but are globally visible. In fact an enormous amount of stuff tends to wind up visible in some fashion. And I have to go through all the different mechanisms and set them up correctly, align them correctly, to create a firmer multi-tenancy than the baseline.
Put another way, I am having to construct multi-tenancy inside multiple resources at the root level, rather than having tenancy as the root level under which those multiple resources fall.
> there are a lot of advantages to minimizing the number of clusters.
The biggest is going to be utilisation. Combining workloads pools variance, meaning you can safely run at a higher baseline load. But I think that can be achieved more effectively with virtual kubelet.
Utilization is arguably the biggest benefit (fewer nodes if you can share nodes among users/workloads, fewer masters if you can share the control plane among users/workloads), but I wouldn't under-estimate the manageability benefit of having fewer clusters to run. Also, for applications (or application instances, e.g. in the case of a SaaS) that are short-lived, the amount of time it takes to spin up a new cluster to serve that application (instance) can cause a poor user experience; spinning up a new namespace and pod(s) in an existing multi-tenant cluster is much faster.
> But I think that can be achieved more effectively with virtual kubelet.
I think it's hard to compare virtual kubelet to something like Kata Containers, gVisor, or Firecracker. You can put almost anything at the other end of a virtual kubelet, and as others have pointed out in this thread virtual kubelet doesn't provide the full Kubelet API (and thus you can't use the full Kubernetes API against it). At a minimum I think it's important to specify what is backing the virtual kubelet, and what Kubernetes features you need, in order to compare it with isolation technologies like the others I mentioned.
Of course, hardening multi-tenant clusters is also needed. Even if the use case requires resource partitioning, there are use cases that don't and keeping one friend from stepping on another's toes is always a good idea.
For a single cluster, "very cheap" VMs solve some of the problems, but leave others unsolved (e.g. they prevent some hardware and kernel exploits, but lots of security issues can still hit you -- like the last two big K8s CVEs). They also sacrifice a lot of the things that make containers compelling on the floor (high efficiency and density), so I don't think they should be spun as a panecea.
You seem to be arguing that one shouldn't bother with multi-tenancy on a single cluster, which is a fine approach, but I do think that the technologies and tools to support the single cluster model are evolving. Calling it a "multi-tenancy retrofit" seems a bit FUD-y to me. Just because there are challenges doesn't mean it's not worth doing.
I was tying them together because I see the former as an effective strategy to achieve the latter.
> Calling it a "multi-tenancy retrofit" seems a bit FUD-y to me. Just because there are challenges doesn't mean it's not worth doing.
What should I call it? It's being added retrospectively to a single-tenant design. The changes have to be correctly threaded through everything, through codebases managed by dozens of working groups, without breaking thousands of existing extensions, tools and applications.
What I expect will happen instead is that it will be better than it is now -- which is a win -- but that no complete, mandatorily-secure, top-to-bottom security boundaries will be created inside single clusters. We will still be left with lots of leaks.
Our industry is replete with folks trying to wedge the business of hypervisors and supervisors into applications and services. It's possible but always leaks and breaks and diverts enormous development bandwidth away from the core thing that is meant to be achieved. Kernels and hypervisors have privileged hardware access and decades of hardening that can't be truly replicated at the application or service level and which when imitated need to be designed in from the beginning.
I don't see that as FUD. I think it just is what it is. But I appreciate that my thinking is in line with the doctrine Pivotal advances to its customers, which differs from the doctrine Red Hat and others advance (One Cluster To Rule Them All).
If there’s Red Hat documentation advising silly absolutes please let me know and I’ll make sure it gets fixed.
For myself I see the argument for fewer clusters as about utilisation, the argument for more clusters about isolation. It's the oldest tug-of-war in computing. I think that shared node pools for multiple masters is going to be the combination that for most workloads will increase utilisation without greatly weakening isolation. I don't think multi-tenancy in the master will be as easily achieved or as effective.
To be honest, I should have realised this would be so.
> At least in OpenShift, multi-tenancy is a solved problem when cluster right-sizing has taken place. RBAC, node labels and selectors, EgressIP, quotas, requests and limits, multi-tenant or networkpolicy plug-ins go a long way.
Well, as you can guess, I am not convinced that this is really solved -- it looks like multiple discretionary access control mechanisms that need to be aligned properly, instead of a single mandatory access control mechanism to which other things align.
I've seen many clusters being sold, but with no tooling to automatically build, monitor, secure and maintain these clusters, so you've got a DevOps team playing cluster whack-a-mole.
Of course the consultancies love that because it's a bespoke layer for them to build and support, but the reality is setting up a small team to run a couple of clusters eases the job of discoverability and secops, and for many orgs is "good enough".
Still, there is room for improvement, but I doubt it's many masters without another product on top.
Pivotal's doctrine of how to use Kubernetes is explicitly multi-cluster oriented, but that's because we come to the table with tooling that excels at this kind of problem: BOSH.
I definitely understand and agree that multitenancy is super important, but it would be a shame to agree that sacrificing bare metal performance is okay.
Even if we accept that server cost is key, Google realized a long time ago that they could squeeze more out of their fleet if they could overcommit it, because many workloads have variable utilization over time. Hence “containers” were conceived.
It really depends on scale. If I spend 15 minutes improving the utilization of my home cluster by 10%, the only payoff I get is experience. If I do the same to Google's indexing infrastructure, I've probably delayed the collapse of civilization from global warming by a full year.
This attitude leaves a lot of low-hanging fruit that, as the operation grows, can shave a couple million dollars off the operating costs.
> Docker and Kubernetes have been a game changer in time-to-production.
> ...it's cheaper to buy the next tier of EC2 instance than to waste developer and operations staff time trying to squeeze more out of existing servers.
Only if your management is completely clueless. You can't solve people problems by buying machines or installing containers.
> It's about productivity per dollar.
No. Or, rather, only if by "productivity" you mean "clueless management KPI, meaning wasted company dollars per hour".
The root problem is higher-up management being unable to set any useful goals except "let's get investment capital and waste it ASAP to get more investment capital next year, growth, #yolo lol".
I'm on a team that provides a kubernetes-based internal SaaS thing and personnel costs seem to be easily recouped by the money we save from having autoscaling and a common node pool instead of 2 machines per service.
This has been going on for a number of years now and the most important part has always been to be backwards compatible with whatever developers have been doing for those years. That is productivity.
That's not to mention the leverage Kubernetes gives you to optimize and discover costs. Being able to see where optimization yields the best results is much better than optimizing everything. Sometimes it just isn't worth it.
Governance and security forbid the same person from being both dev and ops.
Or, God forbid, from controlling the dev, pre-stage, and production environments.
Most businesses do not have a constant workload 24/7, which means the ability to scale up and down will save more money than you'd save by eliminating the overhead of not running directly on metal.
There is also the cost of having to care about hardware to begin with.
Also, in the big picture, having 100 companies doing their own bare metal deployments is not terribly efficient, compared to 1 company doing that and 99 paying the first company for this service.
The list could go on, but I think you get my point.
* The "most businesses / always on" reasoning appeals to executive decision makers and rolls downhill to the technical people who are best educated to make the decision.
Many people who believe that you can cut cost by sizing workloads end up racing their own models when it doesn't scale financially or computationally over time.
* What is so difficult about hardware? The cost of forgetting how to deal with it will be much higher in the long run.
* Yes, monopolies are healthy.
Seems all 3 major cloud providers, AWS, Google and Azure, are providing facilities to run Docker containers. Seems Digital Ocean is getting there too. Hardly a monopoly, innit?
> What is so difficult about hardware?
All of it. This is knowledge my company doesn't have, and there is no point in investing in acquiring this knowledge at this time, since we have an easier solution. Not to mention that if we decide going bare metal is the way to go, we can do that later.
> The most businesses [...]
I think my english is failing me, I don't really understand this paragraph.
> This sounds like classic devops/cloud snake oil. I wonder what industry pays your bills?
Not the snarky remarks industry...
If you have customers beating down your door to buy your product after you add n+1 feature, you can find investors to eat the extra operational cost without a problem as long as you can build n+1 fast enough that those customers don't go elsewhere.
The other element is elastic workload, of course. Having the application dynamically scale across a pool of machines in as tractable a manner as possible at the developer level can be an enormous cost saving all on its own. Instead of allocating machines based on individual high water marks, you can allocate a cluster based on the sum of average usage + as many standard deviations of resource usage to get as many 9s as you need.
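To make that concrete, here's a rough sketch of the arithmetic in Python, with made-up numbers and assuming the workloads are roughly independent (so their variances add):

```python
import math

# Hypothetical per-service demand (mean, stddev) in vCPUs; numbers are made up.
services = [(4, 2), (10, 5), (3, 1.5), (8, 4)]

# Per-service high-water-mark provisioning: mean + 3 sigma for each service.
per_service = sum(mu + 3 * sigma for mu, sigma in services)

# Pooled provisioning: for independent workloads the variances add, so the
# combined stddev is the root-sum-square of the individual stddevs.
pooled_mu = sum(mu for mu, _ in services)
pooled_sigma = math.sqrt(sum(sigma ** 2 for _, sigma in services))
pooled = pooled_mu + 3 * pooled_sigma

print(f"per-service high-water marks: {per_service:.1f} vCPUs")
print(f"shared pool at the same confidence: {pooled:.1f} vCPUs")
```

The shared pool needs noticeably less capacity for the same confidence level; the gap grows with the number of pooled workloads and shrinks if their loads are strongly correlated.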
Then I got told that they have to control it, otherwise backup and restore is not supported, and on the other hand, they were only able to install stuff like svn by spending effectively 3 project days! 3! I never asked what it would cost to install some Etherpad.
Two years ago in a startup, I logged into AWS, clicked around for an hour and had 2 isolated networks, 4 instances, a load balancer, DNS server, snapshots and backups configured. A few mouse clicks later I could have had an autoscaler as well.
Ops / own Hardware is a means to an end.
The efficiency of the actual code executing on the hardware is secondary to the efficiency of being able to automate allocation of hardware at the application level.
We mix bare metal spun up this way with Open Stack VMs, depending on the requirements. And slice up into containers using LXD when that makes sense.
But everything else (disk, CPU, network) typically amounts to less than 1%.
- systemd and uwsgi for example play well when run as a single user per wsgi application under a single nginx/lb.
- php-fpm already handles a ton of overhead from php apps.
- ansible deployments called from gitlab-ci can roll out apps just as well as deploying from the registry.
Then I figured maybe they were on to something with the autoscaling thing... but that seems like a meaningless feature. Every good project already has metrics and forecasting... it would be absurd to think a final product like imgur.com or Twitter does not know (down to the byte) how much storage they'll need in 4 months and the potential drivers.
auto-scaling infrastructure just betrays the fact that most developers throw resources at load problems instead of waiting for ops to figure out the actual issue.
You need both developer and admin skills, as well as a pretty good understanding of the whole stack - the application, the framework/libraries, the application runtime/support libraries, the database, the kernel and the network.
It is correct that containers leak, and people know this. Multi-cluster strategies are real, and they shouldn't have to be. It should be OK to have one big cluster. Until Kubernetes fixes this, there will be some friction to adopting it, based on real use cases like untrusted code and noisy neighbors.
It is incorrect because users (e.g. non-infrastructure engineers) don't know or care what the precise definitions of containers and VMs are. The point of "containers" is that I can define something that acts like an operating system from the ground up, and it builds quickly and runs quickly in production.
Kubernetes doesn't win by forcing users to think about VMs. Kubernetes wins by adopting a VM standard that can be built by Dockerfiles. Infra engineers will love it.
But besides them? Nobody will care, because Docker for Mac will look the same.
 Maybe 1 cluster per region? There's a whole fascinating topic that starts with the question "when building a PaaS, do you expose region placement to devs?" The answer implies a ton of stuff about what exactly it's reasonable to expect from a PaaS and how much infrastructure your average dev has to know.
This is what the CRI does/is, basically. Various projects sprang up to make it possible at the runtime/kubelet level (kata-containers, frakti, containerd untrusted workloads), but support for RuntimeClass is what's going to tie it all together, and it's already in Alpha.
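For the curious, here's roughly what selecting a sandboxed runtime per workload looks like once RuntimeClass is wired up -- a sketch using the Python kubernetes client, assuming a cluster that already has a RuntimeClass handler (hypothetically named "gvisor") registered and a client/cluster version recent enough to expose the field:

```python
from kubernetes import client, config

# Sketch: run an untrusted workload under a sandboxed runtime by selecting a
# RuntimeClass. Assumes a RuntimeClass named "gvisor" (hypothetical name,
# backed by e.g. runsc) already exists in the cluster.
config.load_kube_config()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="untrusted-workload"),
    spec=client.V1PodSpec(
        runtime_class_name="gvisor",  # pick the sandboxed handler
        containers=[client.V1Container(name="app", image="nginx:1.15")],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```

Everything else about the pod stays the same, which is the point: the isolation technology becomes a per-pod knob rather than a per-cluster decision.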
Personally I think we should be moving away from Dockerfiles -- Docker's superior ergonomics pushed the industry forward at the outset, but they lagged in features and compliance with any standards for a long time and were basically usurped. The Dockerfile is a decent format but it lacks a lot of good qualities, like being trivially machine-editable, and the docker client itself has some unfavorable tradeoffs when compared with tools like rkt and podman. I think it's a mistake to standardize on Dockerfiles, but it's definitely a good idea to standardize on the CRI (which Docker now adheres to via a containerd shim by default).
The best thing by far about kubernetes is the CRI, CNI (Container Networking Interface), and CSI (Container Storage Interface) standards that are coming out of it. I don't think anyone realizes it yet, but they are inadvertently building secure/sandboxed computing for everyone. Containers are just sandboxed processes, and before this, most people were running basically completely unsandboxed processes (both in the cloud and on their personal computers). Once all this stuff lands, Linux is going to have some amazing features for running applications more safely -- production-grade safety for any application you run on your own machine, available with standardized tooling.
Assuming you're deploying your Kube cluster in the cloud, the costs of having multiple clusters are really reduced. You don't have to allocate physical machines or worry about utilisation as much -- you just pick a node size and autoscale.
What that enables is thinking about other concerns when deciding how many clusters to run, and where, for your team.
There are operational reasons why having multiple clusters is a good idea. At the simplest level, making a config change and only risking a portion of the infrastructure is an example.
As an aside, something that's useful when thinking about Kubernetes multi-tenancy is to understand the distinction between "control plane" multi-tenancy and "data plane" multi-tenancy. Data plane multi-tenancy is about making it safe to share a node (or network) among multiple untrusting users and/or workloads. Examples of existing features for data plane multi-tenancy are gVisor/Kata, PodSecurityPolicy, and NetworkPolicy. Control plane multi-tenancy is about making it safe to share the cluster control plane among multiple untrusting users and/or workloads. Examples of existing features for control plane multi-tenancy are RBAC, ResourceQuota (particularly quota on number of objects; quota on things like cpu and memory are arguably data plane), and the EventRateLimit admission controller.
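As a small illustration of the control-plane side, here's a sketch (Python client, made-up namespace name and limits) of a ResourceQuota that caps both object counts and compute for a tenant's namespace:

```python
from kubernetes import client, config

# Sketch of a control-plane guardrail: cap how many objects a tenant's
# namespace can create, so one tenant can't exhaust the shared API server
# and etcd. Namespace name and numbers are hypothetical.
config.load_kube_config()

quota = client.V1ResourceQuota(
    metadata=client.V1ObjectMeta(name="tenant-a-quota"),
    spec=client.V1ResourceQuotaSpec(
        hard={
            "pods": "50",             # object-count quotas: control plane
            "services": "10",
            "configmaps": "100",
            "requests.cpu": "20",     # cpu/memory quotas: arguably data plane
            "requests.memory": "64Gi",
        }
    ),
)

client.CoreV1Api().create_namespaced_resource_quota(
    namespace="tenant-a", body=quota
)
```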
There's active work in the Kubernetes community in both of these areas; if you'd like to participate (or lurk), please join the kubernetes-wg-multi-tenancy mailing list: http://groups.google.com/forum/#!forum/kubernetes-wg-multite...
Also, I gave a talk at KubeCon EU earlier this year that gives a rough overview of Kubernetes multi-tenancy, that might be of interest to some folks: https://kccnceu18.sched.com/event/Dqvb?iframe=no
(links to the slides and YouTube video are near the bottom of the page)
Many teams use clusters for stages because they work on underlying cluster components and need to ensure they work together and that upgrade processes work (e.g. terraform configs come to mind). There's no reason to separate accounts because the cluster constructs aren't there for security.
Considering it more deeply (I haven't had to think about this for a while), I think multi-tenancy would cover almost all of the use cases I've seen, except for the platform dev case where people use clusters for separation when testing cluster config-as-code changes.
The idea being that you have a process around getting your code to run on the live-data cluster, and thus we add more stringent requirements for accessing each API.
This is for soft tenancy, and you want to write admission controllers to reject apps that haven't gone through the defined process.
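A minimal sketch of what such a validating admission webhook could look like (Python/Flask, hypothetical annotation name; a real deployment also needs TLS and a ValidatingWebhookConfiguration pointing at the service):

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

# Hypothetical annotation a workload must carry to prove it went through
# the defined release process.
REQUIRED_ANNOTATION = "example.com/release-approved"

@app.route("/validate", methods=["POST"])
def validate():
    review = request.get_json()
    req = review["request"]
    annotations = req["object"]["metadata"].get("annotations") or {}
    allowed = REQUIRED_ANNOTATION in annotations

    response = {"uid": req["uid"], "allowed": allowed}
    if not allowed:
        response["status"] = {"message": f"missing {REQUIRED_ANNOTATION} annotation"}

    # Echo back an AdmissionReview carrying our verdict.
    return jsonify({
        "apiVersion": review["apiVersion"],
        "kind": "AdmissionReview",
        "response": response,
    })

if __name__ == "__main__":
    # Real webhooks must be served over TLS; plain HTTP here for brevity.
    app.run(port=8443)
```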
Edit: looking more in the thread, you clearly know this much better than I do. I'd like to get the chance to talk and improve my understanding, if you ever find some spare time.
As a gut check, most of these reasons apply to AWS AZs as well. If your Kubernetes strategy calls for more than one per region per AWS account, it means that you, organizationally, don't trust containers as much as you trust VMs on AWS. And right now, you're probably right to do so.
Treating a single cloud account and the clusters it hosts as a single security cell is more than reasonable, especially if you need to handle sensitive data, be able to audit the state of the cluster (hope your cluster can’t overwrite the contents of the S3 bucket that you’re using for cloudtrail logs, etc), or reason about the risk of compromise.
We run lots of multitenant kube clusters, and I would still recommend anyone who expects to grow to create hard walls between clusters and cloud infra as soon as reasonably possible.
Multiple Kubernetes clusters for workload separation and AWS accounts for workload separation aren't quite the same, and they bring different levels of complexity depending on what your internal processes look like.
I am a firm believer in the Borg-style cluster OS; what Borg lacks is a modern API, but the fundamentals are already there.
Disclaimer: I am with Google's Borg team, with a focus on its client side.
And hell, you can nearly get that today. Combining Docker with gVisor is a potential solution to the soft tenancy problem as far as I can tell, and Kubernetes supports using it.
(And gVisor is by no stretch of the imagination a 'VM' - it is, at best, a tiny hypervisor, and maybe less than that.)
Most of the old big companies aren't using docker or k8s for their core services. They're all using legacy fat apps that are load balanced in baremetal or vms.
The point isn't that you need Kubernetes specifically, it's that requiring a system that scales well is not as uncommon as the OP puts it.
I had already conceded that. What's your point?
An m5.large instance (2vcpu/8gb) costs $70/mo on-demand ($44/mo with a 1 year reservation). A similar Fargate runtime costs $146/mo.
A b2ms Azure instance (2vcpu/8gb) costs $60/mo on-demand ($39/mo 1 year reservation). Azure Container Instances at a similar provisioning level costs $176/mo by my calculations.
That's not a small difference. That's roughly 2-3x against on-demand pricing, and more like 3-4.5x against reserved.
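Quick arithmetic on the numbers above (prices will obviously drift):

```python
# Rough monthly cost ratios: managed per-container offering vs a VM you run.
aws = {"m5.large on-demand": 70, "m5.large 1yr reserved": 44}
azure = {"B2ms on-demand": 60, "B2ms 1yr reserved": 39}

for vms, managed in ((aws, 146), (azure, 176)):  # Fargate / ACI equivalents
    for name, price in vms.items():
        print(f"{name}: ${price}/mo -> managed option costs {managed / price:.1f}x as much")
```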
Point being, I love virtual kubelet from the perspective of a scale-up just trying to get a product out the door. But for established companies, I still think the core idea of a container on a VM you control is going to rule. Fortunately Kubernetes allows amazingly easy flexibility to switch, and that's a reason why it might be the most important technology created in recent history.
I don’t know the prices for hyperconverged or converged infrastructure off the top of my head, but doubt the amazon rates are even close to competitive unless you’re at a tiny scale (or need a ton of tiny presences in different regions).
Kubernetes is simple enough to set up yourself. They have well-documented tooling, and a solid do-it-yourself guide. OpenStack has none of that (that I can find). You select Red Hat OpenStack (RDO) or Canonical OpenStack (MAAS), and you have to use their all-in-one system to have a deployment -- and that requires a narrow set of variables which every environment might not have. Which is insane, and will hinder adoption.
EDIT. Not 100% correct, see below comments.
Is it complex? Yes. Is it more complex than k8s? Probably. However, there are multiple open source distros outside of RDO (which is not actually a distro but packaging -- see TripleO for a distro-like solution based on RDO). MAAS is not an OpenStack distro; it's a way to manage bare metal nodes that OpenStack is then deployed onto using other Canonical-related tools. That said, selecting a way to deploy and manage OpenStack is complex, but the same is true of k8s.
You seem to know more than I do, so I got to ask. Why does openstack-helm exist? Why would anyone want to deploy OpenStack on top of kubernetes? Is it so you can have the OpenStack API run in Kubernetes that manages physical boxes?
The answer is that k8s offers something fundamentally different and until the person posing the question gets that distinction, the argument is relatively pointless.
I'm not just hiding behind the argument that "you just don't get it... man". Let me point out that you're right: you can manage the OpenStack control plane perfectly well with your configuration management tool of choice, and if you have that process really dialed in, then you'll have a difficult time improving upon it with something like k8s.
I've heard many stories of people who tried to run Kubernetes themselves in production and didn't have a great experience for it.
The security profiles of containers and VMs, including kernel-based VMs, are different. VMs still have a significant edge, because the attack surface is smaller and doesn't have many competing missions.
And let's not forget the recent CPU exploits which found that VMs aren't very separated after all.
The fact that Kubernetes disables this (and other) security features by default should be seen as a flaw in Kubernetes. (Just as some of the flaws of Docker should be seen as Docker flaws not containers-in-general flaws.)
Yes, though as capabilities are added to the kernel, the profiles have to be updated.
That said, VM or no VM, this should be done no matter what.
> And let's not forget the recent CPU exploits which found that VMs aren't very separated after all.
This is a nil-all draw in terms of the respective security postures, though.
I wonder how many multi-tenant workloads are actually at risk of an escape vulnerability. I wager that the multi-tenancy described in the article in the OP is actually disparate workloads across disparate teams in a particular enterprise where it seems (to me) fairly unlikely for someone with access to run a workload to also have the willingness to compile and run malicious code to take advantage of an escape vulnerability.
On the other hand, publicly available compute, i.e. AWS, GCP, Azure, seems way more likely to be the subject of attacks from random malicious individuals seeking to take advantage of an escape vulnerability if one existed.
Shared-kernel Linux containers can be hardened to the point where they likely have a smaller attack surface than a general-purpose hypervisor (for example, look at the approach that Nabla takes).
You then have the hybrid approach of gVisor, still containers, but smaller attack surface than the Linux kernel.
Of course this hardening approach can (and should) be applied to VMs too, which may tip the balance back to them, which is one reason that Firecracker looks so interesting.
It's not all or nothing either. Containerd will support running a mix of containers and kata-containers across workers.
For anyone interested in this topic I wrote about some other container runtimes here: https://kubedex.com/kubernetes-container-runtimes/
User administration and the reliance on client-cert authentication are among the biggest weaknesses I see in k8s security at the moment.
There are obviously other options like OIDC available, but it can be tricky to set up and isn't on by default, so instead client certs are used for user auth -- and given the lack of certificate revocation, they're really not suited for that.
"Compounding this is the fact that most Kubernetes components are not Tenant aware. Sure you have Namespaces and Pod Security Policies but the API itself is not. Nor are the internal components like the kubelet or kube-proxy. This leads to Kubernetes having a “Soft Tenancy” model."
That's like saying the Hypervisor isn't "tenant" aware.
I do think we've not explored enough of the per-namespace policy stuff though -- I'd like both podpreset and a reasonably simple scheduling policy (toleration + node selector control to replace the annotation-based system) to make it in, as well as a simpler namespace initialization path, so you can more easily lock down the contents of a namespace without having to proxy the create-namespace API call.
Because it'd be neat to define that on a per namespace basis.
Being able to limit a user to editing only one pod preset or scheduling policy (via RBAC name access) would provide some useful flexibility for splitting control between the admin and the namespace user.
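Something along these lines already works with resourceNames in RBAC rules -- a sketch with the Python client, using a hypothetical PodPreset called "team-defaults" in a hypothetical "team-a" namespace (PodPreset itself is still alpha):

```python
from kubernetes import client, config

# Sketch of "RBAC name access": a Role that lets a namespace user edit
# exactly one named object and nothing else of that kind in the namespace.
config.load_kube_config()

role = client.V1Role(
    metadata=client.V1ObjectMeta(name="edit-team-defaults", namespace="team-a"),
    rules=[
        client.V1PolicyRule(
            api_groups=["settings.k8s.io"],
            resources=["podpresets"],
            resource_names=["team-defaults"],  # restrict to this one object
            verbs=["get", "update", "patch"],
        )
    ],
)

client.RbacAuthorizationV1Api().create_namespaced_role(
    namespace="team-a", body=role
)
```

The bound user can read and modify that one object but can't create new ones or touch anything else of that kind.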
I'm still living in the world where operating a platform for the benefit of a set of developers entails building and operating a set of services that abstracts the details of the infrastructure sufficiently that these things don't matter.
The best part of the piece for me was "kubesprawl," my new favorite word for the week. We've seen it ourselves to some extent, but we are at least aware of it and try to exert some pressure in the other direction. Beyond that I am not particularly bothered by the idea of running lots of clusters for different purposes.
> Linux containers were not built to be secure isolated sandboxes (like Solaris Zones or FreeBSD Jails). Instead they’re built upon a shared kernel model ...
Solaris Zones and BSD Jails both use a shared kernel.
And I'd bet you they were far from perfect in security isolation.
Now it may be true that security wasn't Linux containers prime reason for being but we have an existence proof that they can be made secure enough -- anyone can get a trial Openshift container for the price of a login.
I’d say that’s less of a prediction than a matter of fact since it’s already happened in 2018 for AWS and GCP.
This post is kind of irrelevant for consumers imo, in that the future of Kubernetes is still the container interface, regardless of whether your vendor decides to run it in a container or a VM.
Interesting notion, but I don't see it. Kubesprawl as it exists today is a result of Kubernetes being hard to tune for disparate workloads, which leads most folks to just punt and stand up multiple clusters.
That said, people are starting to figure it out and more tools like the vertical pod autoscaler are coming. Eventually the more efficient choice will be to run disparate workloads across the same set of hardware.
I'd much rather pay my cloud provider to run my k8s workloads (billed by pod requests/limits) than pay for a control plane and three nodes just to run my workloads.
Triton on-prem is a snap. Boot the headnode from USB, boot the cluster nodes from USB+PXE, and let's get to kicking ass, fighting the good fight and focusing on real groundbreaking applications.
*edit: I'm still a little butt-hurt after kubernetes being rammed down my throat in a large enterprise environment. Apologies to those that are fighting the good fight with kubernetes, I know you're out there, and big high 5 :)
But I'm mostly a high level front-end guy doing back-ends with serverless tech only.
Please, for the love of all that is holy, use a cloud services provider if you need K8s-style service features. If you don't, then just cobble together your infrastructure in the simplest way possible that uses DevOps principles, methods and practices.
Because this is not only hard to get right, but very costly, it is much cheaper and easier to pay someone to do all this for you. It is almost guaranteed that doing it yourself will not give you any significant advantage, cost savings, or increased development velocity.
On top of this, most people don't even need k8s. K8s is an orchestration system for containerized microservices. If you don't need containers and you aren't running microservices, you may be trying to fit a square peg in a round hole. Even if you did need k8s, the benefit may be small if you don't have complex requirements.
Most people can get high-quality, reliable end results with simple, general-purpose solutions using DevOps principles and tools. If you're not Google or Facebook, you probably just need immutable infrastructure-as-code, monitoring, logging, continuous integration/deployment, and maybe autoscaling. You don't need an orchestration framework to deliver all that. And by going with less complex implementations, it will be easier and more cost-effective to maintain.
At the end of the day, if you need k8s, use it. But I really worry about most people who hop on the k8s bandwagon because they see a lot of HN posts about it, or because Google touts it.