I feel like I was lied to about what kubernetes is, and comics like this sorta play into it.
The sales pitch is basically akin to "mongo is webscale" but "kubernetes is scaling/portability/reproducibility."
Maybe my workplace just did it wrong, but the way they've set up kubernetes is exponentially more work than just scaling up (akin to an AWS autoscaling group). It has its own networking system, competing languages for manifests, competing auth mechanisms, multiple ingress/routing mechanisms, different CI/CD tools, Helm/Kustomize, multiple competing scaling mechanisms that can sometimes be used together. And don't get me started on CRDs and such.
In some ways it's closer to building your own AWS than it is to just autoscaling, which doesn't make much sense to me if your company is full of 10-year AWS experts and you're suddenly reinventing the wheel on all of this.
Which isn't to say it's strictly bad, but when I see what my work team has built, it's very clear that local testing is well out of the realm of possibility, and their work has already led to multiple outages. And most of the administration isn't "clickops" but IMO it's even worse -- people typing admin commands at their console with no review -- type-ops.
It wasn't the UI that made clickops bad; it was the fact that it was a bunch of manual admin changes not going through PRs and automation.
Kubernetes is very much a choose-your-own-adventure, and it’s very easy for your infra people to choose an adventure through Dante’s nine circles of SaaS hell.
It doesn’t help that the popular tools like Helm or Kustomize are still so over-general and YAML-happy that they only seem to add complexity.
If I were going to brave these waters again, I would probably write my own tooling / templates with very narrow customization per “service”, and all the YAML generation would be done from TypeScript or similar function calls & composition, or at worst Terraform. The Kustomize or Helm stuff that has YAML templating YAML… painful.
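To be concrete, here's a sketch of the shape I mean -- the webService helper and every name inside it are made up for illustration, not a real library. One nice property: JSON is a subset of YAML, so kubectl apply -f - will happily take plain JSON.stringify output and you don't even need a YAML library:

    // Hypothetical sketch: one narrow, typed function per pattern we
    // actually use, instead of general-purpose YAML templating.
    interface ServiceSpec {
      name: string;
      image: string;
      port: number;
      replicas?: number;
    }

    function webService({ name, image, port, replicas = 2 }: ServiceSpec) {
      const labels = { app: name };
      const deployment = {
        apiVersion: "apps/v1",
        kind: "Deployment",
        metadata: { name, labels },
        spec: {
          replicas,
          selector: { matchLabels: labels },
          template: {
            metadata: { labels },
            spec: {
              containers: [{ name, image, ports: [{ containerPort: port }] }],
            },
          },
        },
      };
      const service = {
        apiVersion: "v1",
        kind: "Service",
        metadata: { name, labels },
        spec: { selector: labels, ports: [{ port: 80, targetPort: port }] },
      };
      return [deployment, service];
    }

    // Usage (image name is a placeholder). Pipe to `kubectl apply -f -`;
    // a stream of JSON docs separated by `---` parses as multi-doc YAML.
    const manifests = webService({ name: "checkout", image: "registry.example.com/checkout:1.4.2", port: 8080 });
    console.log(manifests.map((m) => JSON.stringify(m, null, 2)).join("\n---\n"));

The point isn't this exact code; it's that the type checker and plain function composition replace a second templating language layered on top of YAML.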
If you want kubernetes just for autoscaling, you can just use a Deployment with a HorizontalPodAutoscaler. It's probably a few tens of lines to do this, plus another ten or so for a Service/Ingress.
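Roughly like this -- a minimal sketch where the names, image, and thresholds are all placeholders:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: myapp
    spec:
      replicas: 2
      selector:
        matchLabels: { app: myapp }
      template:
        metadata:
          labels: { app: myapp }
        spec:
          containers:
            - name: myapp
              image: registry.example.com/myapp:1.0.0  # placeholder
              ports: [{ containerPort: 8080 }]
              resources:
                # a CPU request is required, or the HPA has nothing
                # to compute utilization against
                requests: { cpu: 250m }
    ---
    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: myapp
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: myapp
      minReplicas: 2
      maxReplicas: 10
      metrics:
        - type: Resource
          resource:
            name: cpu
            target: { type: Utilization, averageUtilization: 70 }
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: myapp
    spec:
      selector: { app: myapp }
      ports: [{ port: 80, targetPort: 8080 }]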
In fact the complaint that you need to learn Helm or networking or auth or CRDs is unfair. If you don't want all the goodness of Helm you can do it the old-fashioned way, i.e. building your own Docker container and using a Deployment.
I understand your point -- is my complaint about kubernetes itself or just the way that people at my job set it up?
But in practice my feeling is that if a tool/ecosystem has too many viable/competing ways to do things, and/or it makes adding complexity too easy, then it does encourage devs to get way out of their depth. That's less of a problem if it's some tiny service only they have to deal with, but a threat to the whole company if they're using a multimillion dollar company's uptime as their little playground.
That may be less a criticism of kubernetes itself and more of the ecosystem/trend, but I still think it's relevant. I'd compare it to frontend: I have often found it intolerable not because the tools are bad, but because the rate of change of the "in" tools is so excessive.