I'll never understand these arguments against Kubernetes and its complexity that are so prevalent on HN. Yup, k8s has a learning curve, just like any new technology. You'll need to spend a couple of days understanding it. But once you've grasped the abstractions it's actually quite easy to set up, operate, and manage, even for small side projects. I'd pick it any day over "third party magic" that is a black box I have no control over.
It feels like many developers these days are so spoiled by magic services that they are unwilling to even spend a few days going deep into something. Everything has to work in minutes. Next, what inevitably happens is that services shut down, pricing changes, or something stops working, and they have no way to debug it or move off it. And then we get customer support complaints and posts on HN about it. These developers look for the next shiny 3rd party service that solves the problem immediately and repeat the cycle, without ever learning anything that helps in the long term.
> I'll never understand these arguments against Kubernetes and its complexity
Because my project doesn't need the benefits that K8s offers. Why choose the more complex solution when I don't need it? And judging from experience and what others have shared, most projects will not need it.
> no way to debug it or move off it
If a managed service is down, the host company debugs it. That's the whole point of a managed service, so I don't have to do devops.
Why would there be no way to move off of it? PaaS is so easy to onboard and deploy, you can move to other services easily.
> It feels like many developers these days are so spoiled by magic services that they are unwilling to even spend a few days going deep into something
Why don't you go all the way and build your own physical servers instead of using these magic cloud machines? That way you can go deep into it, and if they have problems, you can debug yourself.
If you enjoy building and maintaining your own K8s clusters and you feel it benefits you, great for you. But don't be so condescending to people who don't feel the same and choose the simpler infra solution because they'd rather go deep into building their application rather than spending time with K8s. Calling them spoiled for that is just obnoxious.
> Because my project doesn't need the benefits that K8s offers.
So what? It just takes a month-long time investment to get used to Kubernetes (many people already have some experience from working for employers), and after that it takes one or two extra days to make a project as easy to deploy as Heroku. So basically it is a fixed learning cost. Obviously, if someone is looking to get something up as soon as possible and doesn't have Kubernetes experience, it is better not to use it.
But for me, the fixed learning cost clearly paid off. Kubernetes is a lot better than the alternatives if you want fine-grained security policies, namespaces, complex scaling rules, multiple clouds, multiple node types, etc. Even if I don't need any of these now, I'm not putting in any extra time investment for new projects, and it guarantees I won't need to migrate to some other form of deployment if my project becomes big.
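To give a rough idea of what that looks like once the learning cost is paid, the Heroku-ish baseline is a handful of commands against an existing cluster. This is only a sketch: the namespace, app name, image, and ports are all made up.

```sh
# Isolated namespace for the project (name is hypothetical)
kubectl create namespace myapp

# Run the app from a container image (hypothetical registry/image) and put a load balancer in front
kubectl -n myapp create deployment myapp --image=registry.example.com/myapp:latest --replicas=2
kubectl -n myapp expose deployment myapp --port=80 --target-port=8080 --type=LoadBalancer

# A basic version of "complex scaling rules": CPU-based autoscaling between 2 and 10 replicas
kubectl -n myapp autoscale deployment myapp --min=2 --max=10 --cpu-percent=80
```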
> It just takes a month long time investment to be used to kubernetes
Ok, so next time I'm laid off I will have some tech I can choose to learn. Maybe I will spend that month.
Even when I wasn't a parent, I don't think my life was so open that I could say "well, it only takes a month, I guess I'll learn that." Lots of things take a month to learn. People pick and choose.
> Yup, k8s has a learning curve, just like any new technology.
It's more than the learning curve, though. It also requires resources for itself (the control plane) so it's not really suitable for small projects unless you use a managed offering (which has the same downsides as you mentioned in your second paragraph).
Also you make it sound like all Kubernetes alternatives are 3rd party, paid, hosted services, which is far from the truth.
That is only partially true. So you spin up a GKE cluster, set up your deployment, and push it out via kubectl. OK, your app is running, but now you need access to it. The portable way is a Service of type LoadBalancer, but that's just a TCP load balancer. So you go for the Ingress API. Then you want to do a little bit more, and you learn that the Ingress controller on GKE just configures an L7 load balancer at Google for you. Nice, that can do what I want. I want it to run dual-stack IPv4 and IPv6 (my earlier example of those GKE shortcomings was setting a response header, but that was recently added after only 3 years). Oh snap, supported by the LB but not by the Ingress controller. Then you dig deeper and learn that development has already shifted from the Ingress to the Gateway API. And now you're knee deep in problems, because what you want to do is not really part of the Ingress or Gateway API, and you're at the mercy of the vendor you chose. Or you run a vendor-neutral Ingress controller, like the classic nginx one. That latter choice means you have to familiarize yourself with the oddities of that component as well. And then you also want something for DNS, Let's Encrypt, and so on. Half a dozen controller installations later you finally have something. But now you have to maintain it, because the managed service covers only k8s itself.
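To make the first step of that concrete, here's roughly where you start before all those layers pile up; just a sketch, with a made-up app name and host, and none of the vendor-specific annotations shown.

```sh
# The "portable" way: a plain L4/TCP load balancer in front of the Deployment's pods
kubectl expose deployment myapp --port=80 --target-port=8080 --type=LoadBalancer

# The Ingress way: on GKE this rule gets translated into a Google L7 load balancer
kubectl create ingress myapp --rule="app.example.com/=myapp:80"
```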
But one should not forget that you also had to build up a lot of vendor-specific know-how in the past. Someone had to configure your F5 BigIP and your Juniper router and the Cisco switch, and of course the Dell or HPE boxes you bought.
My bigger concern is the immature k8s ecosystem, which is kind of reinventing classic Unix stuff for distributed computing. That has only just started, so you have to lifecycle components with breaking changes every few weeks. And people took issue with updating Ubuntu LTS releases every two years. Now they have to update some component every week.
I don't know about every week... I ignore my k8s setup for 6-12 months at a time. Once in a while DigitalOcean bugs me to upgrade k8s and that, I admit, has been a bit of a disaster in the past.
I don't know. I had a pretty good thing going prior to k8s too, just some rsync and `ln -sfn` and it was easy, simple and very fast, but like you said, upgrading Ubuntu and PHP and other services becomes the problem there. Couldn't do that without downtime.
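For context, that old workflow was roughly this; a sketch only, with made-up paths and hostname:

```sh
# Push the new release to its own directory, then atomically flip the "current" symlink
RELEASE=$(date +%Y%m%d%H%M%S)
rsync -az ./build/ deploy@myserver:/srv/app/releases/$RELEASE/
ssh deploy@myserver "ln -sfn /srv/app/releases/$RELEASE /srv/app/current"
# Rollback = point the symlink back at the previous release directory
```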
This has been my experience with kubernetes as well.
Look I can and have done all these things, but it's just not worth my time to do them for my little apps. I'd rather be talking to customers and shipping features at this point in my career.
Currently dealing with almost exactly this using Citrix LBs and k8s. You can't even really tell what is happening with the Citrix ingress controller when things break. :-/
IMHO, this is too simplistic a view of things. Personally, I don't judge a tool by how easy it is to start with or how easy it is to understand, but by how easy it is to fix something when things begin to fall apart. In that regard, K8s is a magic box. You can get a high-level understanding of the Linux kernel in a couple of hours, yet you'll need years with it before you can debug the nasty things.
> It feels like many developers these days are so spoiled by magic services that they are unwilling to even spend a few days going deep into something.
Quite the opposite. K8s is a big opaque ball of magical complexity to most of us devs, for sure (as is Heroku). However, for 100% of my use cases, nothing should be more complex than the database.
I’m not opposed to learning about it, but reducing complexity wherever reasonable has paid off for me.
I have a hypothesis about why people think it is complex. In my opinion it is not very complex, but the way of working, the interfaces developers deal with, and the formal process around k8s create a perception that it is. It is also all the tooling around k8s that makes it feel complex: you don't need Istio, Kyverno, OPA, or the other million tools from the CNCF landscape to deploy your simple Rails app. The industry had to paint operating k8s as hard so that it could sell tools around it. The current version of k8s, and all the stripped-down distributions, are much easier to operate than before.
It barely takes a weekend to learn and play around with managed k8s from cloud providers or with minified k8s tools like k3s, k0s etc.
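For example, a single-node k3s playground is roughly this (the install one-liner is the one documented by the k3s project; the app name is made up):

```sh
# Official single-node install script from the k3s project
curl -sfL https://get.k3s.io | sh -

# k3s ships its own bundled kubectl
sudo k3s kubectl create deployment hello --image=nginx
sudo k3s kubectl expose deployment hello --port=80 --type=NodePort
sudo k3s kubectl get pods
```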
I think it's fair to say that it's considerably more complex than a single (or say 2-3) Linux VMs. I can see that managed Kubernetes would make things simpler. But managed Kubernetes is also just as expensive as other managed solutions (more expensive for small projects).
The problem is that the surface area for k8s is too large for one person to truly understand it. Sure, you can get up and running in a few days, but good luck when your cluster mysteriously stops working and ignores all kubectl commands.
In the past 6 years of working with Kubernetes in production every day, I've never had that happen to me, so it doesn't sound like the error case I should optimize for.
I dig the independence sentiment and agree somewhat / in general on the "should know how it works underneath" part but Heroku is still the one platform I can recommend to most of my exclusively-dev friends.
The systems evolution that led us to Kubernetes does have its merits, of course:
I just know I can trust the simple foundations of control loop meets immutability in order to maintain complex distributed systems, including expectations for self-healing and resilience. Though IMHO the "freedom" aspect you hinted at above is the more important one these days - yet cloud platform dependence unfortunately still creeps in, in other ways.
Personally though I bet that for an easy 80% of "need to deploy software" cases, Kubernetes is indeed overkill - as long as it isn't "managed" / abstracted away at least to the level of a Heroku. And of course there are many other ways and platforms to benefit from "containers" and their promise (?) of independence these days.
All power to anyone though if they have the money/time/energy to put work into that layer, in addition to whatever else they are trying to achieve. Disclaimer being that learning can definitely be its own merit (that's how I got started myself) but it's important to know when it's "just" that, when it's more and when it's simply overkill.
My issue is that I have 40 dockerized containers set up behind a reverse proxy and, although I want to play with k8s to learn, I've already dumped so much time and effort into docker-compose that I'm cautious about migrating to a new system. Is the architecture substantially different?
I went down that road a few years ago. Dockerized my app because I figured that was the first step. Then naturally docker-compose to bring the pieces together, right? No. Was upset that was not the next step to Kubernetifying an app. Use Kustomize instead. It's not that bad. Did take days or a few weekends, but it requires very little maintenance now and it's easy to spin up new apps.
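Roughly, the layout ends up as a directory of plain manifests plus a kustomization.yaml listing them, applied with the -k flag. A sketch with made-up file and directory names:

```sh
# Directory layout (hypothetical):
#   myapp/
#     deployment.yaml     # ordinary Deployment manifest
#     service.yaml        # ordinary Service manifest
#     kustomization.yaml  # lists those files under "resources:"
kubectl apply -k ./myapp

# Overlays (e.g. dev/ and prod/) patch the same base, so one tree covers all environments
kubectl apply -k ./overlays/prod
```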