
I would really like to hear what your proposed alternative for managing software is. The world is digitising at a staggering pace and we have to deal with ever-increasing density and complexity in software deployments. How would this look in your ideal world?



In my ideal world, it'd look a lot like SmartOS + NixOS, but that's an ideal. There is a massive middle ground between k8s and some hypothetical ideal, and k8s is completely on the "horrifying monstrosity that you shouldn't touch with a ten-foot pole unless you really have no other options" side of things.

Most server-grade operating systems include facilities that are robust, mature, compact, performant, and reasonably well-integrated, and for the things that aren't part of the OS, there is a long and glorious lineage of applications that can lay claim to those same virtues. Kubernetes makes use of many of them to do its work.

Those of us who've configured a router, load balancer, or application server independently are just perplexed when someone acts like k8s is the only way to handle these very common concerns. We're left asking "Yeah, but... why all this, when I could've just configured [nginx/Apache/haproxy/iptables/fstab]?"

The naive admin will say "because then you just have to configure Kubernetes!", but unfortunately, stacking more moving parts on top of a complex system typically hurts more than it helps. You'll still need to learn the underlying systems to understand or troubleshoot what your cluster is doing -- but then, I think part of what Google et al are going for here is that instead of that, you'll just rent a newer and bigger cluster. And I guess there's no better way to ensure that happens than to keep the skillset out of the hoi polloi's hands.

I assume that many "devops" people are coming from non-*nix backgrounds, and therefore take k8s as a default because they're new to the space and it's a hot ticket that Google has lavishly sworn will make you blessed with G-Glory. But these systems have been running high-traffic production workloads for a very long time. Load balancing, failover, and host colocation had been happening at most shops for some 20 years before k8s was released in 2014. These aren't new problems.

Alan Kay has called compsci "half a field" because we're just continually ignoring everything that's been done before and reinventing the wheel rather than studying and iterating upon our legacy and history. If anything is the embodiment of that, it's Kubernetes.


I appreciate the lengthy reply and I sympathise with your concern regarding the cargo-culting of technology trends; it's not the first time this has happened, nor will it be the last. Still, I disagree with your view in general. I think the past few years have brought tremendous innovation in the space: software-defined networking, storage, and compute, all available to the "hoi polloi" as open-source, high-quality projects that are interoperable with each other and ready to deploy at the push of a button. And you know why this has happened? It's because Kubernetes, with all its complexity, has become the de facto standard in workload orchestration and has brought all the large players to the same table, scrambling to compete on creating the best tools for the ecosystem. I am not naive enough to think Google didn't strategise for this outcome, but the result is a net positive for infrastructure management.

I also sense a very machine-centric view in your message, and there is a certain beauty in well-designed systems like SmartOS and NixOS. But you are missing the point. The container orchestration ecosystem, for all its faulty underpinnings (Linux, Docker, Kubernetes), is moving to an application-centric view that allows the application layer to interact with and manipulate the infrastructure it is running on more intelligently. Considering the Cambrian explosion in software and the exponential growth in its usage (which industry is not digitising?), this transition is not surprising at all.

Regarding the complexity of Kubernetes, some of it is unavoidable, especially considering everything it does and the move from machine-centric to cluster-centric management. There are other tools that are operationally simpler (Docker Swarm, Nomad), but they definitely don't offer the same set of features out of the box. By the time you customise Nomad or Swarm to feature parity with Kubernetes, you will end up with a similar-looking system, perhaps a bit better suited to your use case. The good part is that once an abstraction becomes a standard, the layers underneath can be simplified and improved. Just take a look at excellent projects like k3s, Cilium, and LinuxKit and you will see that the operational complexity can be reduced while the platform interface is maintained.

To summarise, I am very happy that Kubernetes is becoming a standard, and I am convinced that 30-50 years from now we will look back on it the way we now look at the modernisation of the supply chain triggered by the creation of the shipping container.


First, thanks for your response, it's been a good discussion.

I agree that a great deal more technology is being made publicly available, and that it is widely seen as beneficial. I don't necessarily agree that that technology is needed by most people, despite the Cambrian explosion you reference.

On machine centrism: if anything, containers have increased the importance of systems concerns, because now every application ships an entire userland into deployment with its codebase. As long as containers come wrapped in their own little distribution, each code deployment needs to be aware of its own OS-level concerns. This is anything but application-centric. If you want application-centric deployment, just deploy the application! True application-centric deployments are something like CGI.
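
To make that concrete, here is a minimal sketch of the CGI-style deployment I mean, assuming a web server (Apache, nginx with fcgiwrap, whatever you like) that is already set up to execute scripts out of cgi-bin; the file name and greeting are made up:

    #!/usr/bin/env python3
    # The whole "deployment" is copying this one file into cgi-bin/.
    # The web server runs it per request; the OS supplies everything else.
    import os

    print("Content-Type: text/plain")
    print()  # blank line ends the CGI headers
    print("hello from", os.environ.get("SCRIPT_NAME", "the application"))

No image, no bundled userland, no orchestrator: the application itself is the artifact you ship.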


A better understanding of the hardware and software stack, and a return to fundamentals, instead of piling frameworks sky-high on top of each other until it becomes very hard or outright impossible to tell which layer is buggy when things go sideways.

> complexity in software deployments

Reduce complexity


So no solution then. Kubernetes isn't just complexity for the sake of it.

It is designed to solve a very real and very hard problem.


> It is designed to solve a very real and very hard problem.

Completely agree. The thing is that the problem it's designed to solve has been badly misrepresented.

If your problem is "At Google, we have fleets of thousands of machines that need to cooperate to run the world's busiest web properties, and we need to allow our teams of thousands of world-class computer scientists and luminaries to deploy any payload to the system on-demand and have it run on the network", then something like Kubernetes might be a reasonable amount of complexity to introduce.

If your problem is "I need to expose my node.js app to the internet and serve our 500 customers", it's really, really not.


And it's probably used by many people who don't know how to optimize a query, so they go for a big-data clustered solution instead.



