Ask HN: Is Kubernetes too complex for most use cases?
17 points by ojhughes 5 months ago | 20 comments
I understand why people choose K8S as a platform, but is the trade-off worth it considering how much complexity it introduces? There are so many moving parts that could go wrong and be very difficult to troubleshoot.

At this stage I am still leaning towards plain old VMs in the cloud + something like Consul for service discovery. Maybe I am being overly cautious or missing a fundamental piece of the picture?




It depends on how you define "most use cases".

If you need containers to solve the "it works on my workstation" problem, you need to run your containers in production and scale them easily, and you have lots of applications to set up and deploy, then Kubernetes is fantastic. But if you can get away with just running your application in plain VMs, and maintaining Consul-based service discovery isn't a burden for you, your proposed setup makes a lot of sense.

Custom infra has an operational cost that is proportional to the number of applications. Kubernetes has a high fixed operational cost that you always incur, whether you have 1 application or 20. Once your operational complexity is high enough, it makes sense to use Kubernetes.

Soon, higher-level solutions will come along that deliver the same advantages as Kubernetes but without its learning curve and ops cost.


The higher-level solutions already exist; they're just Kubernetes-as-a-service from one of the providers. Anything simpler would probably just be hiding complexity from you, which isn't necessarily better.

@OP, I'd probably use GKE.


I think if you have to ask, it probably is too complex for your situation.

One thing I think it's valuable to keep in mind is that if you're successful, you'll go through multiple tech eras, and you'll have to change (pick your favorite) language|service|cloud|etc.

If you choose not to use k8s today, and in 2 years it has the features you need with the right amount of complexity for you, then that will be the right time to introduce k8s. If you do choose k8s today, in 2 years you'll have to make a change anyway, either in this domain or another.


I actually talked about this at a Linux Foundation conference, where I mentioned that if you follow 12-factor app design, Kubernetes makes perfect sense; if not, VMs might actually be the better choice for you. (Watch the talk here: youtube.com/watch?v=FcNILuwmipA )


Depends. If you are just going to do simple deploys, then it's not worth it. If you are going to make good use of it, then sure. Besides deployment, there's the labeling feature, namespaces, bin packing of pods, container resource constraints, and the declarative nature that gives you resilience and scaling. If you use all of that, it's worth it. If you don't have a scaling problem and don't deploy often, then probably not.


I would be interested to learn from companies that have been running K8s in their production datacenters for critical, complex applications: how much ramp-up time did their operations teams need to learn to troubleshoot the system and network anomalies that don't tend to surface in the typical system logs?


My deploy times came down since I started using K8s; a deploy takes just 20 seconds for me now. I haven't measured the performance cost. The reliability and speed of rolling out features has improved a lot.


Worked with plain old VMs + systemd + Consul + Ansible in a four-datacenter setup with 200+ VMs and 30+ services, with no issues.


"Is too complex?" it is not right type of question. "How many people should work exclusively on K8S?" is right type of question.


My concerns are around complexity: not just the complexity of deploying K8s, but also the complexity introduced when debugging application issues (e.g. TLS authentication to my app breaking due to the latest upgrade).


It is not about deploying; it is about keeping it running smoothly for users.


I would argue that (1) Kubernetes isn't that complicated, and (2) you're paying a one-time cost in complication that, when managed correctly, gives you an operationally much simpler substrate to run apps on.

To explain, consider the situation with bare VMs, managed with something like Puppet/Ansible/Salt/Chef, with SSH access, iptables, Nginx, etc. -- a classic stack where you address individual nodes, which you may add/remove somewhat dynamically, but where node identity still matters because you have to think about it. You need monitoring, you need logging, you need some deployment system to clone apps onto the nodes and restart them, and so on. Whatever you choose, it's going to be something of a mish-mash of solutions. Most of your config goes into the configuration management engine (Puppet or whatever), which has a data model that maps a static configuration to a dynamic environment -- a model that, after 10+ years of using it, I find rather awkward. You have to jump through all sorts of ugly hoops to make a Unix system truly declarative and reactive. It wasn't made for it. Unix isn't stateless.

For example, many adventures in package management have shown that deploying an app -- whether it uses RubyGems, NPM, PIP, Go packages or whatever -- in a consistent, encapsulated form with all its dependencies is nigh impossible without building it once and then distributing that "image" to the servers. You don't want to run NPM on n boxes on every deploy. Not only is it inefficient, there's also no guarantee that it produces the same build every time on every node, or even that it will work at all (since NPM, in this example, uses the network and can fail). This problem alone demands something like Docker. Then there's the next step: how you run the damn app and make sure it keeps running on node failure.
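As a rough illustration of the build-once idea (the image name, registry, and Node.js base image here are all hypothetical):

    # Build once: dependency resolution (npm here) happens a single
    # time at build, then the finished image is pushed and pulled
    # onto every node instead of rebuilding per machine.
    cat > Dockerfile <<'EOF'
    FROM node:18-slim
    WORKDIR /app
    COPY package*.json ./
    RUN npm ci
    COPY . .
    CMD ["node", "server.js"]
    EOF
    docker build -t registry.example.com/myapp:1.0.0 .
    docker push registry.example.com/myapp:1.0.0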

Kubernetes is a dynamic environment. You tell it what to run, and it figures out how. It's a beautiful sight to behold when you accidentally take a node down and see Kubernetes automatically spread the affected apps over the remaining nodes. It's also beautiful to watch the pod autoscaler automatically start/stop instances of your app as its load goes up and down. It also feels amazing to bring up a parallel version of an app, built from a different branch, that receives only test traffic because you're not quite ready to deploy it to production. It's super nice to create a dedicated node pool, then start 100 processing jobs that will queue up and execute as the node pool has enough resources to run the next one. Kubernetes turns your cluster into LEGO blocks that can constantly shift around with little oversight. I'm never going back to a basic VM, not even if I'm running a single node.
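To make "you tell it what to run" concrete, a minimal sketch (the app name, image, and numbers are hypothetical):

    # Declare the desired state; Kubernetes keeps 3 replicas running
    # and reschedules them elsewhere if a node goes down.
    kubectl apply -f - <<'EOF'
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: myapp
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: myapp
      template:
        metadata:
          labels:
            app: myapp
        spec:
          containers:
          - name: myapp
            image: registry.example.com/myapp:1.0.0
            resources:
              requests:
                cpu: 100m
                memory: 128Mi
    EOF

    # The autoscaler mentioned above, in its simplest CPU-based form:
    kubectl autoscale deployment myapp --min=3 --max=10 --cpu-percent=80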

Now, if your choice is not between Kubernetes and "classical VMs" but between Kubernetes and some other Docker-based solution, then... I would still choose Kubernetes. There are so many advantages, not least the ease with which you can transfer an entire orchestration environment to your developers' laptops -- Kubernetes runs fine locally, and all you need to replicate the same stack is a bit of templating. (We use Helm here.) The competition just isn't as good.
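For instance, assuming a typical Helm chart layout (the chart path and value names below are hypothetical), bringing the same stack up on a laptop can be as simple as:

    # Same chart as production, scaled down with a couple of overrides.
    helm upgrade --install myapp ./charts/myapp \
      --set image.tag=dev \
      --set replicaCount=1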


Thanks for the insightful response. Making classic Unix VM automation declarative is definitely filled with workarounds and hidden surprises.


Is there something so wrong with "...building it once and then distributing that "image" to the servers."? Seems to be a rather simple solution to deployment.


You can absolutely build your app once, tarball it up, and distribute it, and some people do/did it this way.

But you will have to write that system yourself, and there are some challenges involved. For example, if you have binary dependencies (either executables or shared libraries — even if you use an interpreted language like Ruby or Node.js, third-party packages often pull in shared libs), you will have to make sure they're (1) included, and (2) either statically linked, or that your servers are running the exact things they depend on (things like libc), and (3) that the architecture is the same (probably moot in these 64-bit days). It might be possible to write a little script that finds all binary dependencies and includes them in the tarball.
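A sketch of what that little script might look like (illustrative only: parsing ldd output is fragile, and this misses libraries loaded at runtime via dlopen; "myapp" is a hypothetical binary):

    # Copy the binary plus every shared library ldd can resolve for it.
    mkdir -p bundle/lib
    cp myapp bundle/
    ldd myapp | awk '/=> \//{print $3}' | while read -r lib; do
      cp "$lib" bundle/lib/
    done
    tar czf myapp.tar.gz bundle/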

That said, packaging and distribution is the more trivial aspect of all of this. Running an app is the hard part, and that's where Kubernetes really shines.


Hmm, I would argue that both are equally challenging problems. That said, RPM, for example, has solved this problem for a while.

If you're going to deploy on, say, a RHEL7 distro, you can package your software in an RPM, declare its dependencies, and have it run in a VM alone...

Or if you want to be even fancier, you can use something like the Nix package manager. I don't think Docker solves the dependency problem at all; it just creates an isolated environment to run something in.

You will incur the cost of virtualization at runtime, which is higher than a container's, and maybe the cost of managing the VMs through something like vSphere, but I still think it's much easier to manage than something like Kubernetes. A lot easier to hire for as well.

That said, if I could use the cloud I would probably use managed K8s on GKE :). On premises, you've got to evaluate the trade-offs.


I've heard of a few people using RPM for app deployment, but the fact that it's not a widespread solution suggests that it's not ideal. It has several issues. For example, it ties your apps to a very specific distribution.

Depending on how liberal and flexible your package system is (e.g. Nix is probably much better than RPM here), conflicts can also be problematic. One package specifying Ruby 2.1 might conflict with another specifying 2.2. I don't know RHEL specifically, but on Ubuntu, tracking concurrent versions of things like Ruby and Node.js has historically been a pain, often requiring third-party packages designed to let concurrent versions live side by side without conflict.

Conflicts can also occur at the system level. One app might want a certain libxml2 or libreadline or whatever, and that happens to be used by some system software that breaks on the version the app wants. In other words, a single app can now break an entire node.

The only way to fix these things is isolation.

Again, packaging is the least interesting problem. After all, if you have Docker or jails or whatever, you can package your app and then manually start it. But orchestrating code in a declarative, self-healing way is what makes Kubernetes such a powerful system. The isolation you get from Docker is a foundational concept, but the way it's done is mostly an implementation detail.


Yep, that's why I mentioned one VM per service. It's more isolated than Docker.

When you say it's not widespread, I am not sure what you mean? There are thousands of RPM packages available, including things like Elasticsearch.


There are RPMs for all sorts of general-purpose software, of course, but it's rare to deploy one's own apps (as in the ones that run your product or whatever) via RPM.


I don't think it is. Red Hat and CentOS are very common server platforms.

Maybe in the startup world it is, but older companies have been deploying their own apps with RPMs for a while. You can download RPMs for multiple paid software apps today.

I am very surprised when I want to install an app and don't find an RPM for it, to be honest.

When I use RPM for my own apps, I think of it as a self-contained blob of data that has everything required to run an application. Basically a tarball on steroids that lets you version, upgrade, and declare dependencies.

For example, if you are doing a Django app, it's a decent way to avoid depending on pip and/or the external world to deploy your software, and to achieve immutable builds.
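A minimal sketch of that approach (names, versions, and paths are hypothetical, and the spec is abridged -- a real one also needs %prep/%build/%install sections to stage the files):

    # Everything the app needs is vendored under /opt/myapp at build
    # time, so installation never touches pip or the network.
    cat > myapp.spec <<'EOF'
    Name:     myapp
    Version:  1.0.0
    Release:  1%{?dist}
    Summary:  Django app bundled with its own virtualenv
    License:  Proprietary
    Requires: python3
    %description
    Self-contained build of myapp.
    %files
    /opt/myapp
    EOF
    rpmbuild -bb myapp.spec
    yum install ./myapp-1.0.0-1.el7.x86_64.rpm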



