
It's not just about scaling. That seems to be the only thing people talk about because it sounds sexy, but the real value is in operations.

Kubernetes makes deployments, rolling upgrades, monitoring, load balancing, logging, restarts, and other ops very easy. It can be as simple as a 1-line command to run a container or several YAML files to run complex applications and even databases. Once you become familiar with the options, you tend to think of running all software that way and it becomes really easy to deploy and test anything.
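To make that concrete, here's roughly what both forms look like (image names and labels here are just placeholders):

    # one-off: run a single nginx pod
    kubectl run web --image=nginx:1.25

    # or declaratively: a minimal Deployment manifest, applied with `kubectl apply -f web.yaml`
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: web
      template:
        metadata:
          labels:
            app: web
        spec:
          containers:
          - name: web
            image: nginx:1.25
            ports:
            - containerPort: 80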

So yes, for personal projects a single server with SSH/Docker is fine, but any business can save time and IT overhead with Kubernetes. Considering how easy the clouds have made it to spin up a cluster, it's a great trade-off for most companies.
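For instance, the managed offerings get you a working cluster with roughly a one-liner (cluster names and node counts below are made up, and exact flags vary by provider):

    # GKE
    gcloud container clusters create demo --num-nodes=3
    # EKS
    eksctl create cluster --name demo --nodes 3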




Exactly. It solves some of the most important problems that come up when working with microservice-based architectures, and it establishes mature patterns for multiple developer teams to update and scale each piece of a distributed application independently.
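For example, one team can roll out and scale just its own piece without touching the rest of the stack (the deployment and image names here are hypothetical):

    # roll the payments service to a new image and watch the rollout
    kubectl set image deployment/payments payments=registry.example.com/payments:v1.4.2
    kubectl rollout status deployment/payments

    # scale it independently of every other service
    kubectl scale deployment/payments --replicas=5

    # and back out if it goes wrong
    kubectl rollout undo deployment/payments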

Also, what do you do when your Digital Ocean droplets need to serve hundreds or thousands of customers? Maybe each customer needs its own database (in my case they do), configuration, storage, and multi-node setup. How do you keep track of all that, how do you automate it, and how do you QUICKLY recover in failure scenarios? You need to be able to handle a node failing, and if there's a bad actor you need to be able to move them off easily without downtime or affecting other customers, automatically and seamlessly.

You also need an overview of your resources across all nodes and where apps are placed, and something that decides whether the hardware your new container is being scheduled onto can handle another JVM or whatever. For cost effectiveness, you want to be able to overcommit resources, so you want containers. You want those to translate to other platforms: AWS, Google, Azure, on-prem. With k8s you have a single declarative language that works anywhere you can deploy a cluster. You need to deal with growth, with good patterns for rolling back and updating versions of parts of the stack. You want all of your deployments to be declarative, so you can tightly control the options for each one and get back to where you were.
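The overcommit and placement part comes down to resource requests and limits on each container: the scheduler places pods by summing requests, while limits cap actual usage, so setting requests below limits lets you pack nodes tighter. A rough sketch (the image and numbers are made up):

    apiVersion: v1
    kind: Pod
    metadata:
      name: customer-api
    spec:
      containers:
      - name: app
        image: registry.example.com/customer-api:1.0   # hypothetical image
        resources:
          requests:          # what the scheduler reserves when placing the pod
            cpu: "250m"
            memory: "512Mi"
          limits:            # hard ceiling at runtime; requests < limits = overcommit
            cpu: "1"
            memory: "1Gi"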

I agree that it doesn't make sense for everything, and it requires a fundamental understanding of Linux and software before it even makes sense to try, but it solves real-world problems for many people; it's not just a hype thing. I would say Docker itself was more of a hype thing than k8s. The maturity and features of k8s and the other orchestration systems that came out of the Docker model are there for a reason: they solve all of the real-world problems people couldn't solve with vanilla Docker without tons of custom scripting and hacky workarounds. Docker solved the big problem by providing an isolated environment for each app, letting you split things into microservices without committing a full statically resourced VM or bare-metal box per service. K8s solves all of the other problems that came out of that (pods, stateful sets, init containers, jobs, cronjobs, service definitions, deployments, volume claims).
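To give a flavor of one of those objects, a CronJob is a few lines of YAML and k8s handles the scheduling, retries, and run history for you (the image and schedule are placeholders):

    apiVersion: batch/v1
    kind: CronJob
    metadata:
      name: nightly-report
    spec:
      schedule: "0 2 * * *"        # every day at 02:00
      jobTemplate:
        spec:
          template:
            spec:
              restartPolicy: OnFailure
              containers:
              - name: report
                image: registry.example.com/report-runner:1.0   # hypothetical image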


Exactly!



