
This seems to be a common refrain from people who have not invested the time in k8s. It is not as complicated as people think.



I've used k8s at various times since its inception. I've yet to see a compelling argument for running your control plane on the same layer as your workloads. I have always found this to be a recipe for disaster when 'something' in your service dependency chain breaks.

Also, not having load balancer support built in kills it for me. Yeah, you can do MetalLB or Nginx ingress, but it means they punted on one of the major components needed to make it 'cloud agnostic.'
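For reference, the stuff you end up writing and maintaining yourself looks roughly like this (addresses and names made up; newer MetalLB versions use CRDs like these, older ones used a ConfigMap):

  # Pool of IPs MetalLB may hand out on your network
  # (L2 mode also wants an L2Advertisement object)
  apiVersion: metallb.io/v1beta1
  kind: IPAddressPool
  metadata:
    name: default-pool
    namespace: metallb-system
  spec:
    addresses:
    - 192.168.1.240-192.168.1.250
  ---
  # Any ordinary Service of type LoadBalancer then gets an IP from the pool
  apiVersion: v1
  kind: Service
  metadata:
    name: web   # illustrative
  spec:
    type: LoadBalancer
    selector:
      app: web
    ports:
    - port: 80
      targetPort: 8080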

Meanwhile in AWS we have ECS, which works out of the box on Fargate, their EC2 hosts, or your own EC2 hosts; and on my own hardware I can easily get going (and dev) on Docker Swarm.


Kubernetes splits the control plane from the workloads. You have to run the kubelet on the individual nodes, but it's in charge of node-local orchestration; sure, you could maybe invent something thinner that was controlled with just SSH or something, but you can't really get away without some kind of agent on the node itself. Something has to start containers and monitor their status, for one.
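A concrete example of that node-local role: the kubelet can run "static pods" straight from manifest files on the node's own disk, no API server involved at all — which is also how kubeadm runs the control plane components themselves. A rough sketch (the file path is the conventional one; the pod itself is illustrative):

  # /etc/kubernetes/manifests/node-agent.yaml -- the kubelet watches this
  # directory, starts the pod itself, and restarts the container if it dies.
  apiVersion: v1
  kind: Pod
  metadata:
    name: node-agent        # illustrative
    namespace: kube-system
  spec:
    hostNetwork: true
    containers:
    - name: agent
      image: busybox:1.36
      command: ["sh", "-c", "while true; do date; sleep 60; done"]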

There are lots of production-quality load balancer implementations for Kubernetes, such as Traefik, as well as operators that configure cloud-native load balancers (e.g. the one that configures GLBs on GCP). I don't see this as missing functionality. The ingress story isn't great, but the available options (Nginx being a very solid example of something tried and tested) are good enough that it's more a problem of standardization, not implementation.
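And to the standardization point: the resource you actually write is the same regardless of controller; only the class differs. A minimal sketch (hostname and service name illustrative):

  apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    name: web
  spec:
    ingressClassName: nginx   # swap for traefik, gce, etc.
    rules:
    - host: app.example.com
      http:
        paths:
        - path: /
          pathType: Prefix
          backend:
            service:
              name: web
              port:
                number: 80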

Neither of those two points seems like a sufficient argument against using Kubernetes. Having used it for a few years now for all sorts of workloads, I would never go back to plain VMs, nor to something like ECS or Mesos. The only alternative I might entertain would be "serverless", but the available offerings don't seem to cover all the bases (e.g. batch jobs).


Yeah, I've used Traefik (originally on Mesos, though on Kubernetes as well). I mean, it's okay, but I don't get security groups and I have to build stuff. I'm old. If you make me build commodity pieces and your competitor doesn't, I'm going with the competitor.

ECS with Fargate behind ALBs, talking to RDS/SQS/ElastiCache, with Scheduled Tasks as my cron layer, is 99% of what we need without standing up a single host we have to maintain.
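(The Scheduled Tasks piece is just an EventBridge rule pointed at the cluster. A rough CloudFormation sketch — every name, schedule, and referenced resource here is illustrative:)

  NightlyJobRule:
    Type: AWS::Events::Rule
    Properties:
      ScheduleExpression: cron(0 3 * * ? *)   # 03:00 UTC daily
      Targets:
        - Id: nightly-job
          Arn: !GetAtt Cluster.Arn            # your ECS cluster
          RoleArn: !GetAtt EventsRole.Arn     # role allowed to call ecs:RunTask
          EcsParameters:
            TaskDefinitionArn: !Ref JobTaskDef
            LaunchType: FARGATE
            NetworkConfiguration:
              AwsVpcConfiguration:
                Subnets: [subnet-aaa111, subnet-bbb222]
                AssignPublicIp: DISABLED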

I put that in my calculator and it makes a happy face.


Fargate is actually one of the most expensive ways to run a container from a pure cloud cost standpoint.
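For a rough sense of the list-price arithmetic (us-east-1 on-demand rates, which change over time, so treat these as illustrative): Fargate bills roughly $0.0405 per vCPU-hour plus $0.0045 per GB-hour, so a 1 vCPU / 2 GB task runs about $0.049/hour, around $36/month. The equivalent slice of an m5.large (2 vCPU / 8 GB at roughly $0.096/hour) is about $0.048/hour — and that's before any reserved, savings plan, or spot discount, and before you pack the host with more than one task.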


Not if you take into account the fact that I don't have to maintain the hosts the jobs are running on (which is a not-inconsiderable amount of time once you get HIPAA and PCI into the mix). Liberal application of AWS Savings Plans (one year, no upfront) helps a lot too.

At scale I agree, however if you're a small company (or, in our case, 6 small companies) and don't have dedicated DevOps (or, in our case, have 1.5 DevOps people split 6 ways), it's fantastic.



