
I don't understand what the operational burden is. We literally do nothing to our K8s cluster, and it runs for many months until we make a new, updated cluster and blow away the old one. We've never had an issue attributed to K8s in the 2 years we have been running it in production. If we ever did, we'd just deploy a new cluster in minutes and switch over. Immutable infrastructure.
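Roughly, the flow looks like this. The tool and names here are just placeholders for illustration (eksctl, "cluster-v2", a k8s/ manifest directory); the exact provisioning tooling doesn't matter as long as the cluster definition and the workload manifests are versioned:

    # Stand up a fresh cluster from the same versioned definition
    # (eksctl is only an example; any provisioning tool works the same way).
    eksctl create cluster -f cluster-v2.yaml

    # Point kubectl at the new cluster and re-apply the same declarative state.
    kubectl config use-context cluster-v2
    kubectl apply -f k8s/

    # Verify, cut traffic over (DNS / load balancer), then delete the old cluster.
    kubectl get pods --all-namespaces
    eksctl delete cluster --name cluster-v1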

It is not like I haven't done it the "old" way. I spent many years doing hand deploys, writing deployers, running Ansible/Chef. It is just that we found we could never confidently update servers running many apps, because it would step on the other applications. So we'd just make new ones, test, and switch. That was not an easy process either. Plus we'd hit issues like someone forgetting to write a startup script, filling up /var with logs, or letting something eat all the memory. All of these operational problems are gone with K8s. I know what you are thinking: "well, you did it wrong". Yes, sometimes developers do things wrong. But in container/K8s land that wrong stuff is contained, and if you don't do things "right" you can't even run.
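To make the "contained" point concrete, here's a minimal, hypothetical Deployment. The names and numbers are made up, but the mechanism is the point: resource limits mean a memory leak OOM-kills that one pod instead of the node, logs go to stdout/stderr instead of someone's /var, and a liveness probe restarts a hung process instead of paging a human:

    kubectl apply -f - <<'EOF'
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: example-app            # placeholder name
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: example-app
      template:
        metadata:
          labels:
            app: example-app
        spec:
          containers:
          - name: example-app
            image: example.registry/example-app:1.0.0   # placeholder image
            resources:
              requests:
                memory: "256Mi"
                cpu: "250m"
              limits:
                memory: "512Mi"    # a leak OOM-kills this pod, not the whole node
                cpu: "500m"
            livenessProbe:         # a hung process gets restarted automatically
              httpGet:
                path: /healthz
                port: 8080
    EOF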

So we had operational issues the old way too. Now we have a universal platform where someone can ship their app anywhere and have it run the same. That is a huge win, all for no extra work.




Operational burden comes when you have to troubleshoot an issue. Simply deleting and recreating doesn't solve recurring problems.


I have had the same experience and the same journey: hand deploys, then configuration management, and all of that.


Why is your comment gray?



