
Kubernetes isn't even that complicated, and first-party support from cloud providers often means you're doing something in K8s in lieu of doing it in a cloud-specific way (like Ingress vs cloud-specific load balancer setups).
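
To make that concrete, here's a minimal sketch of the portable version (the app name and host are hypothetical): one Ingress manifest instead of a pile of provider-specific load balancer config, with the cloud's ingress controller wiring up the actual LB behind it.

  # Hypothetical app; any conformant ingress controller can satisfy this.
  apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    name: web
  spec:
    rules:
    - host: app.example.com
      http:
        paths:
        - path: /
          pathType: Prefix
          backend:
            service:
              name: web        # assumes an existing Service on port 80
              port:
                number: 80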

At a certain scale, K8s is the simple option.

I think much of the hate on HN comes from the "ruby on rails is all you need" crowd.




> Kubernetes isn't even that complicated

I’ve been struggling to square this sentiment with the usual complaints as well. I spend all day in AWS and k8s, and k8s is at least an order of magnitude simpler than AWS.

What are all the people who think operating k8s is too complicated operating on? Surely not AWS…


The thing you already know tends to be less complicated than the thing you don't know.


I think "k8s is complicated" and "AWS is even more complicated" can both be true.

Doing anything in AWS is like pulling teeth.


The sum is complex, especially with custom operators.


I guess the ones who quietly ship dozens of rails apps on k8s are too busy getting shit done to stop and share their boring opinions about pragmatically choosing the right tool for the job :)


"But you can run your rails app on a single host with embedded SQLite, K8s is unnecessary."


And there is truth to that. Most deployments are at that level, and it absolutely is way more performant than the alternative. It just comes with several tradeoffs... but those tradeoffs are usually worth it for deployments with <10k concurrent users. Which Figma certainly isn't.

You probably could still do it, but that's likely more trouble than it's worth.

(The 10k is just an arbitrary number I made up; there's no magic number that makes this approach unviable. It all depends on how users interact with the platform, how often, and where the data is inserted.)


I've been working with rails since 1.2 and I've never seen anyone actually do this. Every meaningful deployment I've seen uses postgres or mysql. (Or god forbid mongodb.) It takes very little time with your SQL statements before you outgrow SQLite.

You can run rails on a single host using a database on the same server. I've done it and it works just fine as long as you tune things correctly.


> as long as you tune things correctly

Can you elaborate?


I don't remember the exact details because it was a long time ago, but what I do remember is

- Limiting memory usage and the number of connections for MySQL

- Tracking the maximum memory size of the Rails application servers so you didn't run out of memory by running too many of them

- Avoiding unnecessarily memory-intensive code (pretty easy in Ruby if you know what you're doing)

- Avoiding gems unless they were worth the memory use

- Configuring the frontend webserver to start dropping connections before it ran out of memory (the threshold was pretty much a guess)

- Using the frontend webserver to handle traffic whenever possible (mostly redirects)

- Using iptables to block traffic before it hit the webserver

- Periodically checking memory use and turning off unnecessary services and cron jobs

I had the entire application running on a 512 MB VPS with roughly 70 MB to spare. That was a little less headroom than I wanted, but it worked.

Most of this was just rate limiting with extra steps. At the time rails couldn't use threads, so there was a hard limit on the number of concurrent tasks.

When the site went down it was due to rate limiting and not the server locking up. It was possible to ssh in and make firewall adjustments instead of a forced restart.


Thank you.


Always said by people who haven't spent much time in the cloud.

Because single hosts will always go down. Just a question of when.


I love k8s, but bringing back up a single app that crashed is a very different problem from "our k8s is down" - because if you think your k8s won't go down, you're in for a surprise.

You can also view a single k8s cluster as a single host, one that will go down at some point (e.g. a botched upgrade, a cloud network partition, or something similar). Much less frequent, but also much more difficult to get out of.

Of course, if you have a multi-cloud setup with automatic (and periodically tested!) app migration across clouds, well then... perhaps that's the answer nowadays. :)


> if you think your k8s won't go down, you're in for a surprise

Kubernetes is a remarkably reliable piece of software. I've administered a (large X) number of clusters, often with several years of lifetime each, everything upgraded through the relatively frequent Kubernetes release cycle. We definitely needed maintenance windows sometimes, but no, Kubernetes didn't unexpectedly crash on us. Maybe I just got lucky, who knows. The closest we ever got was the underlying etcd cluster hitting heartbeat timeouts due to insufficient hardware, and etcd healed itself once the nodes were reprovisioned.

There's definitely a whole lotta stuff in the Kubernetes ecosystem that isn't nearly as reliable, but that has to be differentiated from Kubernetes itself (and the internal etcd dependency).

> You can view a single k8s also as a single host, which will go down at some point (e.g. a botched upgrade, cloud network partition, or something similar)

The managed Kubernetes services solve the whole "botched upgrade" concern. etcd is designed to tolerate cloud network partitions and recover (a three-member cluster keeps quorum through the loss of any one member, five members through the loss of two).

Comparing this to sudden hardware loss on a single-VM app is, quite frankly, insane.


If you start using more esoteric features, the reliability of k8s goes down. Guess what happens when you enable the in-place vertical pod scaling feature gate?

It restarts every single container in the cluster at the same time: https://github.com/kubernetes/kubernetes/issues/122028
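
For reference, the gate in question is InPlacePodVerticalScaling (alpha as of 1.27, IIRC), and the per-container policy it enables looks roughly like this; the pod name and image here are illustrative, not from the issue above:

  # Requires: --feature-gates=InPlacePodVerticalScaling=true on the cluster
  apiVersion: v1
  kind: Pod
  metadata:
    name: resizable-demo       # hypothetical
  spec:
    containers:
    - name: app
      image: nginx             # placeholder image
      resizePolicy:            # how the kubelet reacts to a resource resize
      - resourceName: cpu
        restartPolicy: NotRequired
      - resourceName: memory
        restartPolicy: NotRequired
      resources:
        requests:
          cpu: 500m
          memory: 256Mi
        limits:
          cpu: "1"
          memory: 512Mi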

We have also found data races in the statefulset controller which only occur when you have thousands of statefulsets.

Overall, if you stay on the beaten path k8s reliability is good.


Even if your entire control plane disappears, your nodes will keep running, likely for long enough to build an entirely new cluster to flip over to.

I don’t get it either. It’s not hard at all.


Your nodes & containers keep running, but is your networking up when your control plane is down?


Agreed, we're a small team and we benefit greatly from managed k8s (EKS). I have to say the whole ecosystem just continues to improve as far as I can tell and the developer satisfaction is really high with it.

Personally I think k8s is where it's at now. The innovation and open source contributions are immense.

I'm glad we made the switch. I understand the frustrations of the past, but I think it was much harder to use 4+ years ago. Now, I don't see how anyone could mess it up so hard.


There are also a lot of cog-in-the-machine engineers here who totally do not get the bigger picture or the vantage point of another department.


> I think much of the hate on HN comes from the "ruby on rails is all you need" crowd.

Maybe; people seem really gung-ho about serverless solutions here too.


The hype for serverless cooled after that article about Prime Video dropping Lambda. No one wants a product that a company won’t dogfood. I realize Amazon probably uses Lambda elsewhere, but it was still a bad look.


I think it was much more about one specific use case of Lambda that was a bad fit for the Prime Video team’s needs, not a rejection of lambda/serverless in general. TBH, it kind of reflected more poorly on the team than on Lambda as a product.


> Amazon probably uses lambda elsewhere

Yes, you could say that. :)


Not probably; their Lambda service powers much of their control plane.


ruby on rails is all you need



