
> Had to do a lot of work to get node utilization ... higher than 50%

How is this the scheduler's fault? Isn't this just your resource requests being wildly off? Mapping directly to a "fly machine" just means your "fly machine" utilization will be low.




I think there’s a slight misunderstanding - I’m referring to how much of a Node is being used by the Pods running on it, not how much of each Pod’s compute is being used by the software inside it.

Even if my Pods were perfectly sized, a large share of the capacity on the VMs running them sat unused because the Pods were poorly distributed across the Nodes.
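
To make that concrete, here's a rough sketch of what I mean by node utilization (assuming the official kubernetes Python client and a working kubeconfig): sum the CPU requests of the Pods scheduled on each Node and compare against that Node's allocatable capacity.

  # Rough per-node utilization check: sum of Pod CPU requests vs. Node allocatable.
  # Assumes the `kubernetes` Python client and a working kubeconfig.
  from collections import defaultdict
  from kubernetes import client, config

  def parse_cpu(quantity: str) -> float:
      """Convert a Kubernetes CPU quantity ('500m', '2') to cores."""
      return float(quantity[:-1]) / 1000 if quantity.endswith("m") else float(quantity)

  def main() -> None:
      config.load_kube_config()
      v1 = client.CoreV1Api()

      # Total CPU requested by running Pods, grouped by the Node they landed on.
      requested = defaultdict(float)
      for pod in v1.list_pod_for_all_namespaces(field_selector="status.phase=Running").items:
          for container in pod.spec.containers:
              requests = (container.resources.requests or {}) if container.resources else {}
              if pod.spec.node_name and "cpu" in requests:
                  requested[pod.spec.node_name] += parse_cpu(requests["cpu"])

      # Compare against each Node's allocatable CPU.
      for node in v1.list_node().items:
          allocatable = parse_cpu(node.status.allocatable["cpu"])
          used = requested.get(node.metadata.name, 0.0)
          print(f"{node.metadata.name}: {used:.1f} / {allocatable:.1f} cores requested "
                f"({100 * used / allocatable:.0f}%)")

  if __name__ == "__main__":
      main()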


Is that really a problem in cloud environments where you would typically use a Cluster Autoscaler? GKE has an "optimize-utilization" autoscaling profile, or you could use a descheduler to bin-pack your nodes better.
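
By bin-packing I just mean scoring placements so Pods consolidate onto as few Nodes as possible (roughly what MostAllocated-style scoring goes for). A toy first-fit sketch of the idea, with made-up node sizes and requests, not the descheduler's actual algorithm:

  # Toy illustration of bin-packing vs. spreading: first-fit-decreasing placement.
  NODE_CPU = 4.0  # hypothetical allocatable cores per node

  def first_fit_decreasing(pod_requests: list[float]) -> list[list[float]]:
      """Pack pod CPU requests onto as few nodes as possible."""
      nodes: list[list[float]] = []
      for req in sorted(pod_requests, reverse=True):
          for node in nodes:
              if sum(node) + req <= NODE_CPU:
                  node.append(req)
                  break
          else:
              nodes.append([req])  # no existing node fits; add a new one
      return nodes

  pods = [2.0, 1.5, 0.5, 0.5, 1.0, 2.5]  # hypothetical CPU requests in cores
  packed = first_fit_decreasing(pods)
  print(f"{len(packed)} nodes used:", packed)
  # A spread-biased scheduler could leave these same pods on 4-5 half-empty nodes.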


DX might be better, I suppose, since you don't have to fiddle with node sizing, cluster autoscalers, etc.

Someone else linked GKE Autopilot, which manages all of that for you. So if you're already on GKE I don't see much improvement here, since with Fly you lose out on k8s features like persistent volumes and DaemonSets.




