
Funny you should mention that. I now run a large vSphere cluster (that I inherited) -- large meaning several hundred VMs. Live VM migration is a different beast because it happens totally transparently; from the guest's perspective, there is no disruption at all. On k8s, pods are recycled all the time, and afaik there is no "live" migration of pods that doesn't involve killing and restarting the process. k8s's "vaporize the pod first" culture is basically the opposite of enterprise-grade hypervisors, which exist in large part to minimize incidents that would require the destruction of state, even in the face of hardware failure.
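
To make that concrete, the closest thing k8s has to "migrating" work off a node is a drain, which is just eviction plus rescheduling (node name hypothetical):

    # Evict everything on node-1; each pod's process is killed here,
    # and its controller schedules a *new* pod on some other node.
    kubectl drain node-1 --ignore-daemonsets --delete-emptydir-data

    # Watch the replacements come up elsewhere.
    kubectl get pods -o wide -w

Contrast with vMotion, where the same running guest, memory and all, simply resumes on another host.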



True enough, though I would posit that k8s’s strategy (no live migration) makes sense if you assume that you’re running k8s on top of a VM cluster that has its own live migration, such that you’ll never need to issue an API call to the k8s control plane for hardware-related reasons. In such cases, the only time you’re doing a `kubectl apply` is for release-management reasons—and it’s nearly impossible, in the general case, to automatically compute a “live migration” between e.g. two different versions of a deployment whose architectures are shaped differently.
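
For illustration, a routine release-management change (deployment, container, and image names all made up) is a rolling replacement, not a migration:

    # Bump the image; k8s starts new pods, then terminates and deletes
    # the old ones. No process state moves from old pod to new.
    kubectl set image deployment/web web=registry.example.com/web:v2
    kubectl rollout status deployment/web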

(It’s not impossible in specific cases, mind you. I’m still waiting on tenterhooks for the moment someone introduces an Erlang-node operator where you can apply hot-migration relups through k8s itself.)
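
Purely to sketch the shape of that idea (no such operator or CRD exists; every name below is invented):

    # Imagined custom resource for a hypothetical Erlang-node operator
    # that applies a relup to the running BEAM nodes in place,
    # instead of killing and restarting the pods.
    apiVersion: erlang.example.invalid/v1alpha1
    kind: ErlangRelease
    metadata:
      name: myapp
    spec:
      targetVersion: "2.1.0"
      strategy: Relup   # hot-upgrade via the release's appup/relup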




