
I wonder how it copes with things like anti-affinity rules, where you don't want two things running on the same physical / virtual server for resilience reasons.



You wouldn’t use affinity rules anymore. The pods are all scheduled onto a single virtual-kubelet node, so if you used anti-affinity, scheduling would fail.


> You wouldn’t use affinity rules anymore

Point being: what if I wanted to do this? How could I make sure services were running according to the anti-affinity rules I provided? E.g. not on the same physical machine; not on the same VM; not in the same datacentre; not in the same region; etc.
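(For reference, in standard Kubernetes each of those levels is expressed with `podAntiAffinity` plus a `topologyKey`; a minimal sketch, with illustrative names — `my-service` is a placeholder:)

```yaml
# Sketch only: pick the topologyKey matching the level of spread you want.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: my-service
              # kubernetes.io/hostname        -> not on the same node
              # topology.kubernetes.io/zone   -> not in the same zone/datacentre
              # topology.kubernetes.io/region -> not in the same region
              topologyKey: kubernetes.io/hostname
```

On FKS, as discussed above, a hard rule like this would fail to schedule past the first replica, since every pod lands on the one virtual node.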


If there were a virtual kubelet per unit of granularity (datacenter, in their case?) then you would be able to use affinity rules just fine.


Right. Though the virtual-kubelets can actually be running on the same machine. They just need to be configured with different node names.

The press release states that your k8s API is actually running on a single machine with k3s and a virtual-kubelet. So, I’m not sure if it’s one “cluster” per region, or one “cluster” with multiple virtual-kubelets for regions.

Either way, your FKS cluster control-plane would sit in a single region.


How do you forbid running two instances of the same service on one node without anti-affinity?


Traditionally, each node is its own machine. virtual-kubelet creates a virtual node that is a proxy to some other pod infrastructure. In the case of FKS, each pod on the virtual node is a machine (a node in the traditional sense), so it’s equivalent to having an anti-affinity rule on all pods with an infinite node pool.
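(In plain k8s terms, the behaviour described above is roughly what you’d get if every pod implicitly carried a hard hostname anti-affinity against all other pods, on a cluster that never runs out of nodes — an illustrative sketch, not actual FKS config:)

```yaml
# Illustrative only: the scheduling behaviour FKS provides implicitly,
# written out as the anti-affinity rule it is equivalent to.
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector: {}   # empty selector: matches every other pod
        topologyKey: kubernetes.io/hostname
```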





