I wonder how it copes with things like anti-affinity rules, where you don't want two things running on the same physical / virtual server for resilience reasons.
Point being: what if I wanted to do this? How could I ensure services were running according to the anti-affinity rules I provided? E.g. not on the same physical machine; not on the same VM; not in the same datacentre; not in the same region; etc.
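In stock Kubernetes you'd express that with podAntiAffinity plus topologySpreadConstraints keyed on the well-known node labels; whether FKS's virtual nodes expose those labels is the open question. Rough sketch, untested, with made-up names (and note there's no built-in label distinguishing a physical machine from a VM, so you'd have to label nodes for that yourself):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service            # hypothetical name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - topologyKey: kubernetes.io/hostname        # never two replicas on the same node
              labelSelector:
                matchLabels:
                  app: my-service
      topologySpreadConstraints:
        - topologyKey: topology.kubernetes.io/zone       # spread across zones / datacentres
          maxSkew: 1
          whenUnsatisfiable: DoNotSchedule
          labelSelector:
            matchLabels:
              app: my-service
        - topologyKey: topology.kubernetes.io/region     # spread across regions
          maxSkew: 1
          whenUnsatisfiable: DoNotSchedule
          labelSelector:
            matchLabels:
              app: my-service
      containers:
        - name: app
          image: nginx                                   # placeholder image
```

All of that only does what you want if the scheduler actually sees multiple nodes carrying those topology labels, which is exactly where the virtual-kubelet setup gets interesting.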
Right. Though the virtual-kubelets can actually be running on the same machine. They just need to be configured with different node names.
The press release states that your k8s API is actually running on a single machine with k3s and a virtual-kubelet. So, I’m not sure if it’s one “cluster” per region, or one “cluster” with multiple virtual-kubelets, one per region.
Either way, your FKS cluster control-plane would sit in a single region.
Traditionally, each node is its own machine. virtual-kubelet creates a virtual node that is a proxy to some other pod infrastructure. In the case of FKS, each pod on the virtual node is a machine (a node in the traditional sense), so it’s the equivalent of having an anti-affinity rule on all pods with an infinite node pool.
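To make the “virtual node” part concrete: what the virtual-kubelet registers is just a Node object in the API, advertising whatever topology labels and capacity its provider claims, usually with a taint pods have to tolerate. Something like the sketch below, which is purely illustrative (the node name, labels and numbers are invented, not real FKS output):

```yaml
# Roughly what `kubectl get node <virtual-node> -o yaml` might show.
apiVersion: v1
kind: Node
metadata:
  name: fks-iad                          # hypothetical: one virtual node per region
  labels:
    type: virtual-kubelet
    topology.kubernetes.io/region: iad   # hypothetical region label
spec:
  taints:
    - key: virtual-kubelet.io/provider   # virtual-kubelet's default taint key
      value: fly
      effect: NoSchedule
status:
  capacity:
    cpu: "1000"                          # the provider just advertises big numbers;
    pods: "5000"                         # the machine pool behind it is effectively unbounded
```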