I used KubeVirt for a while in my homelab for my NetBSD/Tribblix/OmniOS VMs, but in the end decided to switch over to HashiCorp's Nomad for that purpose.
KubeVirt was doing alright, but it always felt like it had too many bells and whistles. Networking was an issue too, at least for me.
In the end Nomad took over and I had a great experience with it. Consul + Nomad is an amazing synergy. Everything just works, but you still have control over everything, and your installation does not feel like a black box that just tells you to eat dirt sometimes. Deploying VMs with Nomad and Consul agents inside, and running zones [0] on those VMs, was not trivial (mostly networking issues again), but it was actually fun to do, which I guess is the reason I stuck with Nomad.
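For reference, here is a minimal sketch of what such a VM job can look like with Nomad's qemu task driver. The image name, URL, and resource numbers are made up, and the tricky parts (guest networking and the Nomad/Consul agents inside the VM) are deliberately left out:

```hcl
job "omnios-vm" {
  datacenters = ["dc1"]

  group "vm" {
    task "omnios" {
      driver = "qemu"

      # Fetch the disk image into the task directory.
      artifact {
        source = "https://images.example.internal/omnios.qcow2" # placeholder URL
      }

      config {
        image_path  = "local/omnios.qcow2"
        accelerator = "kvm"
        args        = ["-nographic"] # extra qemu flags go here
      }

      resources {
        cpu    = 2000 # MHz
        memory = 4096 # MB
      }

      # Register the VM in Consul so other jobs can discover it.
      service {
        name = "omnios-vm"
      }
    }
  }
}
```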
One thing Nomad is missing is a better CSI implementation. They are getting better at it, but back then it was an issue. Documentation in general is not that great compared to k8s; thankfully there is a Gitter community that is very active and dedicated. Still, not having to mess with piles and piles of YAML feels great.
Currently I have downsized my homelab to a few droplets and am running mostly Podman workloads, but I would recommend Nomad to anyone who wants to have fun.
tl;dr: KubeVirt is a great product, but Nomad does the same thing and is fun to work with.
Yeah, you are right. Live migration is not something I ever tried to use or implement with Nomad. But I would still say that Nomad will get you 80% of the way without much struggle. Hopefully this feature comes in the near future.
Of course, but HCL, IMO, is a much better alternative compared to how YAML is used in k8s.
With Nomad, the resulting HCL code, even in its unrendered state, is much more readable than most k8s manifests.
With k8s, YAML at some point becomes cumbersome and clunky to use. There might be some manifest-writing practices I am unaware of that alleviate these issues, but at the moment I would prefer working with HCL over YAML.
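As a rough illustration of what I mean (a hypothetical service job, with placeholder names and image), a complete Nomad job can stay fairly compact and readable in a single HCL file:

```hcl
# Hypothetical example: a small web service as one Nomad job file.
job "web" {
  datacenters = ["dc1"]

  group "app" {
    count = 2

    network {
      port "http" { to = 8080 }
    }

    task "server" {
      driver = "docker"

      config {
        image = "registry.example.internal/web:1.0" # placeholder image
        ports = ["http"]
      }

      resources {
        cpu    = 500 # MHz
        memory = 256 # MB
      }

      # Consul service registration and health check live right next to the task.
      service {
        name = "web"
        port = "http"

        check {
          type     = "http"
          path     = "/health"
          interval = "10s"
          timeout  = "2s"
        }
      }
    }
  }
}
```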
And regarding Consul: k8s and Consul both use Go templating, which in general feels OK.
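For example, Nomad's template block uses consul-template's Go template syntax to render config from Consul's service catalog. A minimal sketch (the "web" service name and the nginx image are made up):

```hcl
job "proxy" {
  datacenters = ["dc1"]

  group "proxy" {
    task "nginx" {
      driver = "docker"

      config {
        image = "nginx:1.25" # illustrative
      }

      # Rendered with consul-template's Go templating; Nomad re-renders the
      # file and restarts the task whenever the "web" service changes in Consul.
      template {
        data = <<EOH
upstream web {
{{- range service "web" }}
  server {{ .Address }}:{{ .Port }};
{{- end }}
}
EOH
        destination = "local/upstream.conf"
        change_mode = "restart"
      }
    }
  }
}
```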
I think KubeVirt fits better in an environment that is already running Kubernetes. We have found it useful for running some of our heavyweight CI jobs that have virtualization requirements because we can manage the VMs in the same way that we manage our containers.
In general, though, we have only used KubeVirt for ephemeral workloads.
In the end you still need infrastructure to run your Kubernetes cluster on. I haven't used it yet, but as I understand it, it just gives you the Kubernetes experience for provisioning VMs.
It seems like the VMs actually run inside a container (KubeVirt starts each VM as a QEMU/KVM process inside a pod), which in the end runs on a Kubernetes node.
Wow Rancher is just a juggernaut. I'm always impressed by them. I would be curious to hear your or anyone else's experience using Harvester and/or K3OS.
One of the things I am excited about is microVM orchestration with Kubernetes. Weaveworks has a really cool project in that realm [1].
[1] https://github.com/weaveworks/ignite