Hacker News

Oh my god, thank you. I've been trying to figure out why my VM-to-VM bandwidth is capped at 30 Gbit/s. I'm using multi-threaded iperf to benchmark, so it doesn't seem to be a data-generation or consumption bottleneck. I'm going to have to do a bit more experimenting.

If both VMs are on the same host, is there any way to essentially achieve RDMA? VM1 says to VM2, "It's in memory at this location", and VM2 just reads directly from that memory location without a copy by the CPU?

I'm no expert, obviously, but I fail to see why VM-to-VM memory operations should be slower than ordinary RAM accesses, aside from some added latency to set up the operation.




There is something like this for QEMU on Linux hosts. It's called "Inter VM SHared MEMory" (IVSHMEM) [1].
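For reference, a minimal ivshmem setup looks roughly like this (the backend id, size, and backing path here are illustrative; both VMs must be launched against the same backing file so they map the same pages):

```shell
# Host side, added to the qemu command line of EACH of the two VMs:
# a 64 MiB region backed by a shared file, exposed to the guest as a
# PCI device via ivshmem-plain.
qemu-system-x86_64 \
  -object memory-backend-file,id=hostmem,size=64M,mem-path=/dev/shm/ivshmem,share=on \
  -device ivshmem-plain,memdev=hostmem \
  ...
```

Inside each guest the region shows up as a PCI device whose BAR2 is the shared memory, so writes from one VM are immediately visible to the other without any copy through the host CPU's network stack.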

Hyper-V has something akin to this that they call "Hyper-V sockets" [2]. But it seems it only works between guest and host.

[1] https://www.qemu.org/docs/master/system/devices/ivshmem.html

[2] https://learn.microsoft.com/en-us/virtualization/hyper-v-on-...


I think it should be doable, because my BIOS has an option to enable/disable RDMA under SR-IOV. I've not tried messing with it yet though.



