
That wasn't always the case. I remember times when PV could be faster on a lot of forks. However, you can't compare PV/HVM vs. KVM vs. Hyper-V directly, since the difference between PV and HVM comes from a completely different aspect than the differences between HVM, KVM and Hyper-V. Also note that Hyper-V got way better in Windows Server 2012.

Also note that fork() involves "lots of" I/O work, which could also be a driver problem, and drivers have evolved in newer kernels too. And fork() can behave differently on hosts with ballooning versus hosts without, especially on KVM.

Edit: As of today we've switched to HVM instances (though we're on many, many micros, so the performance boost is essentially zero). On our in-house hardware we alternate between KVM and VMware, but I dislike VMware more and more: vCenter is just unusable (Flash) and consumes a shitload of memory, so mostly we go through VMware API calls, which makes VMware somewhat unnecessary anyway. We also sell our software as an appliance with a XenServer host, and we've tested it on all of them; we're mostly disk-I/O driven, and that barely changes between hypervisors. There are some aspects that could improve performance, but they depend more on how you configure your virtual disk.
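To make the "disk-I/O driven" point concrete, here's a minimal sketch of the kind of measurement that matters in that case: a small direct-I/O write timer you could run against the same virtual disk under different hypervisors or disk configurations. This is not our actual test suite; the block size, total size and file name are arbitrary assumptions.

    /* direct-I/O write timer: O_DIRECT bypasses the guest page cache so
     * the hypervisor's virtual-disk path dominates the measurement.
     * Note: O_DIRECT is not supported on every filesystem (e.g. tmpfs). */
    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>
    #include <unistd.h>

    int main(void)
    {
        const size_t block = 4096;   /* one 4 KiB block per write */
        const int count = 25600;     /* ~100 MiB total, arbitrary */
        void *buf;
        if (posix_memalign(&buf, block, block) != 0) return 1;
        memset(buf, 0xAB, block);

        int fd = open("testfile.bin",
                      O_WRONLY | O_CREAT | O_TRUNC | O_DIRECT, 0644);
        if (fd < 0) { perror("open"); return 1; }

        struct timespec start, end;
        clock_gettime(CLOCK_MONOTONIC, &start);
        for (int i = 0; i < count; i++) {
            if (write(fd, buf, block) != (ssize_t)block) {
                perror("write");
                return 1;
            }
        }
        fsync(fd);
        clock_gettime(CLOCK_MONOTONIC, &end);
        close(fd);

        double elapsed = (end.tv_sec - start.tv_sec)
                       + (end.tv_nsec - start.tv_nsec) / 1e9;
        double mib = count * block / 1048576.0;
        printf("wrote %.1f MiB in %.2f s (%.1f MiB/s)\n",
               mib, elapsed, mib / elapsed);
        free(buf);
        return 0;
    }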




>That wasn't always the case. I remember times when PV could be faster on a lot of forks.

32-bit, or 64-bit before the virtualization hardware extensions (VT-x/VT-d, EPT, and the AMD equivalents), are really the only cases where this should have been true. Each time you make a fork() system call on 64-bit PV, you're making an extra context switch and flushing the TLB, which an HVM instance does not need to do.
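For anyone wanting to see that overhead for themselves, a minimal sketch of a fork() microbenchmark follows: run the same binary in a PV guest and an HVM guest on the same hardware and compare the per-fork latency. The iteration count and timing details are my own arbitrary choices, not something from this thread.

    /* fork() microbenchmark: fork and reap a trivial child in a loop,
     * report the average time per fork(). */
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void)
    {
        const int iterations = 10000;   /* arbitrary choice */
        struct timespec start, end;

        clock_gettime(CLOCK_MONOTONIC, &start);
        for (int i = 0; i < iterations; i++) {
            pid_t pid = fork();
            if (pid == 0) {
                _exit(0);               /* child exits immediately */
            } else if (pid > 0) {
                waitpid(pid, NULL, 0);  /* reap the child */
            } else {
                perror("fork");
                return 1;
            }
        }
        clock_gettime(CLOCK_MONOTONIC, &end);

        double elapsed = (end.tv_sec - start.tv_sec)
                       + (end.tv_nsec - start.tv_nsec) / 1e9;
        printf("%d forks in %.3f s (%.1f us per fork)\n",
               iterations, elapsed, elapsed / iterations * 1e6);
        return 0;
    }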



