VMware vs. bhyve Performance Comparison (b3n.org)
62 points by adamnemecek on Aug 21, 2015 | 32 comments



VMWare's EULAs prohibit publishing of benchmarks without approval, so this is interesting to see...

https://www.vmware.com/pdf/benchmarking_approval_process.pdf


I have a hard time understanding the reasoning for this. What argument do they have as to why people shouldn't know about the performance?


As wila noted, benchmarks are easy to get wrong. It also helps to remember that VMware has been in the enterprise space for a long time and a lot of people in that field have been burned by benchmarketing where a competitor would run an ad campaign hoping nobody would notice that their big performance lead was due to misconfiguration[1].

Even now, when it's easier to check the details, it's fairly common to see things like the HTTP/2 demo from yesterday (https://news.ycombinator.com/item?id=10091689), which overstated gains by misconfiguring the HTTP/1.1 server used for comparison. That wasn't too bad because both were their own servers, but if you imagine they were comparing their shiny new server to e.g. Apache or IIS that way, you'd have the average late-90s web server sales pitch.

I don't like these terms but it's easy to understand why they became so common.

1. Stuff like different defaults for fsync, “accidentally” using different block sizes on a RAID array / filesystem / NFS mount, forgetting to use the same write-back/through settings on both, etc.
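
To make the footnote concrete, here is a minimal sketch (mine, not from the thread) of how a single mismatched setting, in this case syncing every write versus letting the page cache absorb them, can swing a "storage benchmark" far more than the systems under test. The paths and sizes are arbitrary.

    # Two "benchmarks" of the same disk that differ only in whether each write is
    # fsync'd -- the kind of setting that, if it silently differs between two
    # setups, dominates the result. Paths and sizes are arbitrary.
    import os, time

    def write_test(path, total_mb=64, block_kb=128, sync_each_write=False):
        block = b"\0" * (block_kb * 1024)
        writes = (total_mb * 1024) // block_kb
        fd = os.open(path, os.O_CREAT | os.O_WRONLY | os.O_TRUNC)
        start = time.perf_counter()
        for _ in range(writes):
            os.write(fd, block)
            if sync_each_write:
                os.fsync(fd)        # the setting people "forget" to match
        os.fsync(fd)                # flush once at the end either way
        os.close(fd)
        return total_mb / (time.perf_counter() - start)   # MB/s

    if __name__ == "__main__":
        print("buffered writes : %7.1f MB/s" % write_test("/tmp/bench_a.bin"))
        print("fsync per write : %7.1f MB/s" % write_test("/tmp/bench_b.bin", sync_each_write=True))
        for p in ("/tmp/bench_a.bin", "/tmp/bench_b.bin"):
            os.remove(p)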


The reasoning is that it is so easy to make mistakes in performance testing and then draw the wrong conclusion.

As somebody else mentions, this is testing NFSv3 in the VMware setup against local disk in bhyve. While still interesting, it certainly is not a 1 on 1 comparison.

edit: to expand a bit on that.

- the VMware storage appliance gets 24 GB; the bhyve config isn't mentioned, but the host OS can access 32 GB.

- as he does mention, his test load fits in the ARC; again interesting, but normally you would test real-world loads where the workload does not completely fit in the cache (see the sketch below).
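
A minimal sketch of that last point, assuming a FreeBSD/FreeNAS host where "sysctl -n vfs.zfs.arc_max" reports the ARC ceiling in bytes (the sysctl name and the 4x margin are my assumptions, not from the article): size the benchmark's working set well past the ARC so reads actually have to hit disk.

    # Pick a benchmark working-set size that will not fit in the ZFS ARC,
    # so the test measures disks rather than RAM.
    import subprocess

    def arc_max_bytes():
        out = subprocess.check_output(["sysctl", "-n", "vfs.zfs.arc_max"])
        return int(out.strip())

    def suggested_test_size(multiple=4):
        # 4x the ARC is an arbitrary margin; the point is simply "bigger than the cache".
        return arc_max_bytes() * multiple

    if __name__ == "__main__":
        print("ARC max:            %.1f GiB" % (arc_max_bytes() / 2**30))
        print("Suggested test set: %.1f GiB" % (suggested_test_size() / 2**30))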


It's true that a good benchmark is extremely hard to get right, especially for complex tools like these or databases. But I honestly think that's the secondary reason, the primary one being stopping someone from truly evaluating their products. Essentially you will then have to rely on sales reps and some hearsay from other users, and you are less likely to find someone who has actually worked with multiple comparable products in similar use cases.


Sure, but anybody can say that. "We'd rather you not publish your review of our product; it's so easy for the reviewer to make a mistake and say something negative about our product. So you are not allowed to publish any examination of our product."


You ARE allowed to post test results, but they like to review your tests to make sure you didn't make any obvious mistakes.

Quote from the .pdf file higher up in this thread: "You may use the Software to conduct internal performance testing and benchmarking studies, the results of which you (and not unauthorized third parties) may publish or publicly disseminate; provided that VMware has reviewed and approved of the methodology, assumptions and other parameters of the study. Please contact VMware at benchmark@vmware.com to request such review."

For example, here's a review from storagereview.com on VSAN:

http://www.storagereview.com/vmware_virtual_san_review_overv...


"You may publish results which make us look good."


I don't think so, not unless your testing has obvious flaws.


There is clearly a conflict of interest if the producer is the one who gets to decide if your testing has flaws or not. There would be nothing but their own integrity preventing them from simply labeling any test with poor results as being "flawed".


Very similar reasons to why you're not allowed to publish benchmarks for Oracle products. It's just plain bad PR at some point if someone does a terrible job. On the other hand, it also means that if you're an unscrupulous vendor, you can just squash benchmarks that do not make you look better than your competition. The general lack of information is what is frustrating for many folks, though, even if what does circulate is completely anecdotal.


Because VMware is generally quite slow. However, it is very easy to manage large estates of machines using the out-of-the-box tools.

Unlike Xen or KVM


oVirt would like a word with you.


and proxmox


Just as almost all commercial DBMS vendors do.


How is that enforceable?


They can revoke your license to use the product. You break the EULA, you are no longer licensed to run vmware.


I published a few benchmarks in the past; a product manager and many senior technical people at VMware read them and nobody complained. I don't think they even know about this EULA.


So, the benchmark shows that NFS is slower than direct disk access. Is anyone surprised?

The bit about VMware and bHyve is irrelevant here. If you wanted to compare the hypervisors' performance, you would do it by using a local disk in VMware, or by using NFS disks from bHyve. As it is, it looks like someone just really wanted an opportunity to advertise bHyve.

Disclaimer: I use bHyve myself, and I think it's pretty good for what it does.


...but also much faster than normal/native shared folders: http://www.midwesternmac.com/blogs/jeff-geerling/vagrant-vmw...

NFS can be useful in many scenarios, but using a native FS (or using rsync to sync instead of mounting a shared folder) is always going to win that battle.
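
As a rough illustration of that point (not from either article), timing a walk over the same tree of small files through a mounted share versus a local rsync'd copy makes the gap obvious. /vagrant is Vagrant's default shared-folder mount; the local path is a placeholder.

    # Read every file under two roots and report throughput. Expect the locally
    # synced copy to win by a wide margin over the mounted share, especially with
    # many small files. Cache effects matter, so run each a couple of times.
    import os, time

    def read_tree(root):
        start = time.perf_counter()
        total = 0
        for dirpath, _, files in os.walk(root):
            for name in files:
                with open(os.path.join(dirpath, name), "rb") as f:
                    total += len(f.read())
        return total, time.perf_counter() - start

    if __name__ == "__main__":
        # First path: the shared-folder mount; second: where rsync put the copy (placeholder).
        for label, path in (("shared mount", "/vagrant"), ("local rsync copy", "/srv/app")):
            nbytes, secs = read_tree(path)
            print("%-17s %8.1f MiB in %.2fs" % (label, nbytes / 2**20, secs))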


Very interesting. I hope to see xhyve on OS X get more adoption from tools: https://github.com/mist64/xhyve


I would suggest that this is apples to oranges.

When you have VMware vCenter, the power is not in the inherent speed (it's not the fastest), it's the simplicity of creating a cluster.

If you have a network file store, it's almost always going to be slower than local storage (assuming the same hardware), but that's not the point. Using a network file system / block storage allows you to have HA disks (or load balancing, or much larger storage, or wider performance).

With pNFS you can have active-active with n servers (assuming you're using NetApp (yes, they are still relevant) or GPFS).

As for vCenter, creating an HA cluster with ten hosts takes about 3 hours from scratch (assuming you install all ten servers at once). That's using the default tools, and the only thing that's scripted would be the install. (Also using the vCenter appliance, not the shitty Windows thing.)

KVM, whilst much faster, is so very much harder to set up, monitor and operate. Yes, you can use config management, but with VMware you don't need it. (Cough, ignoring machine profiles.)
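
For contrast, here is a small sketch of the kind of glue you end up writing (or pulling from a config-management stack) on the KVM side, where vCenter gives you the same view out of the box. It assumes the libvirt Python bindings and a local qemu:///system connection, and is a generic example rather than anything from the thread.

    # List KVM guests with their state, vCPU count and memory via libvirt.
    import libvirt

    STATE_NAMES = {
        libvirt.VIR_DOMAIN_RUNNING: "running",
        libvirt.VIR_DOMAIN_PAUSED:  "paused",
        libvirt.VIR_DOMAIN_SHUTOFF: "shut off",
    }

    def list_guests(uri="qemu:///system"):
        conn = libvirt.open(uri)
        try:
            for dom in conn.listAllDomains():
                # info() -> [state, maxMem KiB, mem KiB, nrVirtCpu, cpuTime ns]
                state, max_mem_kib, _, vcpus, _ = dom.info()
                print("%-20s %-9s %2d vCPU %6d MiB"
                      % (dom.name(), STATE_NAMES.get(state, "other"), vcpus, max_mem_kib // 1024))
        finally:
            conn.close()

    if __name__ == "__main__":
        list_guests()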


> the power is not in the inherent speed (it's not the fastest)

You compared the speed yourself in the sentence immediately following "apples to oranges", so I assume it's actually "apples to apples", but your point is that VMware is better despite being slower.


> KVM whilst much faster is so very much harder to set up, monitor and operate.

At the risk of repeating myself: oVirt would like a word with you.


I'd like to also see Linux KVM vs ESXi vs Bhyve benchmarks, but excellent article! Good job pissing all over their EULA! :-)

Arguably where VMWare shines is management tools and so on, not exactly performance - this is kind of widely known, anecdotally at least and probably the number one reason they forbid benchmarks in the EULA. They know they're going to suck. :-)

Big corps with big money don't care that much about performance, they're more interested in ease of use, interoperability, SLAs, support, integration with other enterprise crap etc.


At VMware, I was working on ESX performance for a while, and I can say that performance is taken very seriously. Major features have been pushed back to the drawing board when their performance was not up to the mark. There is a tight handle on ESX's performance.

Also, if I may say so, ESXi is one of the most well-written pieces of system software to date.


As a former VMware employee I would agree with your sentiment, excellent hypervisor.


> Arguably where VMWare shines is management tools and so on, not exactly performance - this is kind of widely known, anecdotally at least and probably the number one reason they forbid benchmarks in the EULA.

Or maybe because people would publish "benchmarks" like this one, where the storage performance of a third-party NFS solution (FreeNAS) running as a VMware VM is compared against a solution using local disks, and claim that is a useful comparison…


> Big corps with big money don't care that much about performance

That's just not true. In fact, they often care more than small companies because of how expensive it is to get, say, 10% more performance out of a 4- or 5-Nines production environment.


Ironic that I just posted a VMware vs VirtualBox performance comparison earlier this morning: http://www.midwesternmac.com/blogs/jeff-geerling/vagrant-vmw...


That was a pretty good benchmark article, thanks!


Very interesting.



