Hacker News

The reasoning is that it is so easy to make mistakes in performance testing and then draw the wrong conclusion.

As somebody else mentions, this is testing NFSv3 in the VMware setup against local disk in bhyve. While still interesting, it is certainly not a one-to-one comparison.

edit: to expand a bit on that.

- the VMware storage appliance gets 24 GB; the bhyve config isn't mentioned, but the host OS can access 32 GB.

- as he does mention, his test load fits in the ARC (ZFS's in-memory cache). Again interesting, but normally you would test real-world loads where the working set does not fit completely in the cache.
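One common way to avoid the fully-cached pitfall described above is to size the benchmark's working set larger than the available cache, so reads must actually reach the disks. A minimal fio job sketch, assuming a 64 GB working set against the 24-32 GB of RAM mentioned above (the directory path and sizes are hypothetical, not from the article):

```ini
# Hypothetical fio job: the 64 GB working set deliberately exceeds
# the 24-32 GB of RAM available to the storage stack, so the ARC
# cannot hold it all and random reads must hit the disks.
[global]
# path on the ZFS dataset under test (assumption)
directory=/tank/benchmark
# working set size, chosen to be larger than host RAM
size=64g
# bypass the page cache where the platform supports O_DIRECT
direct=1
runtime=300
time_based=1
ioengine=posixaio

[random-read]
rw=randread
bs=8k
iodepth=16
```

Running the same job twice and comparing results is a quick sanity check: if the second run is dramatically faster, the working set was small enough to be cached and the numbers mostly measure RAM, not storage.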

It's true that a good benchmark is extremely hard to get right, especially for complex systems like these or databases. But I honestly think that's the secondary reason; the primary one is stopping someone from truly evaluating their products. Essentially you will then have to rely on sales reps and hearsay from other users, and you are less likely to find someone who has actually worked with multiple comparable products in similar use cases.

Sure, but anybody can say that. "We'd rather you not publish your review of our product; it's so easy for a reviewer to make a mistake and say something negative about it. So you are not allowed to publish any examination of our product."

You ARE allowed to post test results, but they want to review your tests first to make sure you didn't make any obvious mistakes.

Quote from the .pdf file higher up in this thread: <quote> You may use the Software to conduct internal performance testing and benchmarking studies, the results of which you (and not unauthorized third parties) may publish or publicly disseminate; provided that VMware has reviewed and approved of the methodology, assumptions and other parameters of the study. Please contact VMware at benchmark@vmware.com to request such review. </quote>

For example, here's a review of VSAN from storagereview.com


"You may publish results which make us look good."

I don't think so; not if your testing has obvious flaws.

There is clearly a conflict of interest if the producer is the one who gets to decide if your testing has flaws or not. There would be nothing but their own integrity preventing them from simply labeling any test with poor results as being "flawed".
