
Benchmarking: Do it right or don't do it at all - manigandham
https://www.mongodb.com/blog/post/benchmarking-do-it-right-or-dont-do-it-at-all
======
brootstrap
I'm sure Mongo has some good use cases, but so far at my company we have
evaluated and tried Mongo for a few things and never really made it very far.
This article seems kind of petty to me, honestly. This is the same MongoDB
that by default left databases open, and god knows how many instances out
there have been hacked.

The only thing I've ever gotten from Mongo is a bunch of marketing BS,
honestly. We have worked with some of their sales people and technical people
to try and figure solutions out. It was 100% marketing BS. No thanks for me;
best of luck to you guys.

~~~
PeterZaitsev
I do not think the OnGres benchmark was particularly fair, but I think
MongoDB's response was even worse. While OnGres published all their benchmark
scripts, which allowed MongoDB to at least perform their analysis, MongoDB did
not publish the source code of the changes that let them hit much better
numbers, nor the specific tuning they used for MongoDB.

------
kevincox
The title is a bit rich coming from MongoDB, which appears to have gained most
of its initial popularity from very biased benchmarks. (I recall some that
didn't even wait for the server to ack a write before counting it a success.
They were basically benchmarking the client library.)

However, their analysis does seem fair: this benchmark was very biased.
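
To illustrate the flaw described above, here is a minimal, self-contained sketch (a toy simulation, not real MongoDB client code) of why a fire-and-forget benchmark mostly measures the client rather than the server. The `ToyServer` class, its latency value, and the analogy to MongoDB write concerns `w=1` (acknowledged) vs. the old default `w=0` (unacknowledged) are the assumptions here:

```python
import time

class ToyServer:
    """Toy stand-in for a database server; NOT a real MongoDB API."""

    def __init__(self, write_latency_s=0.002):
        self.write_latency_s = write_latency_s  # assumed per-write cost
        self.committed = 0
        self.queued = []

    def write_acked(self, doc):
        # Client blocks until the write is applied and acknowledged
        # (analogous in spirit to MongoDB write concern w=1).
        time.sleep(self.write_latency_s)
        self.committed += 1
        return "ok"

    def write_fire_and_forget(self, doc):
        # Returns immediately; durability happens later, if at all
        # (analogous in spirit to write concern w=0).
        self.queued.append(doc)

def bench(write_fn, n=50):
    """Time n writes through the given write function."""
    start = time.perf_counter()
    for i in range(n):
        write_fn({"i": i})
    return time.perf_counter() - start

server = ToyServer()
acked = bench(server.write_acked)
unacked = bench(server.write_fire_and_forget)
print(f"acked: {acked:.3f}s  fire-and-forget: {unacked:.6f}s")
```

The unacknowledged loop finishes almost instantly regardless of server latency, which is why quoting its throughput says little about the database itself.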

------
javiermaestro
As others are pointing out here, MongoDB's reply is definitely questionable,
if only for its tone.

In any case, I was writing this to note that OnGres has replied to Mongo's
reply, setting an example of how tech discussions should happen: without
derogatory or arrogant comments, open to valid criticism, and with
transparency (i.e. with something more than words and numbers that cannot be
reproduced).

Check it out: [https://ongres.com/blog/benchmarking-do-it-with-transparency...](https://ongres.com/blog/benchmarking-do-it-with-transparency/)

In there you'll see how Mongo consistently misinterpreted (or
misrepresented?) the results. They kept mixing up the benchmarks and
constantly talked about an experimental driver and missing connection
pooling. In fact, OnGres used the official Mongo Lua driver _and the official
Java driver_ for different benchmarks, and they ran some of the benchmarks
_with and without_ connection pooling and published both results.
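
Since connection pooling keeps coming up in the dispute, here is a minimal sketch (a toy stdlib-only simulation, not any real driver's API) of why it matters so much for benchmark numbers: without a pool, every request pays the connection handshake cost. The `CONNECT_COST_S` value and the pool size of 4 are illustrative assumptions:

```python
import time

CONNECT_COST_S = 0.002  # assumed cost of one TCP + auth handshake (illustrative)

class Conn:
    """Toy connection that pays a handshake cost on creation."""

    def __init__(self):
        time.sleep(CONNECT_COST_S)  # simulate connection setup

    def query(self):
        pass  # query cost omitted; we isolate connection overhead

def run_without_pool(n):
    """Open a fresh connection for every request."""
    start = time.perf_counter()
    for _ in range(n):
        Conn().query()
    return time.perf_counter() - start

def run_with_pool(n):
    """Create a small pool once, then reuse its connections round-robin."""
    pool = [Conn() for _ in range(4)]  # setup happens before the timer
    start = time.perf_counter()
    for i in range(n):
        pool[i % len(pool)].query()
    return time.perf_counter() - start

no_pool = run_without_pool(50)
with_pool = run_with_pool(50)
print(f"no pool: {no_pool:.3f}s  pooled: {with_pool:.6f}s")
```

This is why comparing a pooled configuration of one database against an unpooled configuration of another (or quoting only one of the two) can swing results dramatically, and why publishing both variants, as OnGres did, is the right call.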

It's really sad to see Mongo reply like this to a thorough benchmark. It
probably has its flaws, but instead of correcting them or publishing a better
benchmark like the one they did (to magically get 240x...), they chose to
mischaracterize the work of others, spread FUD, and accuse them of cheating
and being dishonest.

Hopefully they'll turn around and fix it. All it takes is publishing how they
got their amazing numbers so that others can comment on, reproduce, or
dispute the benchmark.

------
Too
Where can I find the code for the original vs. the corrected benchmark?

As they say themselves: _"They should then make those benchmarks reproducible
and publish their results in full."_

------
purplezooey
Nothing wrong with using "unsupported drivers" when your marketing team is
breathing down your neck to produce some good numbers....

------
kthejoker2
Ouch! But the TLDR is vendor-generated differentiating performance results
aren't worth the bits they're stored on.

------
truth_seeker
Very well put

