That's my attitude as well. RethinkDB, by comparison, took the much better approach of "reliable first, fast later". Unfortunately, it turned out that when you're a database, it doesn't matter how much data you lose, only how fast you are while losing it.
> MySQL is slow as a dog. MongoDB will run circles around MySQL because MongoDB is web scale.
> "MongoDB does have some impressive benchmarks, but they do some interesting things to get those numbers. For example, when you write to MongoDB, you don't actually write anything. You stage your data to be written at a later time. If there's a problem writing your data, you're fucked. Does that sound like a good design to you?"
> If that's what they need to do to get those kickass benchmarks, then it's a great design.
> "..... If you were stupid enough to totally ignore durability just to get benchmarks, I suggest you pipe your data to /dev/null. It will be very fast."
> If /dev/null is fast and web scale I will use it. Is it web scale?
> "You are kidding me, right? I was making a joke. I mean, if you're happy writing to a database that doesn't give you any idea that your data is actually written just because you want high performance numbers, why not write to /dev/null? It's fast as hell."
Listening to the community and using Postgres is my biggest regret. In hindsight, given our scale, any database would have worked. There is no built-in solution for high availability across multiple VPSes, and a single server alone isn't enough availability for me.
I love RethinkDB. I used it for 5 years at my previous company. It was document-oriented, relational, and stable.
I use Postgres JSON blobs right now, but it's odd having a different syntax for keys/values depending on whether they're in a named column or inside a JSON blob.
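For example, assuming a hypothetical `users` table with a plain `email` column next to a JSONB `profile` column, the same kind of lookup reads quite differently:

```sql
-- Hypothetical schema: an ordinary column next to a JSONB blob.
CREATE TABLE users (
    id      serial PRIMARY KEY,
    email   text  NOT NULL,   -- named column
    profile jsonb NOT NULL    -- JSON blob
);

-- Named column: bare identifier.
SELECT email FROM users WHERE id = 1;

-- Key inside the blob: operator syntax with a quoted key
-- (->> extracts the value as text, -> keeps it as jsonb).
SELECT profile->>'display_name' FROM users WHERE id = 1;

-- Updates diverge the same way.
UPDATE users SET email = 'a@example.com' WHERE id = 1;
UPDATE users SET profile = jsonb_set(profile, '{display_name}', '"Alice"'::jsonb) WHERE id = 1;
```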