> Correctness. We made very strict guarantees, and fulfilled them religiously.
It is true that the user base did not consider this as important as the RethinkDB team did, partly because one segment of the development community is heavily biased toward correctness: you can go years without ever hitting a problem with a system, but then a blog post comes out showing the system is incorrect under certain assumptions, and you run away from it. So part of the tech community became very sensitive to correctness. But 99% of the developers actually running things in production were not.
Correctness is indeed important, and most developers take less interest than they should in formal matters that have practical effects. However, I don't think this can simply be reduced to developer ignorance, as if they just want the DB to be fast in micro-benchmarks. Another factor is use cases: despite what the top 1% say, many folks have use cases where being absolutely correct all the time is not as important as raw speed. The reason they often don't care is that they often can avoid caring about correctness at all, as long as things mostly work, while they absolutely must care about raw speed in order to run things on a small amount of hardware.
So I suspect that the people who cared about the same things that the team did are also the people who 1) either aren't the ones making the final business decision or 2) aren't in a position where they could or would pay much for it.
But even in those cases, the operation can be retried, or the operation can simply take longer.
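The retry idea above can be sketched as a small helper. This is a minimal illustration, not any particular database's API; `TransientError` is a hypothetical stand-in for whatever timeout or write-conflict error the datastore raises:

```python
import random
import time

class TransientError(Exception):
    """Hypothetical stand-in for a timeout or write-conflict error."""

def retry(op, attempts=5, base_delay=0.05):
    """Retry a failing operation with exponential backoff plus jitter."""
    for attempt in range(attempts):
        try:
            return op()
        except TransientError:
            if attempt == attempts - 1:
                raise  # give up after the last attempt
            # Sleep 50ms, 100ms, 200ms, ... plus jitter, then try again.
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.01))
```

The jitter keeps many clients that failed at the same moment from retrying in lockstep.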
If you're the "social startup of the year", it doesn't matter if one post appears to some followers 100ms later than it should.
And for most of those developers, atomicity guarantees (the A in ACID) are very important. Without transactions, if that one operation fails, then before it can be retried as you say, the system also has to roll back the previous operations that succeeded in order to leave the data in a consistent state, and implementing that rollback yourself is extremely challenging (take a look at the Command pattern sometime for one possible strategy).
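To make the Command pattern reference concrete, here is a minimal sketch of the idea: each step knows how to apply and reverse itself, and on failure the completed steps are undone in reverse order. The class names are illustrative, and this toy version ignores the hard parts (crashes mid-undo, concurrent writers), which is exactly why doing it yourself is challenging:

```python
class Command:
    """One reversible step: do() applies it, undo() reverses it."""
    def do(self):
        raise NotImplementedError
    def undo(self):
        raise NotImplementedError

class SetField(Command):
    """Sets one field of a dict-like record, remembering the old value."""
    def __init__(self, record, field, value):
        self.record, self.field, self.value = record, field, value
        self.had_old = False
        self.old = None
    def do(self):
        self.had_old = self.field in self.record
        self.old = self.record.get(self.field)
        self.record[self.field] = self.value
    def undo(self):
        if self.had_old:
            self.record[self.field] = self.old
        else:
            self.record.pop(self.field, None)

def run_atomically(commands):
    """Apply commands in order; if one fails, undo the completed ones in reverse."""
    done = []
    try:
        for cmd in commands:
            cmd.do()
            done.append(cmd)
    except Exception:
        for cmd in reversed(done):
            cmd.undo()
        raise
```

If the third command raises, the first two are rolled back and the caller sees the original data, which is the "all or nothing" behavior a real transaction gives you for free.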
Or in other words, most developers need atomicity guarantees / transactions as well, and if the DB doesn't support transactions at the granularity your app needs, it's easier to change the DB than to implement them yourself. This is why RDBMSs are still more popular than NoSQL systems: they are more general-purpose and made the right compromises for the majority of use cases.
I don't disagree that consistency and correctness are important, but it's the same as with uptime: each order of magnitude of additional confidence costs more time and resources (even if your DB did everything right, your hard drive could have had a glitch, etc.).
There are ways of working around the lack of atomicity in the cases where you really need that guarantee. I really wouldn't try writing my own generic transaction manager, but you can rely on the store's atomic operations and limit each change to small steps; if you wrote data that is now irrelevant because the operation didn't finish, that's fine, you can garbage-collect it later.
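One common shape of this workaround: write the new data under a fresh key where nobody can see it, then make it visible with a single atomic pointer swap; a crash before the swap leaves only an invisible orphan for the GC. This is a toy sketch, assuming only that the store offers compare-and-set on one key (the `Store` class and key naming are hypothetical):

```python
import uuid

class Store:
    """Toy KV store whose only atomic primitive is compare-and-set on one key."""
    def __init__(self):
        self.kv = {}
    def put(self, key, value):
        self.kv[key] = value
    def compare_and_set(self, key, expected, new):
        if self.kv.get(key) == expected:
            self.kv[key] = new
            return True
        return False

def update(store, ptr_key, new_value):
    """Write a new version invisibly, then atomically flip the pointer to it."""
    blob_key = f"blob:{uuid.uuid4()}"
    store.put(blob_key, new_value)  # step 1: written but not yet visible
    old = store.kv.get(ptr_key)
    # Step 2: the swap either fully happens or doesn't; a crash before this
    # line leaves readers on the old version and our blob as harmless garbage.
    return store.compare_and_set(ptr_key, old, blob_key)

def gc(store):
    """Delete blobs no pointer references (leftovers of unfinished updates)."""
    live = {v for k, v in store.kv.items() if k.startswith("ptr:")}
    for key in [k for k in store.kv if k.startswith("blob:") and k not in live]:
        del store.kv[key]
```

Readers always follow the pointer, so they see either the complete old version or the complete new one, never a half-written mix.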
Now, if the operations you need are very complex, then it is probably better to keep using an RDBMS.
That's latency, not correctness. I think even "social startup of the year" cares that the post shows up at all, is attributed to the correct author, isn't corrupted somewhere and so on. Data correctness/consistency is about making sure that the data isn't in an inconsistent state.
That's "eventual consistency" in replication