(in reference to I’ll Give MongoDB Another Try. In Ten Years, http://diegobasch.com/ill-give-mongodb-another-try-in-ten-ye...)
I read the linked article and the HN discussion, but I wasn't quite able to tell how much of the debate was about the reasonableness of this default versus a "developers should read the docs, or accept catastrophic failure" attitude (which is not inherently wrong).
What immediately comes to mind is the Rails ActiveRecord update_attributes vulnerability: the default was to allow updating every attribute named in the request, on the assumption that no competent developer would trust unsanitized input from the browser. After a good Samaritan demonstrated a spectacular hack on GitHub, the Rails team immediately changed the default.
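To make the failure mode concrete, here's a rough sketch in Python. This is a hypothetical illustration, not the actual ActiveRecord API: the naive updater applies every key the client sends, so a crafted request can flip a field like `admin` that no form ever exposed, while the whitelisted version is roughly what the changed default forces you to write.

```python
# Hypothetical sketch of the mass-assignment hazard (not real ActiveRecord code).

class User:
    def __init__(self):
        self.name = "alice"
        self.admin = False  # should only be settable by staff tooling

    def update_attributes(self, params):
        # Old-style default: trust every key the client sends.
        for key, value in params.items():
            setattr(self, key, value)

    def update_permitted(self, params, permitted=("name",)):
        # Changed default, roughly: only whitelisted fields get through.
        for key in permitted:
            if key in params:
                setattr(self, key, params[key])

# A crafted request body from the browser:
evil_params = {"name": "mallory", "admin": True}

u = User()
u.update_attributes(evil_params)   # silently grants admin
print(u.admin)   # True

v = User()
v.update_permitted(evil_params)    # admin stays False
print(v.admin)   # False
```

Both defaults "work" for the honest case, which is exactly why the unsafe one survived so long.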
Is that the situation here with the 32-bit silent-fail default? That it's a sensible default, but could be changed if it's shown that competent devs will nonetheless screw it up?
It's kind of sad that we'd need an example to show that it really could happen in every single specific case. It should be common knowledge by now that competent devs screw things up all the time.
For example, I'm sure the MongoDB devs are extremely competent. But all the same, having a database management system default to letting writes fail silently is a pretty spectacular screw-up. I can't really blame other competent devs for taking it for granted that a DBMS wouldn't do something like that.
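Here's a toy sketch of why silent failure is so nasty, again hypothetical Python rather than the real MongoDB driver: with unacknowledged (fire-and-forget) writes, the client returns before the server reports anything, so a write rejected at the 32-bit build's storage ceiling just vanishes, while an acknowledged write surfaces the error immediately.

```python
# Hypothetical sketch of silent vs. acknowledged writes (not the real driver API).

class TinyServer:
    """Stands in for a server that rejects writes past a size limit,
    the way a 32-bit build hits its storage ceiling."""
    LIMIT = 2  # absurdly small, to trip the failure quickly

    def __init__(self):
        self.docs = []

    def write(self, doc):
        if len(self.docs) >= self.LIMIT:
            return {"ok": False, "err": "storage limit reached"}
        self.docs.append(doc)
        return {"ok": True}


class Client:
    def __init__(self, server, acknowledged):
        self.server = server
        self.acknowledged = acknowledged

    def insert(self, doc):
        result = self.server.write(doc)
        if self.acknowledged and not result["ok"]:
            raise RuntimeError(result["err"])
        # Unacknowledged mode: the error is dropped on the floor.


server = TinyServer()
fire_and_forget = Client(server, acknowledged=False)
for i in range(5):
    fire_and_forget.insert({"n": i})   # no exception, ever
print(len(server.docs))   # only 2 of 5 writes survived

safe = Client(TinyServer(), acknowledged=True)
safe.insert({"n": 0})
safe.insert({"n": 1})
try:
    safe.insert({"n": 2})
except RuntimeError as e:
    print("write failed:", e)   # the app finds out right away
```

The fire-and-forget client happily reports success while three of its five documents evaporate, which is the shape of the complaint in the article.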