EVERY database call should be wrapped in exception handling to make sure that any errors (e.g. connection errors) are handled appropriately. MongoDB is no different in this respect.
You can only handle the errors that you know how to handle; in this case, blindly retrying the operation may well create a bigger problem.
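The "only handle what you understand" point can be made concrete. Here's a minimal sketch of the pattern, using made-up exception names rather than any real driver's (pymongo's rough equivalents would be `AutoReconnect` and `DuplicateKeyError`): retry only the errors known to be safe to retry, and let everything else propagate.

```python
import time

# Hypothetical stand-ins for driver exceptions; the names are illustrative.
class TransientNetworkError(Exception):
    """Safe to retry: the write never reached the server."""

class DuplicateKeyError(Exception):
    """NOT safe to retry: the write already happened once."""

def insert_with_care(do_insert, retries=3, backoff=0.01):
    """Retry only errors we know are retryable; re-raise the rest.

    Blindly retrying everything is how one insert becomes two.
    """
    for attempt in range(retries):
        try:
            return do_insert()
        except TransientNetworkError:
            if attempt == retries - 1:
                raise
            # Exponential backoff before the next attempt.
            time.sleep(backoff * (2 ** attempt))
        # DuplicateKeyError (and anything else unrecognized) propagates:
        # retrying an error we don't understand may create a bigger problem.
```

The shape of the `except` clauses is the whole point: the catch list is an explicit statement of which failures you have actually reasoned about.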
It's like, literally, right there in the brief manual. Takes an hour to read and understand.
Perhaps a better option would be to have an 'unsafe_write' option. But then, of course, the benchmarks would look less impressive if they had to use a function with 'unsafe' in the name.
[Ed: The following is an unusual default requirement]
Me: "MongoDB, did you store what I asked?"
MongoDB: "Nope! Good thing you checked!"
Me: "MongoDB, please store this: ..."
MongoDB: "Okay, I've accepted your request. I'll get around to it eventually. Go about your business, there's no sense in you hanging around here waiting on me."
Or, if you really want to be sure it's done:
Me: "MongoDB, please store this. It's important, so let me know when it's done."
MongoDB: "Sure boss. This'll take me a little bit, but you said it's important, so I'm sure you don't mind waiting. I'll let you know when it's done."
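The two conversations above can be sketched as code. This toy in-memory store is purely illustrative, not MongoDB's actual protocol (in a real driver the switch is the write concern, e.g. unacknowledged vs. acknowledged writes); it just shows the difference between "I've accepted your request" and "I'll let you know when it's done."

```python
import queue
import threading
import time

class ToyStore:
    """A toy 'server' that applies writes from a queue on a background thread."""

    def __init__(self):
        self.data = {}
        self._inbox = queue.Queue()
        threading.Thread(target=self._worker, daemon=True).start()

    def _worker(self):
        while True:
            key, value, done = self._inbox.get()
            time.sleep(0.01)           # pretend the disk is slow
            self.data[key] = value
            if done is not None:
                done.set()             # "boss, it's done"

    def write_unacked(self, key, value):
        """'Okay, I've accepted your request.' Returns before the write lands."""
        self._inbox.put((key, value, None))

    def write_acked(self, key, value):
        """'Let me know when it's done.' Blocks until the write is applied."""
        done = threading.Event()
        self._inbox.put((key, value, done))
        done.wait()
```

After `write_acked` returns, the data is guaranteed visible; after `write_unacked` returns, it merely will be, eventually. That gap is exactly what the dialogue above is poking at.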
To me, the choice of performance over reliability is the hallmark of MongoDB, for better or worse.
That said, I think people really do overblow the issue and make a mountain out of that particular molehill, because all the tools are there to make it do what you want. Often it comes down to people expecting that MongoDB will magically conform to their assumptions at the expense of everyone else's. Explicit knowledge of the ground rules for any piece of technology in your stack should be the rule rather than the exception.