> In particular, a database will not have the following default behavior:
> - return from a write call silently, when the data wasn't written and will not be
There isn't some defined set of rules for how a database should operate. This attitude implies that an asynchronous database should never exist. If that were the case, how could I ever use a database for HTTP logging? I can't have every single HTTP request block on a database write; that would be absurd. HTTP logging is impossible with MySQL or PostgreSQL for exactly this reason.
PostgreSQL allows you to select this behavior on a per-transaction basis using the synchronous_commit variable. The default (which is what we are discussing here) is "don't return until the data hits disk", but you can set it as strict as "don't return until the data has not only hit a local disk but has been acknowledged by a standby slave" or as lax as "return immediately: sync my data when you get around to it, it isn't important". (So, don't claim something is impossible with someone's tool without first looking into it deeply.)
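A minimal sketch of what this looks like in SQL (the table and column names here are invented for illustration; SET LOCAL is the PostgreSQL mechanism that scopes a setting to the current transaction):

```sql
-- Per-session: every subsequent commit on this connection is asynchronous.
SET synchronous_commit TO off;

-- Or per-transaction: SET LOCAL reverts automatically at COMMIT/ROLLBACK,
-- so only this one transaction skips the wait for the WAL flush.
BEGIN;
SET LOCAL synchronous_commit TO off;
INSERT INTO http_log (method, path, status) VALUES ('GET', '/index.html', 200);
COMMIT;  -- returns before the commit record is guaranteed durable on disk
```

Note that an asynchronous commit is still atomic and consistent; the only thing relaxed is durability over the last few hundred milliseconds if the server crashes.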
You're right, that's my fault. You can set synchronous_commit to off, which makes this possible. However, it's a database-wide variable, so presumably you can't use that database server for anything else if you set synchronous_commit to off.
Again, no: it is set per each and every individual transaction. (edit:) To be very clear, this means that a single HTTP log table in a single database could have some requests committed with synchronous_commit on (as they are to a resource that you charge for, and for which you need accurate logs), while for others it is off (as you just want the fastest performance).
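Concretely, the mixed case described above might look like this (http_log and its columns are hypothetical):

```sql
-- Billable request: keep the default synchronous_commit = on, so COMMIT
-- does not return until the commit record has hit disk.
BEGIN;
INSERT INTO http_log (path, billable) VALUES ('/api/metered', true);
COMMIT;

-- Ordinary request to the very same table: relax the durability guarantee
-- for this one transaction only and take the fast path.
BEGIN;
SET LOCAL synchronous_commit TO off;
INSERT INTO http_log (path, billable) VALUES ('/index.html', false);
COMMIT;
```

Both transactions write to the same table on the same server; the choice of whether COMMIT blocks on the disk flush is made independently by each one.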