That assumes your data can be written in a single write(2) call.
I don't know what size of record is guaranteed to be writable that way, if any, but I wouldn't assume it's unbounded. Short writes aren't illegal. Perhaps that's what you were pointing at, but then you need to work around the limitations of the OS, and...
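The usual workaround is a short loop. A minimal sketch, assuming POSIX write(2) semantics (write_all is just my name for it); note that once you loop like this you've given up atomicity anyway, since the retried tail can interleave with other writers:

    #include <errno.h>
    #include <unistd.h>

    /* Keep calling write(2) until the whole buffer has landed
     * or a real error occurs. */
    static int write_all(int fd, const char *buf, size_t len)
    {
        while (len > 0) {
            ssize_t n = write(fd, buf, len);
            if (n < 0) {
                if (errno == EINTR)
                    continue;      /* interrupted before writing: retry */
                return -1;         /* real error: ENOSPC, EIO, ... */
            }
            buf += n;              /* short write: skip what landed */
            len -= (size_t)n;
        }
        return 0;
    }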
They aren't, but in practice I don't think they can happen in the scenarios I looked at (= BSD and Linux writes to "normal" file systems backed by mass storage) - unless you run out of disk space or your disk has some low-level error.
But, again, the problem is that the API does not provide this guarantee. I think it's generally true in practice, but… you can't rely on it.
(My context was a thread-capable lock-free logging backend; we just call writev() with the entire log message at once. Of course there's still locking to do this in the kernel, but that's always there — rather deal with one lock than two.)
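Roughly this shape; the record layout here is invented for illustration, but the point is that a single writev() call hands the kernel the whole record, so the kernel-side file lock is the only serialization point:

    #include <string.h>
    #include <sys/uio.h>

    /* Gather the pieces of one log record into a single syscall. */
    static void log_line(int fd, const char *level, const char *msg)
    {
        struct iovec iov[3];
        iov[0].iov_base = (void *)level;
        iov[0].iov_len  = strlen(level);
        iov[1].iov_base = (void *)msg;
        iov[1].iov_len  = strlen(msg);
        iov[2].iov_base = (void *)"\n";
        iov[2].iov_len  = 1;
        (void)writev(fd, iov, 3);  /* return value ignored for brevity */
    }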
Honestly, that's the worst of all worlds: behaviour that's reliable but not guaranteed.
People will inevitably start to rely on it, and then you can't change it later when it would make sense to, which means the relevant APIs can't be reworked to better treat disks as the asynchronous, network-like devices they are nowadays, even though the APIs are (per their documentation) flexible enough to allow that.
On the flip side, as the developer you can't rely on the behaviour without feeling dirty, and/or without separate code paths per OS, because none of this is specified, and it's a bad idea to rely on unless you're absolutely sure you're on Linux.
And then there's the fact that, while write(2) with O_APPEND is certainly atomic in most cases... I simply can't trust that it holds for arbitrarily large writes, because it certainly isn't specified; it might do short writes above 64k, or 64M, or something. So I'll need an error-handling path of some sort anyway.
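Something like this sketch, say; the torn-record policy (surfacing it as EIO) is invented for illustration. The point is that with O_APPEND you can't just retry the remainder, because the retry lands at a new end-of-file offset and can interleave with other writers:

    #include <errno.h>
    #include <unistd.h>

    static ssize_t append_record(int fd, const void *rec, size_t len)
    {
        ssize_t n = write(fd, rec, len);
        if (n >= 0 && (size_t)n < len) {
            errno = EIO;   /* torn record: part of it is already on disk */
            return -1;
        }
        return n;
    }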
I don't think that a write to an actual file can randomly return a partial write unless an actual error has occurred (out of disk space, a bad buffer address, an I/O error, or an interrupting signal). These are all things that are either under the application's control or would need to be handled anyway.
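For illustration, one way to sort those cases; the grouping is my reading of the above, not a normative list:

    #include <errno.h>

    static const char *classify_write_errno(int err)
    {
        switch (err) {
        case EFAULT: return "program bug (bad buffer address)";
        case ENOSPC: return "environment (disk full)";
        case EIO:    return "environment (low-level device error)";
        case EINTR:  return "transient (interrupted by a signal; retry)";
        default:     return "something else entirely";
        }
    }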
I thought it could happen if you received a signal, but apparently on Linux signals won't interrupt regular file I/O. Which brings back memories of processes getting stuck in NFS I/O.
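On platforms where signals can interrupt I/O (pipes, ttys, some network filesystems), installing handlers with SA_RESTART makes the kernel transparently restart the interrupted call instead of failing it with EINTR. A sketch, with SIGUSR1 as an arbitrary example signal:

    #include <signal.h>
    #include <string.h>

    static int install_restarting_handler(void (*handler)(int))
    {
        struct sigaction sa;
        memset(&sa, 0, sizeof sa);
        sa.sa_handler = handler;
        sa.sa_flags   = SA_RESTART;  /* restart interrupted syscalls */
        sigemptyset(&sa.sa_mask);
        return sigaction(SIGUSR1, &sa, NULL);
    }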
Why can't we just have transactions?