A few years ago, I had to fix this behavior on Firefox, because it was actually causing data loss. The only techniques I found that seem to work are journaling and rolling backups + transparent recovery (which can still lose data, just one order of magnitude less often).
If you're interested, I wrote about the latter in this blog post: https://dutherenverseauborddelatable.wordpress.com/2014/06/2...
Other developers are currently working on fixing other aspects of Session Restore, including both performance issues and plugging other possible sources of data loss, but I'm not following this closely.
I've heard about lying disks, but do you know of any that are still being sold today? My impression was this isn't really a thing anymore.
I don't think the OS lies if you're sufficiently careful: write the new file, fsync it, fsync the _directory_, then rename. [edit: on OS X, use F_FULLFSYNC. grr.] Now it's guaranteed that the filename has either the old or new contents. Another fsync on the directory to guarantee it's the new. I'd be interested in any evidence to the contrary.
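For concreteness, here's roughly that sequence as a C sketch (Linux-flavored, error handling mostly elided; the function name and the ".tmp" suffix are mine):

    #include <fcntl.h>
    #include <libgen.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* Write new contents under a temp name, then atomically swap it in.
       After the final fsync, "path" durably has the new contents. */
    int replace_atomically(const char *path, const char *data, size_t len) {
        char tmp[4096], dirbuf[4096];
        snprintf(tmp, sizeof tmp, "%s.tmp", path);

        int fd = open(tmp, O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0) return -1;
        write(fd, data, len);   /* see the short-write caveat downthread */
        fsync(fd);              /* on OS X: fcntl(fd, F_FULLFSYNC) instead */
        close(fd);

        strncpy(dirbuf, path, sizeof dirbuf - 1);
        dirbuf[sizeof dirbuf - 1] = '\0';
        int dfd = open(dirname(dirbuf), O_RDONLY | O_DIRECTORY);
        fsync(dfd);             /* temp file's directory entry is durable */
        rename(tmp, path);      /* atomic: readers see old or new, never neither */
        fsync(dfd);             /* the rename itself is durable */
        close(dfd);
        return 0;
    }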
Your technique potentially changes the owner, group, and protection bits of the file, which may come as an unwelcome surprise to anyone trying to use it as a drop-in replacement for directly over-writing the old file. (Worse, there seems to be no way to completely fix this in Unix-land as of the last time I looked back in the 90’s, when we had to pull a feature in FrameMaker over this).
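You can copy the mode and ownership onto the temp file before the rename, but it's only a partial fix; a hedged sketch (fchown generally fails unless you're root, and hard-link count, ACLs, etc. are still lost):

    #include <sys/stat.h>
    #include <unistd.h>

    /* Best effort: stamp the temp file (tmp_fd) with the mode and owner
       of the file it is about to replace. */
    int copy_file_identity(const char *old_path, int tmp_fd) {
        struct stat st;
        if (stat(old_path, &st) != 0) return -1;
        if (fchmod(tmp_fd, st.st_mode & 07777) != 0) return -1;
        (void)fchown(tmp_fd, st.st_uid, st.st_gid);  /* EPERM for non-root */
        return 0;
    }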
Also, be aware that (many? some?) disk controllers routinely lie about whether their internal caches are actually flushed to physical media, and there’s no way for fsync to be 100% sure that the data has really made it all the way to the spinning platter of rust. Try testing your code by putting it in a loop and repeatedly pulling the power plug out of the wall, and see if you always end up with an uncorrupted file. (Yes, people do this sort of testing on highly fault-tolerant systems.)
I had no idea FrameMaker ran on Unix. Apparently it started life on a Sun-2 workstation, and at one point according to Wikipedia, “FrameMaker ran on more than thirteen UNIX platforms, including NeXT Computer's NeXTSTEP and IBM's AIX operating systems”.
It was interesting to read how the port to Windows affected FrameMaker, and to see how long Solaris remained a supported platform.
I've had "fun" with this on some Windows CE systems, building my own little log-structured database to address the problem. All writes are appends, one disk block at a time, flushing afterwards. Worked well when using TFAT ("transactional" FAT) as the underlying file system, but when tried on a normal PC we discovered a new exciting failure mode: the file ended up with a random chunk of recently-deleted data at the end of it.
The best you can do is assume that anyone who runs your application in production has enterprise hardware, which (usually) doesn't lie about fsync (consumer HDDs are somewhat likely to).
I've also not yet run across an SSD that lies about fsync.
The disappointing thing is that most filesystems also blindly assume this. Ext4 is somewhat robust by design, but I've heard of at least one case where a (very) cheap consumer hard disk trashed any filesystem on crash, including ZFS, to the point of becoming unmountable.
My spelling can go bad when I type a bit fast.
1) If the filename provided is just a filename, then dirname (and the parent) may not work as expected. You ought to run it through realpath first.
2) What if the parent's parent hasn't been flushed? You need to recurse all the way up to "/" (see the sketch after this list).
3) Extending support to other POSIX systems will be tricky. The Austin Group (keepers of POSIX development) had trouble with this. Some systems consider using open() on a directory a complete sin (you must use opendir()); some allow it read-only but not for write; fsync() may require write permission rather than just read; and opendir() may not produce an FD which you can ever convince fsync() to operate on (e.g. because there's no write permission). Indeed, the initial proposal from the Austin Group was that all operations on directory entries should be atomic if the inode is also synced, which matches e.g. HP-UX, one of the systems that strictly disallows any attempt at calling fsync() on a directory. Many other systems (Linux especially) obviously don't fit that model, it has some unpleasant performance implications, and overall no agreement was ever reached on how to standardize it.
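A Linux-only sketch of points 1 and 2 (per point 3, opening a directory just to fsync it may not work anywhere else; error paths are simplified):

    #include <fcntl.h>
    #include <libgen.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    /* Canonicalize the path, then fsync every directory from the file's
       parent up to "/". */
    int fsync_parents(const char *filename) {
        char *abs = realpath(filename, NULL);  /* resolves ".", "..", symlinks */
        if (!abs) return -1;
        char *p = abs;
        for (;;) {
            p = dirname(p);                    /* step up one level */
            int dfd = open(p, O_RDONLY | O_DIRECTORY);
            if (dfd < 0) { free(abs); return -1; }
            int rc = fsync(dfd);
            close(dfd);
            if (rc != 0) { free(abs); return -1; }
            if (strcmp(p, "/") == 0) break;    /* reached the root */
        }
        free(abs);
        return 0;
    }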
As for testing, you might look at "Torturing Databases for Fun and Profit" (https://www.usenix.org/conference/osdi14/technical-sessions/...) and "All File Systems Are Not Created Equal: On the Complexity of Crafting Crash-Consistent Applications" (https://www.usenix.org/conference/osdi14/technical-sessions/...) for detailed approaches.
Good catch on Apple lying on fsync, btw. Not too many people know that one.
It's really unfortunate the POSIX link() system call takes a pair of paths. It should really take an inode. Then you could open a temp file and immediately unlink it, and keep writing to it until you are finished (well, you can do this today as well). Then, when you're done, you could give it a name. If you crashed, you'd leave no detritus behind.
You can do something like this on Linux for open file descriptors via linkat(..., AT_EMPTY_PATH). However, you need the CAP_DAC_READ_SEARCH capability, which limits its usefulness.
EDIT: Doesn't look like it, gives EXDEV ("invalid cross-device link")
1. Open the file with open(2), passing the O_TMPFILE flag. This creates a temporary, unnamed file. (As if you had opened it, then immediately unlinked it, but atomically.)
2. Write to the file.
3. Link the file into the filesystem with linkat(2), passing the AT_EMPTY_PATH flag (which tells linkat(2) to target a file descriptor that we pass it, not a path); see the sketch below.
(I'm not sure if step 3 helps in the case that you want to atomically replace the contents of an existing named file; I suspect that linkat(2) will error out because the file exists. Perhaps someday linkat(2) will grow a "atomic replace" flag.)
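A sketch of the three steps on Linux (assumes O_TMPFILE support, i.e. kernel 3.11+ and a filesystem that implements it; the /proc trick is the workaround documented in open(2) for the CAP_DAC_READ_SEARCH requirement mentioned upthread):

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int write_then_name(const char *dir, const char *final_path,
                        const void *data, size_t len) {
        /* 1. Unnamed file inside dir: invisible until linked. */
        int fd = open(dir, O_TMPFILE | O_WRONLY, 0644);
        if (fd < 0) return -1;

        /* 2. Write the contents and make them durable. */
        if (write(fd, data, len) != (ssize_t)len || fsync(fd) != 0) {
            close(fd);
            return -1;
        }

        /* 3. Give it a name via the /proc/self/fd alias, since a bare
           AT_EMPTY_PATH call would need CAP_DAC_READ_SEARCH. */
        char proc_path[64];
        snprintf(proc_path, sizeof proc_path, "/proc/self/fd/%d", fd);
        int rc = linkat(AT_FDCWD, proc_path, AT_FDCWD, final_path,
                        AT_SYMLINK_FOLLOW);
        close(fd);
        return rc;   /* EEXIST if final_path already exists */
    }

And indeed, linkat(2) refuses to clobber an existing name, so to replace a file you'd still link to a temporary name and rename(2) over the target.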
In reality, write() may write a partial amount, so you need to loop and call write() again until the entire file has been written or write() returns an error.
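Something like this (a sketch; the EINTR retry matters on systems where writes can be interrupted by signals):

    #include <errno.h>
    #include <unistd.h>

    /* Keep calling write() until every byte is out or a real error occurs. */
    ssize_t write_all(int fd, const void *buf, size_t count) {
        const char *p = buf;
        size_t left = count;
        while (left > 0) {
            ssize_t n = write(fd, p, left);
            if (n < 0) {
                if (errno == EINTR) continue;  /* interrupted: just retry */
                return -1;                     /* genuine error */
            }
            p += n;       /* short write: advance past what landed */
            left -= n;
        }
        return (ssize_t)count;
    }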
Let's start with the fact that modern disk controllers cache megabytes of data, and there exists no way to force them to write it out. There exist a few ways to suggest it. Some will listen, some will lie. SSDs are even worse in this respect: they cache hundreds of megabytes, and while they're doing garbage collection, even more data may be touched and modified. Depending on how good their algorithms are, you may lose the data you are writing if they're powered off suddenly, or entirely unrelated data, or a little of each. There is no standard way to tell an SSD to flush its data to actual flash either, of course. In fact, depending on the logic in the FTL in use, this may not even be possible to do quickly (say, you discover a few bad blocks during garbage collection and suddenly need to relocate a lot of data). This can change even across firmware updates.
And then we arrive at modern file systems. With the default mount options, generally only metadata is synced to disk, no matter how many times you call the sync system call. All the dirty pages containing actual file data will be written out whenever the system damn well feels like it. And of course, as mentioned above, they're only written to the volatile RAM inside the disk controller.
This is actually an insanely complicated problem. If you want to see how it is solved professionally, take a look at databases. The people who work on SQLite have spent countless hours making stuff like this work reliably, or break in ways that are predictable and recoverable. Generally this involves extreme amounts of journaling, sometimes multiple levels of journals. Often there is a whole lot of OS-specific quirk handling to paper over the differences between what the OSes promise their functions do and reality.
This is also why, if you ever find yourself needing to "write things reliably" or "store more than a few of something in a file, and update it", and you can spare the code size, you should just use SQLite for your storage. There's a whole lot of problems you're going to hit that they have already solved for you.
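In which case an update looks like this (a minimal sketch; the database and table names are made up):

    #include <sqlite3.h>

    int save_state(const char *value) {
        sqlite3 *db;
        if (sqlite3_open("app-state.db", &db) != SQLITE_OK) return -1;
        /* The statement runs in an implicit transaction, atomic and
           crash-safe: SQLite's journal does the fsync dance for us. */
        sqlite3_exec(db, "CREATE TABLE IF NOT EXISTS kv"
                         " (k TEXT PRIMARY KEY, v TEXT)", 0, 0, 0);
        sqlite3_stmt *st;
        sqlite3_prepare_v2(db,
            "INSERT OR REPLACE INTO kv VALUES ('session', ?1)", -1, &st, 0);
        sqlite3_bind_text(st, 1, value, -1, SQLITE_TRANSIENT);
        int rc = sqlite3_step(st);
        sqlite3_finalize(st);
        sqlite3_close(db);
        return rc == SQLITE_DONE ? 0 : -1;
    }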
"The old name must be the path of an existing
file or directory. The new name must not be
the name of an existing file or directory."
    // ReplaceFile's first argument is the file being replaced, the second
    // the replacement written earlier; nonzero return means success.
    if (ReplaceFile(fileName, tmpFileName, NULL,
                    REPLACEFILE_IGNORE_ACL_ERRORS, NULL, NULL) == 0) {
        // swap failed: consult GetLastError()
    }
Why Windows why?
And this might already be the reason: Win32 is geared towards a broad range of developers, whereas Unix comes more from a systems-thinking tradition.
(I see it also provides MOVEFILE_DELAY_UNTIL_REBOOT as a workaround for executable images that are in use.)
On Windows even if you use native API to open a file in a write-through mode with no caching (FILE_FLAG_WRITE_THROUGH | FILE_FLAG_NO_BUFFERING), you will still end up with zero-filled files if the machine is power cycled or blue-screened in the right moment. Disk controllers cache aggressively and there's not much an OS can do about it.
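For reference, that open mode looks like this (a sketch; FILE_FLAG_NO_BUFFERING additionally requires sector-aligned buffer addresses, offsets, and lengths, elided here, and the path is made up):

    #include <windows.h>

    /* Write-through, unbuffered writes. Even so, the drive's own cache
       can still lose the data on power loss, as described above. */
    HANDLE h = CreateFileW(L"C:\\data\\state.bin",
                           GENERIC_WRITE,
                           0,                /* no sharing */
                           NULL,
                           CREATE_ALWAYS,
                           FILE_FLAG_WRITE_THROUGH | FILE_FLAG_NO_BUFFERING,
                           NULL);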
You'll need to patch your kernel and install the mkfs.reiser* util. Take a look at this Gentoo-forums tweaks post for troubleshooting and performance tweaks. If you find reiser is lagging, your filesystem was likely built incorrectly and you'll need to run a simple
fsck.reiser4 -y --fix --build-fs --build-sb
I'm thinking there may be some lower-level magic API, e.g. with CoW pages, that lets the caller appear to copy the original without actually doing it? "Shadow copy" or similar?
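Linux has something close for files on reflink-capable filesystems (Btrfs, XFS): the FICLONE ioctl makes the "copy" share the original's extents copy-on-write. A sketch (fails with EOPNOTSUPP where reflinks aren't supported; names are illustrative):

    #include <fcntl.h>
    #include <linux/fs.h>    /* FICLONE */
    #include <sys/ioctl.h>
    #include <unistd.h>

    /* Clone src into dst without copying data: the files share extents
       until one of them is modified (copy-on-write). */
    int reflink_copy(const char *src, const char *dst) {
        int in = open(src, O_RDONLY);
        int out = open(dst, O_WRONLY | O_CREAT | O_TRUNC, 0644);
        int rc = (in >= 0 && out >= 0) ? ioctl(out, FICLONE, in) : -1;
        if (in >= 0) close(in);
        if (out >= 0) close(out);
        return rc;
    }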
    if rollback log exists              # leftover from a crash: roll back
        replay old contents from log into db file, sync, then delete log
    for each write(),
        write old contents of region to log (along w/ location)
    sync log                            # log must be durable before db changes
    for each write(),
        write new bits
    sync db file
    delete rollback log                 # commit point
I might add something like this in the future.