Apple's custom NVMes are amazingly fast – if you don't care about data integrity (twitter.com/marcan42)
656 points by omnibrain on Feb 17, 2022 | 359 comments



This F_FULLFSYNC behaviour has been the same on OSX for as long as I can remember. It is a hint to ensure that the data in the write buffer has been flushed to stable storage - this is historically a limitation of fsync that is being accounted for - are you 1000% sure fsync does what you expect on other OSes?

POSIX spec says no: https://pubs.opengroup.org/onlinepubs/9699919799/functions/f...

Maybe it's an unrealistic expectation for all OSes to behave like Linux.

Maybe Linux fsync is more like F_BARRIERFSYNC than F_FULLFSYNC. You could retry your benchmarks with those.
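
(For reference, these are one call each. A minimal sketch of how the three variants could be dropped into a benchmark on macOS - the wrapper function and its name are made up, but fsync() and the F_BARRIERFSYNC/F_FULLFSYNC fcntl commands are the real interfaces:)

    #include <fcntl.h>
    #include <unistd.h>

    /* Issue one durability request of the chosen flavour on fd. */
    static void sync_variant(int fd, int variant) {
        switch (variant) {
        case 0: fsync(fd); break;                 /* OS buffers -> drive          */
        case 1: fcntl(fd, F_BARRIERFSYNC); break; /* ordering barrier, no flush   */
        case 2: fcntl(fd, F_FULLFSYNC); break;    /* also flush the drive's cache */
        }
    }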

Also note that 3rd party drives are known to ignore F_FULLFSYNC, which is why there is an approved list of drives for mac pros. This could explain why you are seeing different figures if you are supplying F_FULLFSYNC in your benchmarks using those 3rd party drives.


Yes. fsync() on Linux pushes down to stable storage, not just drive cache.

OpenBSD, though, apparently behaves like macOS. I'm not sure I like that.


Last time I checked (which is a while at this point, pre SSD) nearly all consumer drives and even most enterprise drives would lie in response to commands to flush the drive cache. Working on a storage appliance at the time, the specifics of a major drive manufacturer's secret SCSI vendor page knock to actually flush their cache was one of the things on their deepest NDAs. Apparently ignoring cache flushing was so ubiquitous that any drive manufacturer looking to have correct semantics would take a beating in benchmarks and lose marketshare. : \

So, as of about 2014, any difference here not being backed by per manufacturer secret knocks or NDAed, one-off drive firmware was just a magic show, with perhaps Linux at least being able to say "hey, at least the kernel tried and it's not our fault". The cynic in me thinks that the BSDs continuing to define fsync() as only hitting the drive cache is to keep a semantically clean pathway for "actually flush" for storage appliance vendors to stick on the side of their kernels that they can't upstream because of the NDAs. A sort of dotted line around missing functionality that is obvious 'if you know to look for it'.

It wouldn't surprise me at all if Apple's NVME controller is the only drive you can easily put your hands on that actually does the correct things on flush, since they're pretty much the only ones without the perverse market pressure to intentionally not implement it correctly.

Since this is getting updoots: Sort of in defense of the drive manufacturers (or at least stating one of the defenses I heard), they try to spec out the capacitance on the drive so that when the controller gets a power loss NMI, they generally have enough time to flush then. That always seemed like a stretch for spinning rust (the drive motor itself was quite a chonker in the watt/ms range being talked about particularly considering seeks are in the 100ms range to start with, but also they have pretty big electrolytic caps on spinning rust so maybe they can go longer?), but this might be less of a white lie for SSDs. If they can stay up for 200ms after power loss, I can maybe see them being able to flush cache. Gods help those HMB drives though, I don't know how you'd guarantee access to the host memory used for cache on power loss without a full system approach to what power loss looks like.


Flush on other vendors' drives at least does something, as they block for some time too, just not as long as Apple's.

Apple's implementation is weird because the actual amount of data written doesn't seem to affect the flush time.


On at least one drive I saw, the flush command was instead interpreted as a barrier to commands being committed to the log in controller DRAM, which could cut into parallelization, and therefore throughput, looking like a latency spike but not a flush out of the cache.


My test is single threaded, and thus has no parallelism to begin with.


The drive controller is internally parallel. The write is just a job queue submission, so the next write hits while it's still processing previous requests.


People have tested this stuff on storage devices with torture tests. Can you point at an example of a modern (directly attached) NVMe drive from a reputable vendor that cheats at this?

FWIW, macOS also has F_BARRIERFSYNC, which is still much slower than full syncs on the competition.


In my benchmarking of some consumer HDDs, back in 2013 or so, the flush time was always what you'd expect based on the drive's RPM. I saw no evidence the drives were lying to me. These were all 2.5" drives.

My understanding was that the capacitor thing on HDDs is to ensure the drive completely writes out a whole sector, so it passes the checksum. I only heard the flush-cache thing with respect to enterprise SSDs. But I haven't been staying on top of things.


You can't base spinning rust flushes on RPM; the seek arm is what dominates.


Not at all, and it certainly wasn't relevant to the benchmarking I was doing, as I was focusing on writes to, and fsync performance for, the same track.


You definitely weren't testing the cache in a meaningful way if you were hovering over the same track.

WRT the capacitor thing being about a single sector, think about the time spans. You should be able to even cut the drive motor power and still stay up for 100s of ms. In that time you can seek to a config track and blit out the whole cache. If you're already writing a sector you'll be done in microseconds. The whole track spins around every ~8ms at 7200 RPM.


Tangential thinking out loud: this makes me think of a sort of interleaving or striping mechanism that tries to leave a small proportion of every track empty, such that ideal power loss flush scenarios would involve simply waiting for the disk to spin around to the empty/reserved area in the current track. On drives that aren't completely full, it's probably statistically reasonable that for any given track position there's going to be a track with some reserved space very close by, such that the amount of movement/power needed to seek there is smaller.

Of course, this approach describes a completely inverted complexity scenario in terms of sector remapping management, with the size of the associated tables probably being orders of magnitude larger. :<

Now I wonder how much power is needed for flash writes. The chances are an optimal-and-viable strategy would probably involve a bit of multichannel flash on the controller (and some FEC because why not).

Oooh... I just realized things'll get interesting if the non-volatile RAM thing moves beyond the vaporware stage before HDDs become irrelevant. Last-millimeter write caching will basically cease to be a concern.

But thinking about the problem slightly more laterally, I don't understand why nobody's made inline SATA adapters with RAM, batteries and some flash in them. If they intercept all writes they can remember what blocks made it to the disk, then flush anything in the flash at next power on. Surely this could be made both solidly/efficiently and cheaply...?


> But thinking about the problem slightly more laterally, I don't understand why nobody's made inline SATA adapters with RAM, batteries and some flash in them.

Hardware RAID controllers with battery backup units were really popular starting in the mid 90s until maybe the mid 2010s? Software caught up on a lot of features, and the batteries failed often and required a lot more maintenance. Supercaps were meant to replace the batteries, but I think SSDs and software negated a ton of the value add. You can still buy them but they're pretty rare to see in the wild.


I've heard of those, they sound mildly interesting to play with, if just to go "huh" at and move on. I get the impression the main reason they developed a... strained reputation was their strong tendency to want to do RAID things (involving custom metadata and other proprietaryness) even for single disks, making data recovery scenarios that much more complicated and stressful if it hadn't been turned off. That's my naive projection though, I (obviously) have no real-world experience with these cards, I just knew to steer far away from them (heh)

An inline widget (SATA on both sides) that just implements a write cache and state machine ("push this data to these blocks on next power on") seems so much simpler. You could even have one for each disk and connect to a straightforward RAID/SAS controller. (Hmm, and if you externalize the battery component, you could have one battery connect to several units...)

You are indeed right about the battery/capacitor situation ("you have to open the case?!"), I wouldn't be surprised if the battery level reporting in those RAID cards was far from ideal too lol

With all this being said, a UPS is by far the simplest solution, naturally, but also the most transiently expensive.


So it's basically implementation-specific, and macOS has its own way of handling it.

That doesn't make it worse - in fact it permits the flexibility you are now struggling with.

edit: downvotes for truth? nice. go read the POSIX spec then come back and remove your downvotes...


Probably more like downvoted because missing the point.

Sure, fsync allows that behavior, but it's also so widely misunderstood that a lot of programs which should do a "full" flush only do an fsync, including benchmarks - in which case the results are not comparable and using them is cheating.

But that's not the point!

The point is that with the M1 Macs' SSDs, the performance when fully flushing to disk is abysmally bad.

And as such, any application which cares about data integrity and does a full flush can expect noticeable performance degradation.

The fact that Apple neither forces frequent full syncs nor at least a full sync when an application is closed doesn't make it better.

Though it is also not surprising, as it's not the first time Apple has set things up under the assumption that their hardware can't fail.

And maybe for desktop-focused, high-end designs where most devices sold are battery powered, that is a reasonable design choice.


"And maybe for a desktop focused high end designs where most devices sold are battery powered that is a reasonable design choice"

Does the battery last forever? Do they never shut down from overheating, shut down from being too cold, freeze up, they are water and coffee proof?

Talk to anyone that repairs mac about how high-end and reliable their designs trully are - they are better than bottomn of the barrel craptops, sure, but not particularly amazing and have some astounding design flaws.


As the article points out, a lot of those cases can be detected with advance notice (dying battery and overheating - probably even being too cold). In those cases the OS makes sure all the caches are flushed.

Spilled drinks are a viable cause for concern, but if they do enough damage to cause an unexpected shutdown, you've probably got bigger issues than unflushed cache.


"In those cases the OS makes sure all the caches are flushed."

I have never heard of this functionality being included in any major OS, could you please provide some reference to this being documented?


I think that's misleading.

On many laptops, even with water damage, you can recover your local data fully; not so for Macs (for more reasons than just data loss/corruption due to non-flushing).

Especially if you are already in a bad situation you don't want your OS to make it worse.


How cold is too cold for a computer?


The CPU can't possibly get too cold. See for example overclocking performed by cooling the CPU with liquid nitrogen. Condensation is a factor, as is loss of ductility of plastic at low temperature, making it brittle. Expansion and contraction of materials is another, especially when different materials expand to different degrees.


"The CPU can't possibly get too cold" - Untrue. There are plenty of chips with what overclockers like to call "cold bugs".

Sequential logic (flipflops) has a setup time requirement. This means the combinatorial computation between any connected pair of flops (output of flop A to input of flop B) has to do its job fast enough that the input of B stops toggling some amount of time before the next clock edge arrives at the flipflop. Violate that timing, and B will sometimes sample the wrong value, leading to an error.

Setup time is what most people are thinking about when they use LN2 or other exotic forms of cooling. By cooling things down, you usually improve the performance of combinatorial logic, which provides more setup time margin, allowing you to increase clock speed until setup time margin is small again.

But flops also have hold time requirements - their inputs have to remain stable for some amount of time after the clock edge, not just before. It's here where we can run into problems if the circuit is too cold. Imagine a path with relatively little combinatorial logic, and not much wire delay. If you make that path too fast, it might start violating hold time on the destination flop. Boom, shit doesn't work.
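
In symbols (a rough summary of the two constraints above, with t_clk2q the clock-to-Q delay, t_comb the combinatorial delay, and t_setup/t_hold the flop requirements):

    t_clk2q + t_comb(max) + t_setup <= T_clock   (setup: limits how fast you can clock)
    t_clk2q + t_comb(min) >= t_hold              (hold: independent of the clock period)

Cooling shifts the delays; shrink t_comb(min) too far and the hold inequality breaks no matter how slowly you clock.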


Many phones, laptops, cameras and similar only guarantee functionality above 0 degrees....

Luckily they often work at lower temperatures too, but not seldom that's by hoping they don't get cooled down that much themselves (because they are e.g. in your pocket).


0 degree what?

Fahrenheit, Celsius, horizontal angle?


The biggest thing is the battery. The CPU doesn't get too cold, but batteries degrade or stop performing when they get too cold.

Edit: For actual temperatures, in my experience it's when the device is in use for a sustained amount of time in under-10°F weather.


Incidentally, CPUs do get too cold - not at any reasonable temperature, but sufficiently low temperatures do change the characteristics of semiconductors. Not something to worry about if you're not using liquid nitrogen (or colder).


I've had my phone shut off on me from being out in the Chicago cold for a couple hours. Battery over 50% when I brought it back inside and warmed it up.


If I go outside in winter, the battery dies around zero degrees. Keep in mind that your laptop could be in a bag in sleep mode or idle.


I mean the Apple hardware in question is usually a laptop, which has its own very well instrumented battery backup. In most cases the hardware knows well in advance if the battery is gonna run dry.

And yes, the hardware can fail. But the kind of failure that would cause the device to completely lose power is extremely rare. The OS has many chances to take the hint and flush the cache before powering down.

Note: this is pure conjecture.


> The point is that with the M1 Macs' SSDs, the performance when fully flushing to disk is abysmally bad.

How sure are we the drives that flush caches more quickly are actually flushing the caches?


Good point.

A simple test would be to see the degree of data loss you can incur with a hard power off.
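
(A sketch of what the writer half of such a test could look like - the file name and record format here are made up; the point is that a sequence number is only printed after the flush call returns, so after pulling the plug the file must contain at least the last printed value:)

    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        int fd = open("scratch.bin", O_WRONLY | O_CREAT, 0644);
        if (fd < 0)
            return 1;
        for (uint64_t seq = 1; ; seq++) {
            pwrite(fd, &seq, sizeof seq, 0);
    #ifdef F_FULLFSYNC
            fcntl(fd, F_FULLFSYNC);  /* macOS: ask for a real flush to media */
    #else
            fsync(fd);               /* Linux: fsync already flushes the drive cache */
    #endif
            printf("%llu\n", (unsigned long long)seq);  /* "acknowledged" */
            fflush(stdout);
        }
    }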

I think the author did that test for the M1 Mac, but I don't know if they did it with the other laptops.

But then, the M1 Mac is slower when flushing than most SSDs out there and even some HDDs. I think if most SSDs didn't flush data at all we would know about it, and I should have run into problems with the few dozen hard resets I've had over the last few years. (And sure, there are probably some SSDs which cheap out on cache flushing in a dangerous way, but most shouldn't, as far as I can tell.)


We’d see data loss only if the power loss or hard reset happened before the data is actually flushed. After the data is accepted into the buffer there would be a narrow time window when it could occur. Also, a hard reset on the computer side may not be reflected on the storage embedded electronics.


What is worse is their NVMe controller having 50x worse flush performance than the competition.


The competition's controller may be ignoring F_FULLFSYNC. This is a known issue, which is why Apple has approved vendors for Mac Pro drives.


It isn't, because otherwise it would be showing the ~same performance with and without sync commands, as I showed in the thread. There is a significant performance loss for every drive, but Apple's is way worse.

There is no real excuse for a single sector write to take ~20ms to flush to NAND, all the while the NAND controller is generating some 10MB/s of DRAM traffic. This is a dumb firmware design issue.


It may be interpreting it differently. You aren't comparing apples to apples, quite literally.

Why not compare macOS and Linux on approved x86 Mac hardware, i.e. a Fusion Drive or whatever?

Also, as suggested - try F_BARRIERFSYNC, which flushes anything before the barrier (used for WAL, IIRC).


This affects T2 Macs too, which use the same NVMe controller design as M1 Macs.

We've looked at NVMe command traces from running macOS under a transparent hypervisor. We've issued NVMe commands outside of Linux from a bare-metal environment. The 20ms flush penalty is there for Apple's NVMe implementation. It's not some OS thing. And other drives don't have it. And I checked and Apple's NVMe controller is doing 10MB/s of DRAM memory traffic when issued flushes, for some reason (yes, we can get those stats). And we know macOS does not properly flush with just fsync() because it actively loses data on hard shutdowns. We've been fighting this issue for a while now, it's just that it only just hit us yesterday/today that there is no magic in macOS - it just doesn't flush, and doesn't guarantee data persistence, on fsync().


I've just been scanning through Linux kernel code (inc. ext4). Are you sure that it's not issuing a PREFLUSH? What are your barrier options on the mount? I think you will find these are going to be more like F_BARRIERFSYNC.

I couldn't find much info about it - but the official docs are here: https://kernel.org/doc/html/v5.17-rc3/block/writeback_cache_...


Those are Linux concepts. What you're looking for is the actual NVMe commands. There's two things: FLUSH (which flushes the whole cache), and a WRITE with the FUA bit set (which basically turns that write into write-through, but does not guarantee anything about other commands). The latter isn't very useful for most cases, since you usually want at least barrier semantics if not a full flush for previously completed writes. And that leaves you with FLUSH. Which is the one that takes 20ms on these drives.
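
(For anyone who wants to time the FLUSH command itself without a filesystem in the way: on Linux the NVMe passthrough ioctl can issue it directly. Rough sketch, assuming root and a namespace block device like /dev/nvme0n1; opcode 0x00 is the NVM-command-set Flush:)

    #include <fcntl.h>
    #include <linux/nvme_ioctl.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    /* Send a raw NVMe Flush to the given namespace; blocks until it completes. */
    int nvme_flush(const char *dev, unsigned nsid) {
        int fd = open(dev, O_RDWR);
        if (fd < 0)
            return -1;
        struct nvme_passthru_cmd cmd;
        memset(&cmd, 0, sizeof cmd);
        cmd.opcode = 0x00;  /* Flush */
        cmd.nsid   = nsid;
        int ret = ioctl(fd, NVME_IOCTL_IO_CMD, &cmd);
        close(fd);
        return ret;
    }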


> Those are Linux concepts. What you're looking for is the actual NVMe commands.

I'm not sure what commands are being sent to the NVMe drive. But what you are describing as a flush would be F_BARRIERFSYNC - NOT the F_FULLFSYNC which you've been benchmarking.


Sigh, no. A barrier is not a full flush. A barrier does not guarantee data persistence, it guarantees write ordering. A barrier will not make sure the data hits disk and is not lost on power failure. It just makes sure that subsequent data won't show up and not the prior data, on power failure. NVMe doesn't even have a concept of barriers in this sense. An OS-level barrier can be faster than a full sync only because it doesn't need to wait for the FLUSH to actually complete, it can just maintain a concept of ordering within the OS and make sure it is maintained with interleaved FLUSH calls.

I don't know why you keep pressing on this issue. macOS has the same performance with F_FULLFSYNC as Linux does with fsync(). Why would they be different things? We're getting the same numbers. This entire thing started because fsync() on these Macs on Linux was dog slow and we couldn't figure out why macOS was fast. Then we found F_FULLFSYNC which has the same semantics as fsync() on Linux. And now both OSes perform equally slowly on this hardware. They're obviously doing the same thing. And the same thing on Linux on non-Apple SSDs is faster. I'm sure I could install macOS on this x86 iMac again and show you how F_FULLFSYNC on macOS also gives better performance on this WD drive than on the M1, but honestly, I don't have the time for that; the issue has been thoroughly proven already.

Actually, I have a better one that won't waste as much of my time.

Plugs a shitty USB3 flash drive into the M1.

224 IOPS with F_FULLFSYNC. On a shitty flash drive. 58 IOPS with F_FULLFSYNC. On internal NVMe.

Both FAT32.

Are you convinced there's a problem yet?

(I'm pretty sure the USB flash drive has no write cache, so of course it is equally fast/slow with just fsync(), but my point still stands - committing writes to persistent storage is slower on this NVMe controller than on a random USB drive)
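
(For reference, the kind of loop that produces numbers like these is tiny - a rough sketch rather than the actual test tool; the file name, block size and iteration count are made up:)

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <time.h>
    #include <unistd.h>

    int main(void) {
        char buf[4096];
        memset(buf, 0xAA, sizeof buf);
        int fd = open("bench.dat", O_WRONLY | O_CREAT, 0644);
        if (fd < 0)
            return 1;
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        enum { N = 1000 };
        for (int i = 0; i < N; i++) {
            pwrite(fd, buf, sizeof buf, 0);  /* one small write...         */
            fcntl(fd, F_FULLFSYNC);          /* ...then force it to media  */
        }                                    /* (plain fsync(fd) on Linux) */
        clock_gettime(CLOCK_MONOTONIC, &t1);
        double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        printf("%.0f synced writes/s\n", N / secs);
        return 0;
    }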


OK - thanks for humouring me marcan. Sorry to waste your time. Clearly something is not right here.


Thank you, you've made this very clear for me.


It seems to be pretty apples to apples, they're running the same benchmark using equivalent data storage APIs on both systems. What are you thinking might be different? The Linux+WD drive isn't making the data durable? Or that OSX does something stupid which could be the cause of the slowdown rather than the drive? Both seem implausible.


I downvoted you because you complained about your downvotes.


How did that work out for you?


Something that is not quite clear to me yet (I did read the discussion below, thank you Hector for indulging us, very informative): isn't the end behaviour up to the drive controller? That is, how can we be sure that Linux actually does push to the storage or is it possible that the controller cheats? For example, you mention the USB drive test on a Mac — how can we know that the USB stick controller actually does the full flush?

Regardless, I certainly agree that the performance hit seems excessive. Hopefully it's just an algorithmic issue and Apple can fix it with a software update.


*BSDs mostly followed this semantic, as I recall. Probably inherited from a common ancestor.


MacOS was really just FreeBSD with a fancier UI. Not sure what the behavior is now, but I'm pretty sure FreeBSD behaved almost exactly the same, as a power loss rendered my system unbootable over 10 years ago.


>MacOS was really just FreeBSD with a fancier UI.

I'm sorry but this is incorrect. NeXTSTEP was the primary foundation for Mac OS X, and the XNU kernel was derived from Mach and IIRC 4.4BSD. FreeBSD source was certainly an important sync/jumping-off point for a number of Unix components of the kernel and CLI userland, and there was some code sharing going on for a while (still?), but large components of the kernel and core frameworks were unique (for better or worse).


> and IIRC 4.4BSD

4.3; only Rhapsody incorporated elements from 4.4, but that was the tail end of NeXTSTEP, essentially the initial preview of macOS (it was released as OS X Server 1.0, then forked to Darwin, from which the actual OS X 10.0 would be built; two major pieces missing from Rhapsody were Classic and Carbon, so it really was NeXTSTEP with an OS 9 skin).


Thanks for the correction - man, has it been a long, long time. I had the Public Beta and then got on the OS X train pretty fast on a good old B&W G3. Even with the slowness, the multitasking still let you get around it, and having all of Unix right there with a big rush of initial porting was really interesting - good times. I remember calling Apple for help getting Apache compiled and got forwarded right out of the regular call system to some dev whose name I sadly forget, and we worked through it.

Everything is a million times more refined and overall better now but I do have a bit of nostalgia for the community and really getting your hands dirty back then while still having a fairly decent fallback. I haven't actually needed to mess with kernel stuff since 10.5 or so but thinking back makes me wonder about paths not taken.


> so it [Rhapsody] really was NeXTSTEP with an OS 9 skin

Sorry to be pedantic, but Rhapsody's user interface is modeled after the Mac OS 8 "Platinum" design language. Though 9 also was modeled on Platinum, Rhapsody's interface appears nearly identical to Mac OS 8's except for the Workspace Manager which doesn't exist in 8.


Rhapsody was a fairly ugly and distorted copy of the Platinum theme if we’re honest.


Ok, but Rhapsody looks almost exactly like Mac OS 8.


There was an article talking about the histories of Mac OS X and the BSDs.

It has been over a decade, so I'm really not sure how much is left ATM.


Linux does that now. It didn't in the past (until something like 2008), and I recall many people arguing about performance and similar at that time :D


I like that. Fsync() was designed with the block cache in mind. IMO how the underlying hardware handles durability is its own business. I think a hack to issue a “full fsync” when battery is below some threshold is a good compromise.


It's important to read the entire document including the notes, which informs the reader of a pretty clear intent (emphasis mine):

> The fsync() function is intended to force a physical write of data from the buffer cache, and to assure that after a system crash or other failure that all data up to the time of the fsync() call is recorded on the disk.

This seems consistent with user expectations - fsync() completion should mean data is fully recorded and therefore power-cycle- or crash-safe.


You are quoting the non-normative informative part. If _POSIX_SYNCHRONIZED_IO is not defined, your fsync can literally be this and still be compliant:

    int fsync(int fd) { return 0; }
Quick Google search (maybe someone with an MBP can confirm) says that macOS doesn't purport to implement SIO.
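
(This is the knob a portable program can actually check - at compile time via unistd.h, or at runtime with sysconf; the Apple Libc header linked elsewhere in the thread reportedly defines it as -1:)

    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
    #if defined(_POSIX_SYNCHRONIZED_IO) && _POSIX_SYNCHRONIZED_IO > 0
        puts("SIO advertised at compile time");
    #endif
        /* -1 means synchronized I/O is not supported on this system */
        printf("sysconf(_SC_SYNCHRONIZED_IO) = %ld\n", sysconf(_SC_SYNCHRONIZED_IO));
        return 0;
    }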


That particular implementation seems inconsistent with the following requirement:

> The fsync() function shall request that all data for the open file descriptor named by fildes is to be transferred to the storage device associated with the file described by fildes.

If I wrote that requirement in a classroom programming assignment and you presented me with that code, you'd get a failing grade. Similarly, if I were a product manager and put that in the spec and you submitted the above code, it wouldn't be merged.

> You are quoting the non-normative informative part

Indeed, I am! It is important. Context matters, both in law and in programming. As a legal analogy, if you study Supreme Court rulings, you will find that in addition to examining the text of legislation or regulatory rules, the court frequently looks to legislative history, including Congressional findings and statements by regulators and legislators in order to figure out how to best interpret the law - especially when the text is ambiguous.


> If I wrote that requirement in a classroom programming assignment and you presented me with that code, you'd get a failing grade.

It's a good thing operating systems aren't made up entirely of classroom programming assignments.

Picture an OS which always runs on fully-synchronized storage (perhaps a custom Linux or BSD or QNX kernel). If there's no write cache and all writes are synchronous, then fsync() doesn't need to do anything at all; therefore `int fsync(int fd) { return 0; }` is valid because fsync()'s method is implementation-specific.

This allows you to have no software or hardware write cache and not implement fsync() and still be POSIX-compliant.

> Context matters, both in law and in programming. As a legal analogy, if you study Supreme Court rulings, you will find that in addition to examining the text of legislation or regulatory rules, the court frequently looks to legislative history, including Congressional findings and statements by regulators and legislators in order to figure out how to best interpret the law - especially when the text is ambiguous.

The POSIX specification is not a court of law, and the context is pretty clear: fsync() should do whatever it needs to do to request that pending writes are written to the storage device. In some valid cases, that could be nothing.


> Picture an OS which always runs on fully-synchronized storage (perhaps a custom Linux or BSD or QNX kernel). If there's no write cache and all writes are synchronous, then fsync() doesn't need to do anything at all; therefore `int fsync(int fd) { return 0; }` is valid because fsync()'s method is implementation-specific.

Sure, I'll give you that, in a corner case where all writes are synchronized to storage before completing. However, most modern computers cache writes for performance, and the speed/security tradeoff is the context of this discussion. We wouldn't be having this debate in the first place if computers and storage devices didn't cache writes.

> The POSIX specification is not a court of law

Indeed, it isn't; nor is legislative text (the closest analogy in law). Hence the need for interpretation.

> fsync() should do whatever it needs to do to request that pending writes are written to the storage device

We are in violent agreement about this :-)


The wording here is quite subtle. Without SIO, fsync is merely a request, returning an error if one occurred. As the informative section points out, this means that the request may be ignored, which is not an error.

> If _POSIX_SYNCHRONIZED_IO is not defined, the wording relies heavily on the conformance document to tell the user what can be expected from the system. It is explicitly intended that a null implementation is permitted.

Compare this to e.g. the wording for write(2):

> The write() function shall attempt to write nbyte bytes from the buffer pointed to by buf to the file associated with the open file descriptor, fildes. [yadadada]

This actually specifies that an action needs to be performed. fsync(2) sans SIO is merely a request form that the OS can respond to or not. And because macOS does not define SIO, you have to go out and find out what that particular implementation is actually doing and the answer is: essentially nothing for fsync.


It makes sense that a null implementation is permitted to cover cases such as the one illustrated above where all writes are always synchronized. However, it violates the spirit of the law (so to speak) as discussed in the normative section to have a null implementation where writes are not always synchronized (i.e., cached). As another commenter noted, the wording was not intended to give the implementor a get-out-of-jail-free card ("it was merely a request; I didn't actually have to even try to fulfill it").


There’s also the very likely possibility that the storage is lying to the OS, that the data that was accepted and which is in the buffer has been written somewhere durable while it’s actually waiting for an erase to finish or a head to get wherever it needs to be. There are disk controllers with batteries precisely for those situations.

And, if cheating will give better numbers on benchmarks, I’m willing to bet money most manufacturers will cheat.


Since crashes and power failures are out of scope for POSIX, even F_FULLFSYNC's behavior description would of necessity be informative rather than normative.

But, the reality is that all operating systems provide some way to make writes to persistent storage complete, and to wait for them. All of them. It doesn't matter what POSIX says, or that it leaves crashes and power failure out of scope.

POSIX's model is not a get-out-of-jail-free card for actual operating systems.


At least it is also implemented by Windows, which makes apt-get in a Hyper-V VM slower.

It's also unbearably slow for loopback-device-backed Docker containers in the VM due to the double layer of cache. I just add eatmydata happily, because you can't save a half-finished Docker image anyway.


> Also note that 3rd party drives are known to ignore F_FULLFSYNC

SQLite, MySQL et al. [1] fall back to `fsync()` if F_FULLFSYNC fails, in order to cover this case of 3rd party or external drives.

[1] https://twitter.com/TigerBeetleDB/status/1422855270716293123
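
(A minimal sketch of that fallback pattern, assuming an already-open file descriptor - this mirrors the approach described above rather than any project's exact code:)

    #include <fcntl.h>
    #include <unistd.h>

    /* Ask for a full flush; if F_FULLFSYNC is rejected (e.g. some 3rd party
     * or external drives), fall back to a plain fsync(). */
    static int full_sync(int fd) {
    #ifdef F_FULLFSYNC
        if (fcntl(fd, F_FULLFSYNC) == 0)
            return 0;
    #endif
        return fsync(fd);
    }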


OSX defines _POSIX_SYNCHRONIZED_IO though, doesn't it? I don't have one at hand but IIRC it did.

At least the OSX man page admits to the detail.

The rationale in the POSIX document for a null implementation seems reasonable (or at least plausible), but it does not really seem to apply to general OSX systems at all. So even if they didn't define _POSIX_SYNCHRONIZED_IO it would be against the spirit of the specification.

I'm actually curious why they made fsync do anything at all though.


> OSX defines _POSIX_SYNCHRONIZED_IO though, doesn't it?

Nope: https://opensource.apple.com/source/Libc/Libc-1439.40.11/inc...


> #define _POSIX_SYNCHRONIZED_IO (-1) /* [SIO] */


ok - its "defined" as not supported. Im not sure i understand your point.


Oh sorry you're right... Too much C, not enough POSIX.

Okay, so OSX is right by the letter of the standard. Not by the spirit though, when you look at the rationale for allowing the exception.


No problem - sorry if I came off harsh, I thought you were being pedantic :D

TBH, I'm not so sure it's that different. Scanning through the Linux docs, it seems that this behaviour can be configured as part of the mount options (e.g. barrier on ext4). At least it's explicit on macOS (with compliant hardware).


> No problem - sorry if I came off harsh, I thought you were being pedantic :D

No, I just did a ctrl+F ctrl+C ctrl+V without thinking enough. No need to apologize though; my reply was actually flippant and I should have been more respectful of your (correct) point.

> TBH, I'm not so sure it's that different. Scanning through the Linux docs, it seems that this behaviour can be configured as part of the mount options (e.g. barrier on ext4). At least it's explicit on macOS (with compliant hardware).

I disagree (unless Linux short-cuts this by default). The reason is in the POSIX rationale:

*RATIONALE*

> The fsync() function is intended to force a physical write of data from the buffer cache, and to assure that after a system crash or other failure that all data up to the time of the fsync() call is recorded on the disk. Since the concepts of "buffer cache", "system crash", "physical write", and "non-volatile storage" are not defined here, the wording has to be more abstract.

The first paragraph gives the intention of the interface. It's clearly to persist data.

> If _POSIX_SYNCHRONIZED_IO is not defined, the wording relies heavily on the conformance document to tell the user what can be expected from the system. It is explicitly intended that a null implementation is permitted. This could be valid in the case where the system cannot assure non-volatile storage under any circumstances or when the system is highly fault-tolerant and the functionality is not required. In the middle ground between these extremes, fsync() might or might not actually cause data to be written where it is safe from a power failure. The conformance document should identify at least that one configuration exists (and how to obtain that configuration) where this can be assured for at least some files that the user can select to use for critical data. It is not intended that an exhaustive list is required, but rather sufficient information is provided so that if critical data needs to be saved, the user can determine how the system is to be configured to allow the data to be written to non-volatile storage.

Now this gives a rationale for why you might not include it. And lists three examples of where it could be valid to water down the intended semantics. The system can not support it; the functionality is not required because data durability is guaranteed in other ways; the functionality is traded off in cases where major risks have been reduced.

OSX on a consumer Mac doesn't fit those cases.

Linux with the option is violating POSIX even by the letter, because presumably mounting the drive with -o nobarrier does not cause all your applications to be recompiled with the property undefined. But it's not that unreasonable an option; it's clearly not feasible to have two sets of all your software compiled and select one or the other depending on whether your UPS is operational or not.


Oh yeah, I definitely agree with you on this. If anything you should be able to pass in flags to reduce resiliency - not have the default be that way. Maybe that's how the actual SIO spec reads (I haven't read it).


OP appears to be giving useful information about OSX, regardless of what other OSes do.


The implication (in fact no, it's explicitly stated) is that this fsync() behaviour on OSX will be a surprise for developers working on cross-platform code or coming from other OSes, and will catch them out.

However, if it's in fact quite common for other OSes to exhibit the same or similar behaviour (BSD for example does this too, which makes sense as OSX has a lot of BSD lineage), that argument of least surprise falls a bit flat.

That's not to say this is good behaviour, I think Linux does this right, the real issue is the appalling performance for flushing writes.


The POSIX specification requires data to be on stable storage following fsync. Anything less is broken behavior.

An fsync that does not require the completion of an IO barrier before returning is inherently broken. This would be REQ_PREFLUSH inside Linux.


> If _POSIX_SYNCHRONIZED_IO is not defined, the wording relies heavily on the conformance document to tell the user what can be expected from the system.

> fsync() might or might not actually cause data to be written where it is safe from a power failure.


How are you reading POSIX as "saying no"??

From that page:

  The fsync() function shall request that all data for
  the open file descriptor named by fildes is to be
  transferred to the storage device associated with the
  file described by fildes. The nature of the transfer
  is implementation-defined. The fsync() function shall
  not return until the system has completed that action
  or until an error is detected.
then:

  The fsync() function is intended to force a physical
  write of data from the buffer cache, and to assure
  that after a system crash or other failure that all
  data up to the time of the fsync() call is recorded
  on the disk. Since the concepts of "buffer cache",
  "system crash", "physical write", and "non-volatile
  storage" are not defined here, the wording has to be
  more abstract.
The only reason to doubt the clarity of the above is that POSIX does not consider crashes and power failures to be in scope. It says so right in the quoted text.

Crashes and power failures are just not part of the POSIX worldview, so in POSIX there can be no need for sync(2) or fsync(2), or fcntl(2) w/ F_FULLFSYNC! Why even bother having those system calls? Why even bother having the spec refer to the concept at all?

Well, the reality is that some allowance must be made for crashes and power failures, and that includes some mechanism for flushing caches all the way to persistent storage. POSIX is a standard that some real-life operating systems aim to meet, but those operating systems have to deal with crashes and power failures because those things happen in real life, and because their users want the operating systems to handle those events as gracefully as possible. Some data loss is always inescapable, but data corruption would be very bad, which is why filesystems and applications try to do things like write-ahead logging and so on.

That is why sync(2), fsync(2), fdatasync(2), and F_FULLFSYNC exist. It's why they [well, some of them] existed in Unix, it's why they still exist in Unix derivatives, it's why they exist in Unix-alike systems, it's why they exist in Windows and other not-remotely-POSIX operating systems, and it's why they exist in POSIX.

If they must exist in POSIX, then we should read the quoted and linked page, and it is pretty clear: "transferred to the storage device" and "intended to force a physical write" can only mean... what that says.

It would be fairly outrageous for an operating system to say that since crashes and power failures are outside the scope of POSIX, the operating system will not provide any way to save data persistently other than to shut down!


> transferred to the storage device

MacOS does that.

> the fsync() function is intended to force a physical write of data from the buffer cache

If they define _POSIX_SYNCHRONIZED_IO, which they don't.

fsync wasn't defined as requiring a flush until version 5 of the spec. It was implemented in BSDs loooong before then. Apple introduced F_FULLFSYNC prior to fsync having that new definition.

I don't disagree with you, but it is what it is. History is a thing. Legacy support is a thing. Apple likely didn't want to change people's expectations of the behaviour on OSX - they have their own implementation after all (which is well documented, lots of portable software and libs actively use it, and it's built into the higher-level APIs that Mac devs consume).


> > transferred to the storage device

> MacOS does that.

Depends on the definition of "storage device", I guess. If it's physical media, then OS X doesn't. If it's the controller, then OS X does. But since the intent is to have the data reach persistent storage, it has to be the physical media.

My guess is that since people know all of this, they'll just keep working around it as they already do. Newbies to OS X development will get bitten unless they know what to look for.


Do you mean on Linux that calling fsync might not actually flush to the drive?


How many hundreds of millions of people have used OSX over the years and never encountered any problems whatsoever?

This article is a non-issue, people just like to upvote Apple bashing.


If you need to run software/servers with any kind of data consistency/reliability on OS X this is definitely something you should be aware of and will be a footgun if you're used to Linux.

Macs in datacentres are becoming increasingly common for CI, MDM, etc.


I’d rather solve for redundant power than worry about this. It’s really only critical if you’re running a database. Who runs a database on macOS?


Every single iOS app using Core Data (which runs SQLite under the hood)


People doing CI? Or MDM?


Not sure how either of those would be critically impacted by a 1-2 second data loss in a power failure.


I believe it’s at least 5s. Marcan didn’t specify how long it was, but gave an example of at least 5s. That could cause a device to think it’s allowed to do something via MDM but not actually have a record in the database allowing it to do so.


You don't just lose seconds of data. When you drop seconds of writes, that can effectively corrupt any data or metadata that was touched during that time period. Which means you can lose data that had been safe.


> CI

Perhaps.

> MDM

I’m sure my IT department’s spyware will recover just fine.


The OS itself contains hundreds of databases.


And with those hundreds of databases, we’re only learning about this behavior now instead of any of the previous decades where abundant errors would have caused a conversation.

Doesn’t seem like an issue worthy of hundreds of HN comments and upvotes, just people raising a stink over a non-issue.


If, out of a hundred times someone loses power, their system is corrupted once, it's a choice whether you accept that or not. I do not. I want quality, and quality is a system that does not get corrupted.


I've used Macs for years and never been aware of it.

Note: the tweeter couldn't provoke actual problems under any sort of normal usage. To make data loss show up he had to use weird USB hacks. If you know you have a battery and can forcibly shut down the machine 'cleanly' it's not really clear what the need for a hard fsync is.

"Macs in datacentres are becoming increasingly common for CI, MDM, etc."

CI machines are the definition of disposable data. Nobody is running Oracle on macOS and Apple don't care about that market.


These days, best practice for data consistency / reliability in that environment, IIUC, is to write to multiple redundant shards and checksum, not to assume any particular field-pattern spat at the hard drive will make for a reliability guarantee.


"never encountered any problems whatsoever?"

And how do you know they didn't, did you do a poll?

How many people had random files disappear or get corrupted or settings get reset, and probably thought they must have done something wrong?


Fantastic thread.

The history is also interesting. It's not that "macOS cheats", but that it sincerely inherited the status quo of many years, then tried to go further by adding F_FULLFSYNC. However, Linux has since gotten better, leaving macOS stuck in the past and everybody surprised. It's a big problem.

Here's Dominic Giampaolo from Apple discussing this back in 2005, before Linux fixed fsync() to flush past the disk cache: https://lists.apple.com/archives/darwin-dev/2005/Feb/msg0008...

And here's TigerBeetle's Twitter thread with more of the history and how projects like LevelDB, SQLite and various language std libs were also affected: https://twitter.com/TigerBeetleDB/status/1422854779009654785


Docs [1] suggests that even F_FULLFSYNC might not be enough. Quote:

> Note that F_FULLFSYNC represents a best-effort guarantee that iOS writes data to the disk, but data can still be lost in the case of sudden power loss.

[1] https://developer.apple.com/documentation/xcode/reducing-dis...


When building databases, we care about durability, so database authors are usually well aware that you _have_ to use `F_FULLFSYNC` for safety. The fact that `F_FULLFSYNC` isn't safe means that you cannot write a transactional database on a Mac; it is also a surprise to me.

Note that the man page for `F_FULLFSYNC` itself doesn't mention that it is not reliable: https://developer.apple.com/library/archive/documentation/Sy...

Having a separate syscall is annoying, but workable. Having a scenario where we call flush and cannot ensure that the data is actually durable is BAD. Note that handling flush failures is expected, but all databases require that flushing successfully will make the data durable.

Without that, there is no way to ensure durable writes and you might get data loss or data corruption.


I checked a few and they seem to do F_FULLFSYNC (sic), except MySQL, they deleted it to make it run faster:

https://github.com/mysql/mysql-server/commit/3cb16e9c3879d17...


Oh MySQL, I’m a world turned upside down you are my North Star.


Wow. Could this explain why we have a lot of problems with MySQL running on Mac OS with the databases randomly getting totally corrupted and basically needing to be restored from backup each time?

At first glance it seems to make sense - if someone shuts down while there is still uncommitted data in the drive cache because MySQL has only done an fsync(), it could leave the files on disk in a weird state when the power is cut. Am I missing something?


"the possible durability gain is slim to none. This also makes OS X behave similar to other platforms."

You didn't report the full reasoning.


Maybe that's right, maybe it's not - impossible to tell from the snippet. I'm deeply suspicious when they start citing performance numbers on what is a ultimately an ordering change though.


> Without that, there is no way to ensure durable writes and you might get data loss or data corruption.

The best the OS can do is to trust the device that the data was, indeed, written to durable storage. Unfortunately, many devices lie about that. If you do an `F_FULLFSYNC`, you can say you did your best, but the data is out of your hands now.


You can always reset the device and read back the data to confirm.

Sure, that will be slow, but there is a way!


Not sure. They can still cheat. You'd need to power them down, then back up again. If it's a soft reset, they can just read it from RAM.


True.


> When building databases, we care about durability, so database authors are usually well aware that you _have_ to use `F_FULLFSYNC` for safety. The fact that `F_FULLFSYNC` isn't safe means that you cannot write a transactional database on a Mac; it is also a surprise to me.

> Without that, there is no way to ensure durable writes and you might get data loss or data corruption.

No, not without that. Even with that, you can't have durable writes; not on a Mac, or Linux, or anywhere else, if you are worried about fsync()/fcntl+F_FULLFSYNC, because they do nothing to protect against hardware failure: The only thing that does is shipping the data someplace else (and depending on the criticality of the data, possibly quite far).

As soon as you have two database servers, you're in a much better shape, and many databases like to try and use fsync() as a barrier to that replication, but this is a waste of time because your chances of a single hardware failure remain the same -- the only thing that really matters is that 1/2 is smaller than 1/1.

So okay, maybe you're not trying to protect against all hardware failure, or even just the flash failure (it will fail when it fails! better to have two nvme boards than one!) but maybe just some failure -- like a power failure, but guess what: We just need to put a big beefy capacitor on the board, or a battery someplace to protect against that. We don't need to write the flash blocks and read them back before returning from fsync() to get reliability because that's not the failure you're trying to protect against.

What does fsync() actually protect against? Well, sometimes that battery fails, or that capacitor blows: The hardware needed to write data to a spinning platter of metal and rust used to have a lot more failure points than today's solid state, and in those days, maybe it made some sense to add a system call instead of adding more hardware, but modern systems aren't like that: It is almost always cheaper in the long run to just buy two than to try and squeeze a little more edge out of one, but maybe, if there's a case where fsync() helps today, it's a situation where that isn't true -- but even that is a long way from you need fsync() to have durable writes and avoid data loss or corruption.


> No, not without that. Even with that, you can't have durable writes; not on a Mac, or Linux, or anywhere else, if you are worried about fsync()/fcntl+F_FULLFSYNC, because they do nothing to protect against hardware failure: The only thing that does is shipping the data someplace else (and depending on the criticality of the data, possibly quite far).

"The sun might explode so nothing guarantees integrity", come on, get real. This is pointless nitpicking.

Of course fsync ensures durable writes on systems like Linux with drives that honor FUA. The reliability of the device and stack in question is implied in this and anybody who talks about data integrity understands that. This is how you can calculate and manage error rates of your system.


> "The sun might explode so nothing guarantees integrity", come on, get real. This is pointless nitpicking.

I think most people understand that there is a huge difference between the sun exploding and a single hardware failure.

If you really don't understand that, I have no idea what to say.

> Of course fsync ensures durable writes on systems like Linux with drives that honor FUA

No it does not. The drive can still fail after you write() and nobody will care how often you called fsync(). The only thing that can help is writing it more than once.


What is the difference in the context of your comment? The likelihood of the risk, and nothing else. So what is the exact magic amount of risk that makes one thing durable and another not, and who made you the arbiter of this?

> No it does not. The drive can still fail after you write() and nobody will care how often you called fsync(). The only thing that can help is writing it more than once.

It does to anybody who actually understands these definitions. It is durable according to the design (i.e., UBER rates) of your system. That's what it means, that's always what it meant. If you really don't understand that, I have no idea what to say.

> The only thing that can help is writing it more than once.

This just shows a fundamental misunderstanding. You achieve a desired uncorrected error rate by looking at the risks and designing parts and redundancy and error correction appropriately. The reliability of one drive/system might be greater than two less reliable ones, so "writing it more than once" is not only not the only thing that can help, it doesn't necessarily achieve the required durability.


> What is the difference in the context of your comment? The likelihood of the risk, and nothing else. So what is the exact magic amount of risk that makes one thing durable and another not, and who made you the arbiter of this?

What's the difference between the sun exploding and a single machine failing?

I have no idea how to answer that. Maybe it's because many people have seen a single machine fail, but nobody has seen the sun explode? I guess I've never had a need to give it more thought than that.

> It does to anybody who actually understands these definitions. It is durable according to the design (i.e., UBER rates) of your system.

You are wrong about that: Nobody cares if something is "designed to be durable according to the definition in the design". That's just more weasel words. They care what are the risks, how you actually protect against them, and what it costs to do. That's it.


I was asking about the context of the conversation. And I answered it for you. It's the likelihood of the risk. Two computers in two different locations can and do fail.

> You are wrong about that: Nobody cares if something is "designed to be durable according to the definition in the design".

No I'm not, that's what the word means and that's how it's used. That's how it's defined in operating systems, that's how it's defined by disk manufacturers, that's how it's used by people who write databases.

> That's just more weasel words.

No it's not, it's the only sane definition, because all hardware and software is different, and so is everybody's appetite for risk and cost. And you don't know what any of those things are in any given situation.

> They care what are the risks, how you actually protect against them, and what it costs to do. That's it.

You seem to be arguing against yourself here. Lots of people (e.g., personal users) store a lot of their data on a single device for significant periods of time, because that's reasonably durable for their use.


There is a point at which a redundant array of inexpensive and unreliable replicas is more durable than a single drive. Even N in-memory databases spread across the world is more durable than a single one with fsync.

Unfortunately few databases besides maybe blockchains have been engineered with that in mind.


> There is a point at which a redundant array of inexpensive and unreliable replicas is more durable than a single drive. Even N in-memory databases spread across the world is more durable than a single one with fsync.

Unless the failure modes you are concerned about include being cut off from the internet, or your system isn't network connected in the first place, in which case maybe not, eh?

Anyway surely the point is clear. "Durable" doesn't mean "durable according to the whims of some anonymous denizen of the other side of the internet who is imagining a scenario which is completely irrelevant to what I'm actually doing with my data".

It means that the data is flushed to what your system considers to be durable storage.

Also hardware failures and software bugs can exist. You can talk about durable storage without being some kind of cosmic-ray-denier or anti-backup cultist.


Say you have mirrored devices. Or RAID-5, whatever. Say the devices don't lie about flushing caches. And you fsync(), and then power fails, and on the way back up you find data loss or worse, data corruption. The devices didn't fail. The OS did.

One need not even assume no device failure, since that's the point of RAID: to make up for some not-insignificant device failure rate. We need only assume that not too many devices fail at the same time. A pretty reasonable assumption. One relied upon all over the world, across many data centers.


This is not about hardware failure but about OS crashes and bugs, which are much more frequent.


If the OS has bugs that will make it crash, what makes you think those bugs aren’t going to affect fsync()?


"but guess what: We just need to put a big beefy capacitor on the board, or a battery someplace to protect against that. We don't need to write the flash blocks and read them back before returning from fsync() to get reliability"

I believe drives that do have capacitors are aware of it and return immediately from fsync() without writing to flash. That's the point of this API.

Since neither Macs nor any other laptops have SSDs with capacitors, this point is kind of moot.


Erm. They absolutely do. Most laptops have batteries as well— including all of the ones that Apple makes.


I have at various points replaced or upgraded 15 NVMe SSDs in desktops and laptops, and I have not seen a single one - could you please let me know where I can find a non-server SSD with capacitors large enough for it to flush data in case of a sudden power loss?

Laptop batteries are irrelevant - battery failure, freezing, or cutting power to the circuit board by holding the power button are the failure modes you have to protect against.


"Silly wabbit, database trix are for servers!"


> The fact that `F_FULLSYNC` isn't safe means that you cannot write a transactional database on Mac, it is also a surprise to me.

Yeah you can definitely write a transactional database without having to rely on knowing you've flushed data to disk. Not only can you, but you surely have to otherwise you risk data corruption e.g. when there's a power-cut mid-write.


The whole point of transactional flush to disk is that you get confirmation that data is now safe from power loss. You don't get any guarantee because you _called_ flush. The guarantee comes from flush returning.
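
To make that concrete, here is a minimal sketch in C (assuming POSIX write()/fsync() and, on macOS, the F_FULLFSYNC fcntl; the function name is made up): success is only reported once the flush call itself has returned successfully.

  #include <fcntl.h>
  #include <unistd.h>

  /* Sketch: report success only once the flush has returned successfully.
     Partial-write handling and EINTR retries are omitted for brevity. */
  int write_durably(int fd, const void *buf, size_t len) {
      if (write(fd, buf, len) != (ssize_t)len)
          return -1;

      if (fsync(fd) == -1)          /* on Linux this also flushes the drive cache */
          return -1;

  #ifdef F_FULLFSYNC
      /* macOS: fsync() only pushes data to the drive, so additionally ask
         the drive to commit its cache to stable storage. */
      if (fcntl(fd, F_FULLFSYNC) == -1)
          return -1;
  #endif

      return 0;
  }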


Lol, but hey, macs are not servers, so "hahah who cares!".


In Apple's defense, the sloppy fsync behaviour is clearly documented: https://developer.apple.com/library/archive/documentation/Sy...


That's not defence. It fails the principle of least-surprise. If everyone's experience is that fsync is flushing then why would somebody think to look up the docs for Mac in case they do it differently?


>That's not defence. It fails the principle of least-surprise.

Only if the standard where anything else is a "surprise" is 2022 Linux.

Many (all?) other unices and macOS itself since forever work like that. Including Linux itself in the past [1]

[1] https://lwn.net/Articles/270891/


Drive caches also used to not exist in the past. At that point, behavior was the same as it is on Linux today. It then regressed when drive caches became a thing.

Maybe it not being added to OSes when drive caches came into the picture was arguably a bug, and Linux has been the first OS to fix it properly. macOS instead introduced new, non-buggy behavior, and left the buggy one behind :-)


> Drive caches also used to not exist in the past. At that point, behavior was the same as it is on Linux today. It then regressed when drive caches became a thing.

You mean in the 1980s? There was never a time when Linux was in use and this wasn't a concern for sysadmins and DBAs. This concern has been raised for years - back in the PowerPC era the numbers were lower, but you had the same arguments about whether Apple had made the right trade-offs, or Linux or Solaris, etc.

Given the extreme rarity of filesystem corruption being a problem these days, one might conclude that the engineers who made the assumption that batteries covered laptop users and anyone who cares about this will be using clustering / UPS were correct.


The minute storage manufacturers introduced drive caches is the minute this bug became the responsibility of storage manufacturers. IMO it’s not the kernel’s responsibility.


Now Apple is the primary storage manufacturer for Mac.


Any references for Unices traditionally skipping FUA or synchronize cache to the storage stack? Sounds surprising to me. Here's Solaris for example: https://utcc.utoronto.ca/~cks/space/blog/solaris/ZFSDiskWrit...

Also, re: Linux, here's e.g. the PostgreSQL 9.0 documentation saying ext4/zfs + SCSI used the "SYNCHRONIZE CACHE" command with fsync even back then, with an equivalent SATA command being used by the storage stack with SATA-6 and later drives: https://www.postgresql.org/docs/9.0/wal-reliability.html


On ZFS on Solaris / Illumos you get a choice of whether fsync() acts as a write barrier or actually waits for writes to complete.


They do it according to the POSIX spec. Linux is the oddball here.


The POSIX spec is deliberately ambiguous about this.


So? Did Linux do it like that before?

https://lwn.net/Articles/270891/


It does it like that now, which is what I'd expect if I'm writing software.


I'd argue more people develop on non-Linux systems such as Windows and macOS than on Linux itself.


F_FULLFSYNC is nonstandard. As far as I know there is no standard-compliant way to get data onto stable storage on macOS. That's a bit of a problem. It makes a lot more sense to make the standard-compliant way actually sane.


F_FULLFSYNC is the equivalent to this:

https://man7.org/linux/man-pages/man2/sync.2.html


I have said a few times already - F_BARRIERFSYNC. This is likely equivalent to what Linux is doing.

edit: sorry - not 'standards compliant' (whatever that is - does Linux declare support for SIO?), but probably what you are looking for.


It isn't. I already replied to you above. A barrier does not guarantee data durability and we already know Linux fsync() == macOS F_FULLFSYNC because they have the same (lack of) performance on the same hardware.


Thanks marcan. Apologies for wasting your time.


I just looked into this, since what you say and what Apple’s documentation says are two different things.

Here is Apple’s documentation:

https://devstreaming-cdn.apple.com/videos/wwdc/2019/419ef9ip...

F_BARRIERFSYNC: fsync() with a barrier

F_FULLFSYNC: Drive flush its cache to disk

This sounds like the Linux fsync() and Linux syncfs() respectively. What you say is that F_FULLFSYNC is the same as Linux fsync() and your performance numbers back that up. Unfortunately, you would only see a difference between Linux fsync() and Linux syncfs() if you have files being asynchronously written at the same time as the files that are subject to fsync()/syncfs(). fsync() would only touch the chosen files while syncfs() would touch both. If you did not have heavy background file writes and F_FULLFSYNC really is equivalent to syncfs(), you would not be able to tell the difference in your tests.

That said, let’s look at how this actually works on Mac OS. Unfortunately, the apfs driver does not appear to be open source, but the HFS+ driver is. Here are the relevant pieces of code in HFS+:

https://github.com/apple-oss-distributions/hfs/blob/hfs-556....

https://github.com/apple-oss-distributions/hfs/blob/5e3008b6...

First, let me start by saying this merits a facepalm. The fsync() operation is operating at the level of the mount point, not the individual file. F_FULLFSYNC and F_BARRIERFSYNC are different, but they both might as well be variants of the Linux syncfs().

For good measure, let us look at how this is done on the MacOS ZFS driver:

https://github.com/openzfsonosx/zfs/blob/master/module/zfs/z...

The file is properly synced independently of the mountpoint, such that other files being modified on the filesystem are not immediately required to be written out to disk. That said, both F_FULLFSYNC and F_BARRIERFSYNC on MacOS are mapped by the ZFS driver to the same function that implements fsync() on Linux:

https://github.com/openzfs/zfs/blob/master/module/os/linux/z...

For good measure, let us look at how syncfs() is implemented by ZFS on Linux:

https://github.com/openzfs/zfs/blob/master/module/os/linux/z...

It operates on the superblock, which is what MacOS’ HFS+ driver does.

From this, I can conclude:

Linux syncfs() == macOS F_FULLFSYNC on HFS+

Linux fsync() == macOS fsync()/F_FULLFSYNC/F_BARRIERFSYNC on ZFS

Also, MacOS F_BARRIERFSYNC is a weakened Linux syncfs() and Apple's documentation is very misleading (although maybe not technically wrong). POSIX does allow fsync to be implemented via syncfs (sync in POSIX, but I am saying syncfs from Linux to be less confusing). However, not issuing and waiting for the completion of an IO barrier on fsync is broken behavior, like you claim.

I am not sure how MacOS APFS behaves. I imagine that additional testing that takes into account the nuances in semantics would be able to clarify that. If it behaves like HFS+, it is broken.

Edit: Upon further examination and comparing notes with the MacOS ZFS driver team lead, it seems that HFS+ is syncing more than requested when F_FULLFSYNC is used, but less than the entire filesystem. You are fine treating it as a Linux fsync. It is close enough.
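
To make the Linux-side distinction above concrete, a small sketch (assumes Linux with glibc; syncfs() needs _GNU_SOURCE, and the filename is made up):

  #define _GNU_SOURCE
  #include <fcntl.h>
  #include <stdio.h>
  #include <unistd.h>

  int main(void) {
      int fd = open("important.db", O_WRONLY | O_CREAT, 0644);
      if (fd < 0) { perror("open"); return 1; }

      /* fsync(): write back this file's dirty data and metadata, then flush
         the drive cache; other dirty files on the filesystem are left alone. */
      if (fsync(fd) == -1) perror("fsync");

      /* syncfs(): write back everything dirty on the filesystem containing
         fd, i.e. a superblock-level operation. */
      if (syncfs(fd) == -1) perror("syncfs");

      close(fd);
      return 0;
  }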


> That's not defence. It fails the principle of least-surprise.

Welcome to C APIs in general, and POSIX in particular.


> why would somebody think to look up the docs

It seems reckless to me to not do this when you're interacting with the filesystem using low-level APIs (i.e not via Swift/Obj-C).


Linux only stopped doing the clearly wrong thing in 2008 or so iirc.

It is still dumb that there's a definition of fsync() that does not sync :-/


I’d argue maybe .5% of people are working on something where this is even close to being a concern. Those people probably know what they need to use.

Apple doesn’t need to defend anything.


I am sick of this callous and capricious disrespect for users and their data, rampant throughout this wanky industry.

Do lawyers use Apple computers? Do they work on important documents relating to life and death?

Some people have literally been executed because developers couldn't do their job properly. People have been sent to jail for decades because developers fucked up in the British Post Office scandal.

Average people live in a dangerous world: they work with documents about their financial wellbeing. They live in oppressive countries where being gay is punishable by death. They drive 2-ton death machines. And now that we have put computers in places where life and limb depend on them, we are responsible for doing the job properly; that's why we get paid.


As the article mentions, on laptops, this is pretty clever. On desktops though...

Perhaps desktop Macs should be equipped with internal batteries to flush to disk in case of power loss?

I think I heard some enterprise motherboards/controllers/computers did just that, given the upside in normal operation.


These machines are actually low-power enough that you could implement a last-gasp flush mechanism. The Mac Mini already survives 1-2 seconds without AC power (at least if idle). You could plausibly detect AC power being yanked and immediately power down all downstream USB/TB3 devices and the display (on iMacs), freeze all CPUs into idle, and have plenty enough reservoir cap to let NVMe issue a flush.

But they aren't doing that. I tested it on the Mac Mini. It loses several seconds of fsync()ed data on hard shutdown.

This does require a last-gasp indication from the PSU to the rest of the system, so if they don't have that, it's not something they could add in a firmware update.


I mean the ATX standard has this signal built in, so Apple could just copy it:

https://en.wikipedia.org/wiki/Power_good_signal


> The ATX specification requires that the power-good signal ("PWR_OK") go high no sooner than 100 ms after the power rails have stabilized, and remain high for 16 ms after loss of AC power, and fall (to less than 0.4 V) at least 1 ms before the power rails fall out of specification (to 95% of their nominal value).

I don't think that quite works for the purpose. What you'd want is a second signal that goes low as soon as possible after loss of AC power.

My reading here is that PWR_OK going low is an indication that the PSU has stopped providing good power, and the CPU must shut down immediately, or it might miscompute something due to low voltage. At this point you absolutely don't want to do any last-minute writing, you'd be risking corruption.

What you need here is an early warning signal that you can react to while the PSU is still coasting on the internal capacitors.


16ms is just shy of one AC cycle at 60Hz and less than one AC cycle at 50Hz.

I would hazard a guess that 16ms is the physical limit for most consumer hardware (and maybe commercial computing) to detect mains loss.

Of course there is industrial hardware that can detect quicker than this but it would add a LOT of cost for arguably little gain, or something that could be solved in another manner.


> I would hazard a guess that 16ms is the physical limit for most consumer hardware (and maybe commercial computing) to detect mains loss.

Doubtful. 16ms is an awfully long time these days. There's no reason why you couldn't detect power loss much sooner, given a good input signal. The concept also gets used quite often, in the form of SSRs with zero crossing detection. Those are used for dimmers.

The reason is likely related to the awful waveforms produced by some UPSes and inverters:

https://www.christidis.info/images/blog/scope_20.png

Unlike a nice sine wave, those spend a good while hovering near zero volts, so the PSU has to be able to tolerate that. Detecting loss of power sooner in this case isn't a question of cost; it's that you don't have a good signal to do the detection on in the first place.


> Unlike a nice sine wave, those spend a good while hovering near zero volts, so the PSU has to be able to tolerate that.

That wave chart was atrocious. I wonder if the extra load on the DC-side caps leads to them having lower life expectancy than the ones in a PSU attached to a proper power grid?


Power OK signals are used to prevent latch ups in silicon due to power glitches. The signals will route to power management ICs to ensure a full reset with proper bringing up of the power rails on any power glitch.


>But they aren't doing that. I tested it on the Mac Mini. It loses several seconds of fsync()ed data on hard shutdown.

That's unfortunate. My Mac Mini crashes every other night during sleep. I guess I'm going to have to shut it down to avoid any data corruption.


It should be flushing the drive cache on sleep. This is mostly an issue for sudden AC power loss.


Ah, thanks! That's good to know.


Why does it crash? Mac Minis are some of the most reliable machines on the market, in my experience. Maybe a faulty unit?


Shitty software? My 2018 Mac Mini would crash every single time going to sleep on the last version of Mojave. I'm not alone in this as there's huge threads on MacRumors and Apple's support forum about it. Apple's "fix" was to just update to Catalina which indeed fixes it but doesn't really help if you want to run 32 bit software. Wouldn't surprise me if they did something similar again.


It has started crashing the night after I upgraded to macOS 12.2.0. The latest update to 12.2.1 hasn't fixed it. I'm pretty sure it's not hardware related as I had no issues before the OS upgrade.

Edit: Here's the first line of the crash log (which I'm sending to Apple every time):

  panic(cpu 3 caller 0xfffffe0023be8be0): [data.kalloc.16]: 
  element modified after free (off:0, val:0x0000000000000030, sz:16, ptr:0xfffffe2fffc9bb00)
Looks like a use after free bug.


This is why I'm still on Catalina. I used to be in the "trail by one point release" mode on macOS, now I'm in the "trail by 2 major releases" camp.


I get regular crashes restoring from sleep on my 2014 Mac Mini (running Monterey)


Even on laptops I feel uncomfortable. My macOS freezes or kernel panics on me from time to time.


I believe the NVMe driver has a kernel panic hook; I would hope it is used to issue a flush.

OTOH, if you have watchdog timeouts (I've seen this from bad drivers), those would certainly not give the kernel a chance to do that.


What would you implement in Asahi? Would you follow Apple's approach and defer flushes, implementing a kernel panic hook and having some kind of F_FULLFSYNC, or would you just keep Linux's current implementation?


We're probably going to have a knob to defer flushes (but still do them, unlike Apple, after a max timeout) that will be on by default on laptops, and make sure panics flush the cache if we can. Also apparently we need to do something for the power button too, as I just tested how macOS handles that. There is a warning before the system shuts down but we need to listen to it. Same with critical battery states.


Then I misunderstood. Do you mean that Apple doesn't implement ANY timeout? So they only flush when the cache is full or when a shutdown routine has started?


They flush the cache when something requests the cache be flushed; I don't know if there is a timeout, because presumably it's not difficult for some random process to issue a FULLFSYNC and flush everything prior as a side-effect (the flush is global). But I've seen at least 5-10 seconds of data loss from drive cache loss on the Mac Mini, so if they do do deferred flushes the timeout is longer than that.


WTF, that is worse than I thought then. That's the dirtiest hack I've read; it's of very low quality for a company like Apple. That's something I'd expect from a OnePlus device, not from a full-fledged MacBook.


When do off-the-shelf NVMe controllers flush their internal DRAM buffer? I presume that happened after a timeout, even if the OS does not issue a NVMe flush command.

Does Apple implement the NVMe spec on their controller, i.e. do they indicate "Volatile Write Cache"?


Oh geez, deliberately issuing commands to storage after your kernel panics? It just keeps getting better :(


>Perhaps real macs should be equipped with internal batteries to flush to disk in the case of power loss?

Or just add a UPS?


Does the disk get flushed in case of a kernel panic?


How exactly is this clever? Maybe on some toy, not on a workstation!


Hmm, as slow as that is, does the controller support VERIFY? Because there is FUA in verify which forces the range to flush as well, and it could be used as a range flush. Depending on how they implement the disk cache, it's possible that is faster than a full cache walk (which is likely what they are doing).

This is one of those things that SCSI was much better at: SYNC CACHE had a range option which could be used to flush, say, particular files/database tables/objects/whatever to nonvolatile storage. Of course, out of the box Linux (and most other OSes) don't track their page/buffer caches closely enough to pull this off, so fsync(fileno) is closer to sync(). So, few storage systems implemented it properly anyway.
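
For what it's worth, the closest thing Linux exposes to a ranged writeback is sync_file_range(), and per its man page it only pushes page-cache ranges to the device without issuing any drive-cache flush, so it is not a durability primitive on its own. A rough sketch (the filename and range are made up):

  #define _GNU_SOURCE
  #include <fcntl.h>
  #include <stdio.h>
  #include <unistd.h>

  int main(void) {
      int fd = open("big.log", O_WRONLY | O_CREAT, 0644);
      if (fd < 0) { perror("open"); return 1; }

      /* Start and wait for writeback of bytes [0, 1 MiB) only. No FLUSH/FUA
         is sent to the drive, so this smooths I/O rather than guaranteeing
         the range survives power loss. */
      if (sync_file_range(fd, 0, 1 << 20,
                          SYNC_FILE_RANGE_WRITE | SYNC_FILE_RANGE_WAIT_AFTER) == -1)
          perror("sync_file_range");

      close(fd);
      return 0;
  }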

The choice of ignoring flushes vaguely makes sense if you assume the Mac's SSD is in a laptop with a battery. In theory the disk cache is then non-volatile (and this assumption is made on various enterprise storage arrays with battery backup as well, although frequently it's a controller setting). But I'm guessing someone just ignored the case of the Mac Mini without a battery.


I assumed the barrier was doing something like that, but marcan was able to inspect the actual NVMe commands issued and has confirmed that's not the case.

But that would be awesome, especially with these ever growing cache capacities.


Deferring flushes on the NVMe level could also corrupt a journaling FS itself, not just the contents of files written with proper fsync incantations.


Indeed, though that is somewhat rare. For our distro, I would opt to enable it by default on laptops (which is quite safe) and disable it on desktops.


APFS at least has metadata checksums to prevent that. However it does not do data checksums (weird decision...), despite being a CoW fs with snapshotting, similar to ZFS and btrfs.


They rely on the hardware storing checksums and on the protocols using checksums to prevent data corruption at all levels.


Everyone else does that as well and it's not a substitute for end-to-end data integrity.


What confuses me about this is why they are so slow with F_FULLFSYNC, since that's the equivalent of what non-Apple NVMe drives do under, say, Linux, and they manage to be much faster.


The OS does not matter; it's strictly about the drive. macOS on a non-Apple SSD should be equally fast with F_FULLFSYNC.

Indeed, I would very much like to know what on earth the ANS firmware is doing on flushes to make them so hideously slow. We do have the firmware blobs (for both the NVMe/ANS side and the downstream S5C NAND device controllers), so if someone is bored enough they could try to reverse engineer it... it also seems there's a bunch of debug mode options, so maybe we can even get some logs at some point.


Drives are known to ignore that hint... That's why you should use vendor-approved hardware if such things matter to you.


Variants of the FSYNC story have been going on for decades now. The framing varies, but typically somebody is benchmarking IO (often in the context of database benchmarking) and discovers a curious variance by OS.

On NVMes I wonder whether this really matters, but it's a serious issue on spinning disks: do you really need to flush everything to the disk (and interrupt more efficient access patterns)?


> On NVMes I wonder whether this really matters, but it's a serious issue on spinning disks: do you really need to flush everything to the disk (and interrupt more efficient access patterns)?

That depends on the drive having power loss protection, which comes most of the time in the form of a capacitor that powers the drive long enough to guarantee that its buffers are flushed to persistent storage.

Consumer SSDs often do not have that, so flushing is really important there, at least if your data, or no FS corruption is important to you.

Enterprise SSDs almost always have power loss protection, so there it isn't required for consistency's sake. In-flight data that hasn't hit the block device yet is naturally not protected by that, but most filesystems handle that fine by default.

Note that Linux, for example, does a periodic writeback every 30s by default, independent of caching/flush settings, so that's normally the upper limit on what you'd lose; depending on the workload, that can still be a relatively long time frame.

https://sysctl-explorer.net/vm/dirty_expire_centisecs/
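
If you want to check that writeback window on a particular Linux box, the tunable is readable from /proc; a tiny illustrative sketch:

  #include <stdio.h>

  int main(void) {
      /* vm.dirty_expire_centisecs: age (in 1/100 s) after which dirty
         page-cache data is considered old enough for periodic writeback. */
      FILE *f = fopen("/proc/sys/vm/dirty_expire_centisecs", "r");
      if (!f) { perror("fopen"); return 1; }

      long centisecs;
      if (fscanf(f, "%ld", &centisecs) == 1)
          printf("dirty data is written back after roughly %.1f s\n",
                 centisecs / 100.0);
      fclose(f);
      return 0;
  }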


Those VM tunables are about dirty OS cache, not dirty drive cache. If you fsync() a file on Linux it will be pushed to the drive and (if the drive does not have battery/capacitor-backed cache) flushed from drive cache to stable storage. If you don't fsync() then AIUI all bets are off, but in practice the drive will eventually get around to flushing your data anyway. The OS has one timeout for cache flushes and the drive should have another one, one would hope.


As you noted, Apple's fsync() behavior is defensible if PLP is assumed. Committing through the PLP cache isn't how these drives are meant to operate - hence the poor behavior of F_FULLFSYNC.

But this isn't specific to Macs and iDevices. Some non-PLP drives also struggle with sync writes on FreeBSD [1]. Most enterprises running RDBMS mandate PLP for both performance and reliability. I understand why this is frustrating for porting Linux, but Apple is allowed to make strong assumptions about how their hardware interoperates.

[1] https://www.truenas.com/community/threads/slog-and-power-los...


There is no PLP. If you yank power you lose up to 5-10 seconds of disk cache (fsynced files that weren't F_FULLFSYNCed). I tested this. On macOS.


I guess we expected a marvelous interplay of hardware and software, but all we got was fudged numbers.


On my Linux (at least to my SATA drive) fsync() issues a "FLUSH_CACHE" to the drive too.


On this NVMe, flushing is slower than on some spinning disks, so it apparently matters.


Yes, I would have skipped the fsync thing, which carries a lot of baggage, and concentrate on this.

Btw, are you sure those spinning disks are actually flushing to rust? Caches all the way down... ;-)


I mean, typical seek time on rust is O(10ms) and these controllers are spending 20ms flushing a few sectors. Obviously rust would do worse if you have the cache full of random writes, though. The problem here is the huge base cost.


Think about what's going on in the controller running any page access SSD.

You have wear leveling trying to keep things from blowing holes in certain physical pages. In certain cell architectures you can only write to pages that have previously been erased. Once you do write the data to the silicon... it's not really written anyway, because the tables and data structures that map that to the virtual table the host sees on boot also have to be written.

It is entirely reasonable that a system that does 100k honest sustained write I/O per second would come to its knees if you're insistent enough to actually want a full, real, power cycle proof, sync.

To do an actual full sync, where it could come back from power off... requires flushing all of those layers. Nothing is optimized to do that. I'm amazed that it can happen 40 times per second.

It's possible that you could speed this up a bit, but somewhere there's an actual non-wear leveled single page of data that tells the drive how to remap things to be useful... I strongly suspect writing that page frequently would eat the drive life up in somewhere between 0.1 and 20 million cycles. After that point, the drive would be toast.

I agree with the other thread that actually flushing is likely to be a very, very well guarded bit of info.


This sounds like laptops are fine, but iMacs and Minis are effed.

Curious, what's the real world risk of full OS level corruption and not just data loss?


Good question. I just started up a loop doing USB-PD hard reboots on my MBA every 18 seconds (that's about one second into the desktop with autologin on, where it should still be doing stuff in the background). Let's see if it eats itself.


Famous last words


This is just a test machine I also sometimes use as a dumb terminal around the house; I'm not going to cry if the OS eats itself :P


Hopefully the ssd doesn’t either though, bricking it would be hilarious but not ideal.


Finding out if a DFU restore can recover a corrupted SSD storage would be an interesting test in and of itself!

But to be honest, if I end up really bricking a machine for science, that will be worth it for the information it gives us. Obviously I'm not trying to destroy my hardware, but I'm very grateful that I can afford it if it happens thanks to all the support I'm getting from folks for the project.


How can we get notified about your results?


Laptops are fine unless your battery has issues and you get occasional power losses, which seems to be not too uncommon for third-party batteries (which themselves are not too uncommon since Apple will charge you an arm and a leg to replace half your laptop if you have a defective battery).


Bad batteries generally allow for last-gasp handling, and I've definitely seen the SMC throw a fit on some properties a few seconds before shutdown due to the battery being really dead. Not sure if macOS handles this properly, but I'd hope it does, and if it doesn't they could certainly add the feature. It would be quite an extreme case to have a battery failure be so sudden the voltage doesn't drop slowly enough to invoke this.


A fair fraction of the bad batteries I have seen have not behaved like this. Things like immediate power failure on disconnecting AC power, or claiming to be at 30% and then dying, or denying the existence of the battery altogether (two of these have happened to me personally—one at the ripe age of four months rather than due to age—and three or four to other family members). It’s certainly more common for them to just fade fairly rapidly to zero and die there, but it’s by no means rare for them to spontaneously fall over.


We're talking different timescales here. All you need is one second or so to command the NVMe controller to flush, and killing other power consumers in the meantime would buy you more time by reducing load, possibly even giving you several minutes the way batteries work (they tend to fall over under load when defective/dead). What may visually appear as power suddenly failing isn't necessarily so at the scale of voltage threshold interrupts and PMICs.

What usually happens is battery internal resistance is too high to sustain a given power load, so once load crosses a threshold the system goes into a spiral of doom increasing current as battery voltage decreases and you end up in a shutdown. That's the "30% and suddenly 0% or a shutdown" scenario. But if you catch it before it's too late, you can just stop consuming power and let the NVMe controller flush.


The case I have in mind where it would suddenly die around 30% would happen around that point regardless of load, even asleep, after following a sufficiently typically linear discharge curve up to that point. Maybe the power management system gets a fraction of a second’s notice, I don’t know; but it wasn’t a 30% plummeting to zero over the course of ten or thirty seconds, or even a “30%; no—0%; no—dead” case, which seem to be the much more common failure modes. As for the “pull the AC power and it instantly dies” cases, I’m a layman in battery matters, with no more than high school electronics, but I’d be surprised if there’s enough in there for it to do anything—those are cases where either it literally has no battery to draw on (because it’s electronically dead), or thinks it has a battery but discovers as soon as it tries to draw on it that it effectively doesn’t actually.


If it's literally dying at 30% with no warning, it's either the battery polling being too slow (keep in mind the UI will usually only refresh once a minute or so for these things; the power management system has faster stats), or the charge estimation being way off. There's very little reason for a battery to drop from true 30% SoC to completely dead, without first going into a power draw spiral of doom which you can revert if you stop consuming as much power.


My personal experience with 3 Apple devices:

“30% to 0” and “Pull AC and it instantly dies” are typically a combination of load and device temperature. High CPU/GPU usage, high brightness, 3G/LTE usage, and cold temps and the device doesn’t have a chance.

It’s been somewhat fascinating to monitor power usage in this really crude way. TikTok on iOS, for example, uses so much power that it’s the most likely to cause the device to shut off. FB Messenger is in the top 5. Some of Apple’s background processes will also cause it, as will paging memory to disk.

There’s another bit of information that will not surprise many people on HN: high-amperage charging will cause the battery percentage to be “more wrong”. Devices will report 45% or higher and still die as if they were reporting 30%. Charging at 500mA will not only make it “more correct”, but will typically mean that a device will not suddenly die until it’s in the single digits.

This is still n=1 of course.


iOS doesn’t. A bad battery makes it think it has more time than it does, and cleanup tasks can get killed just as they start.


Does anyone here run a desktop Mac without a battery backup device?

All of my Macs are either laptops or have a hardware backup device, so unlikely a write would be lost due to power failure (unless backup device failed which could happen).


Sure.. last power failure was like 4 years ago and the one before that was also measured in multiple years.

Back when I still used a UPS down here, it was usually the UPS that died and triggered the power failure. So I stopped investing in a UPS.


Where I live the power is quite dirty, so even when power losses are measured in years I invest in line-filtering UPSes to extend the life of my systems.

I even lost a MBP to a light flickering event with 0 power loss. Fried the charging circuit straight through the original power brick.


Wait, why are iMacs and Minis affected more? (I read the twitter thread; I'm not seeing why.)


Laptops have batteries, so an AC power failure doesn't mean they immediately crash: they just keep running on battery until the battery gets low, at which point the system cleanly hibernates.


They're dependent on external power, which can acutely fail.


not battery powered


As a laptop user I would probably opt to make the same choice as Apple here. I like the idea mentioned to allow a tunable parameter to only allow ever losing 1 second of data.

Although, I also have the seemingly rare opinion here that ECC ram doesn't really matter on a laptop or desktop.


It's not only losing a couple seconds of data. Write ordering does not work, meaning journals don't. You get a possibility of silent data corruption.


But Apple could quite easily fix write ordering.


NVMe even allows making queues write-through, so e.g. the kernel/FS driver could access the drive via a safe queue that always gets written out. You can also prioritize queues to lower the chances of important data being lost, though Apple seems to be super aggressive on caching and the drives tend to keep some written data in cache for quite long intervals.
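
On the Linux side, the rough analogue of per-write durability without a full cache flush is an O_DSYNC/RWF_DSYNC write, which the block layer can satisfy with an FUA write where the device and filesystem support it. A hedged sketch (needs Linux 4.7+ and glibc 2.26+ for pwritev2()/RWF_DSYNC; the filename is made up):

  #define _GNU_SOURCE
  #include <fcntl.h>
  #include <stdio.h>
  #include <sys/uio.h>
  #include <unistd.h>

  int main(void) {
      int fd = open("journal.bin", O_WRONLY | O_CREAT, 0644);
      if (fd < 0) { perror("open"); return 1; }

      const char rec[] = "commit record";
      struct iovec iov = { .iov_base = (void *)rec, .iov_len = sizeof(rec) - 1 };

      /* RWF_DSYNC gives this single write O_DSYNC semantics; on stacks that
         support FUA it can be honoured with a forced-unit-access write
         instead of a full drive-cache flush. */
      if (pwritev2(fd, &iov, 1, 0, RWF_DSYNC) < 0) { perror("pwritev2"); return 1; }

      close(fd);
      return 0;
  }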


> only allow ever losing 1 second of data

For a database this means that every transaction will take a minimum of 1 second, otherwise you can't guarantee durability.


You think it's okay that restarting your PC leads to data loss or corruption? That's basically a product killer for me. I reboot my laptop every day.


You presumably don't reboot your laptop by connecting a USB-PD gadget that issues a hard reset. A normal OS reboot is fine, that will flush the cache.

The most common situation where this would affect laptops, in my experience so far, would be a broken driver causing a kernel lockup (not a panic) which triggers a watchdog reboot. That situation wouldn't allow for an NVMe flush.


For products like the Mac Mini, which don’t have a battery, does this mean that a loss of mains power will cause data loss? Because brownouts do happen occasionally…


Yes. I've tested yanking the power and can easily see 5 seconds of data loss for data that was fsync()ed (but not full synced). I'm not sure yet if corruption due to reordering is also possible, but it seems likely.


Depends on what exactly counts as a hard reboot. I don't reboot my laptop by issuing a USB-PD command, but I do by holding the power button.


I just tested that. Holding down the power button invokes a (somewhat special) btn_rst kernel panic before it has a chance to invoke a true hardware reset, and kernel panics involve an NVMe driver hook which I'm pretty sure issues a flush. Should be safe.

At least re: this issue; it's still a bad idea because it's only safe if all software is written following data integrity and flush rules to the letter, and most software isn't. You're eventually going to run into issues on any OS by doing that, because most software doesn't get this right unless it's a database. And you're still going to lose data that's in buffer cache, I'm pretty sure that won't get flushed.


See, that's a forced shutdown, a last resort measure; it's using a sledgehammer to tap in a nail. You shouldn't do that as a habit, even if this particular optimization issue wasn't a thing.

I mean I grew up diligently turning off my PC by parking the disk and using the various operating system level shutdown procedures. Nowadays I smack the off button, but that still just triggers the OS shutdown procedure. I don't turn my Mac off as a rule, its sleep mode actually works. ish.


Care to explain why?


If they're like me: outside of a software update I only reboot when the machine is not responding, at which point hard reboot is faster and more robust. I recognize it's not ideal, but I also don't think it's reasonable for the system to ever get to a point where I should be wanting to restart to "fix" it - and I would think it is a serious bug if doing so ever corrupted the system or lost any "saved" data.


systemd takes 2 minutes to shut down and I never found a way to resolve that.


Linux Magic SysRq + R S E I V B key chord will immediately shut down while still properly flushing disk cache and such. A bit annoying to enter, but a handy tool to have in your toolbox.


Those are not the right keys, and not in the right order. You should not flush caches before you have terminated as many processes as possible cleanly. And with B you are rebooting at the end, not shutting down.

REISUB for a somewhat safe EMERGENCY reboot and O instead of B at the end for shutdown.
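
For completeness: the same emergency actions can also be triggered without the keyboard by writing the command characters to /proc/sysrq-trigger as root (assuming sysrq is enabled). A rough sketch that only does the sync, remount-read-only and power-off steps, skipping the process-termination ones:

  #include <stdio.h>
  #include <unistd.h>

  /* Write one SysRq command character to /proc/sysrq-trigger (root only). */
  static int sysrq(char c) {
      FILE *f = fopen("/proc/sysrq-trigger", "w");
      if (!f) { perror("fopen /proc/sysrq-trigger"); return -1; }
      int rc = (fputc(c, f) == c) ? 0 : -1;
      fclose(f);
      return rc;
  }

  int main(void) {
      sysrq('s');   /* S: emergency sync */
      sleep(2);     /* crude: give writeback a moment to complete */
      sysrq('u');   /* U: remount filesystems read-only */
      sysrq('o');   /* O: power off */
      return 0;
  }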


Oh hmm, you're right. I've always done it with the other order and never had problems. Forget where I learned it that way...


Shouldn't Mac OS issue flush on restart, as it does on sleep?


1) A normal restart doesn't have this issue, at all.

2) Why are you rebooting a laptop daily? My uptime on my MacBook Pro averages 30-60 days. There's zero reason to reboot any modern OS daily.


> There's zero reason to reboot any modern OS daily.

- I use Arch; I like to avoid accumulating too many major updates between reboots.
- For a time I was facing a bug that resulted in a black screen of death after resuming from sleep.


I wonder if you hit the drive hard enough, so that the cache gets filled, does the performance degrade by that same magnitude?


In my use. Yes. I didn’t realize this was the reason until I saw this thread, and now I’ve tested it. Luckily, I don’t do massive data transfers nor do I do any large data work. When I got my M1 Mac Mini, however, I did and had immediate buyer’s remorse. I thought that I/O must be terrible on this thing, and I felt cheated. After the initial stand-up, I wasn’t so angry. For most tasks, it’s faster than my old TR4 1950X.

