Conclusions: higher is better, and APFS is performing poorly out of the gate. Let's see if this materially impacts the 4K video editing you routinely do; expect improvements with future updates over time.
This is using a beta of High Sierra from July. I'm very curious to know what the speeds are now, when High Sierra is officially out.
Also, it appears that this test was all done on that High Sierra beta. I think it would be helpful to have HFS+ numbers from Sierra (the predecessor of High Sierra) as well. But I am very happy that this article really discusses how encryption can detract from speed. I have my Mac's internal drive encrypted, and I'm always interested to see whether updates will slow that encryption down or not.
Funnily enough, HFS+ wasn't a particularly well-performing filesystem to begin with (if compared with say... EXT4, which is similar in terms of features).
Ext2 (closest equivalent to the first version of HFS+, which shipped in 1998 with no journaling support on Mac OS 8.1) was released in 1993.
Ext3 could be considered an equivalent jump to the first HFS+ version with journaling. Both HFS+ and Ext3 extended a previously non-journaling filesystem with journaling.
Ext4 is fully compatible with Ext3 images (and is in fact the default codebase for mounting Ext3 volumes on Linux), but Ext3 is not fully compatible with all Ext4 images (though potentially compatible with some). Likewise, newer HFS+ tends to be backwards compatible with older images, but old HFS+ is not forward compatible.
I think it's a fairer comparison than you might imagine.
Semi off topic, but... is there a way in High Sierra to stop Finder from littering every directory with thumbnails and other files?
There was (is?) a configuration that stops it for network paths, and 10.10 used to have Asepsis (a kext that redirected those files), but as far as I know there was no equivalent for 10.11 or 10.13.
DeathToDSStore [2] works as well, but like Asepsis it requires you to disable SIP, and that's very much not recommended!
I wonder if it's possible to hook into Spotlight in real time (with mdfind?) and just delete them right after they're created. Together with disabling it on network shares, that should be "good enough" to not see them. Might be problematic with removable media such as USB sticks, though. If you're syncing/unmounting those via the command line, consider a find with xargs [3] on that media before unmounting.
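For the removable-media case, a minimal sketch of that find/xargs pass before ejecting (the mount point is a hypothetical example; adjust it to your stick's actual path):

```shell
# Hypothetical mount point for a USB stick; adjust as needed.
VOLUME="/Volumes/USBSTICK"

# Delete Finder metadata files before ejecting; -print0 / -0 keeps
# odd characters in path names from breaking the pipeline.
find "$VOLUME" -name '.DS_Store' -print0 | xargs -0 rm -f

# Equivalent without xargs:
# find "$VOLUME" -name '.DS_Store' -delete

# Then unmount (macOS):
# diskutil unmount "$VOLUME"
```

You could widen the `-name` pattern to catch the `._*` AppleDouble files Finder writes to non-HFS volumes too, at the cost of a small risk of matching legitimate files.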
I haven't looked at Blue Harvest as already suggested. If that still works (without disabling SIP) I'm curious how they do that.
Please note, DSDontWriteNetworkStores only prevents Finder from writing metadata files to network filesystems, as the configuration key name suggests. It does not prevent it from writing metadata on your local filesystems; you will either need to use something like DeathToDSStore (requires disabling SIP) or remove them regularly after creation.
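For reference, that network-shares knob is set via `defaults` (macOS only; it takes effect after logging out and back in, or after restarting Finder):

```shell
# Stop Finder writing .DS_Store files to network volumes (macOS).
defaults write com.apple.desktopservices DSDontWriteNetworkStores -bool true

# Restart Finder so the change takes effect.
killall Finder
```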
Blue Harvest doesn't stop the files from being created, but it automatically deletes them when it sees them. Haven't used it in years, but the release notes mention APFS as supported.
http://zeroonetwenty.com/blueharvest/
It's hard to assess that article at the moment, because Larabel's conclusions don't always correspond to what is shown in the graphs. That means that at least some of one or the other are wrong. This is pointed out early in the comments, but there hasn't been a response yet.
I just upgraded my old iMac (Mid 2010) to High Sierra which has a custom upgraded drive (Crucial 500GB). Running encrypted (FileVault) on previous Sierra and new High Sierra.
I am not seeing any difference in terms of read and write performance according to Blackmagic Disk Speed Test.
Solid 250 MB/s read and write for both HFS+ encrypted and APFS encrypted.
The caveat is that I know the iMac is an old machine. I am in the process of upgrading my one-month-old 2017 MacBook Pro with Touch Bar to High Sierra and will return with those results, to see whether encrypted APFS indeed shows a significant slowdown on newer hardware.
dd, no; they're also setting bs=N, which means they're testing the speed of copying from /dev/zero at that block size as well.
They also don't know whether HFS/HFS+/APFS have anything like ZFS's "oh, you wrote an entire block of zeros? Yeah, I'm not doing that, I'll just mark the block as all zeros" optimization.
Never use dd to benchmark file I/O! At least use something like ioperf, which can tell you more about what you're actually benchmarking. Using dd without bypassing the VM caches is... misguided at best.
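To make the caching point concrete, here's a sketch of a somewhat less misleading dd run (flag spellings differ between the BSD dd shipped on macOS and GNU dd, and the cache-drop commands are privileged):

```shell
# Write a file larger than RAM so the page cache can't absorb the whole
# write (bs=1m is BSD/macOS dd syntax; GNU dd spells it bs=1M).
dd if=/dev/zero of=testfile bs=1m count=8192

# Flush dirty pages so the write numbers include actual disk I/O.
sync

# Drop caches before the read pass, otherwise you're benchmarking RAM:
#   macOS:  sudo purge
#   Linux:  echo 3 | sudo tee /proc/sys/vm/drop_caches
dd if=testfile of=/dev/null bs=1m
```

Even then, /dev/zero remains a trivially generated, highly compressible source, which is the other objection above; a dedicated tool like fio or iozone sidesteps both problems.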
OS X has always had deep I/O issues with Docker. I was hopeful that the new filesystem would fix the issues there, but looking at the results it seems things are going to be worse. Anyone got any benchmarks there?
Beat me by 13 minutes! I suppose considering the benefits of APFS that the perf decreases on writes are understandable and the reads are on par with HFS+. Not bad.
Given that macOS is primarily a personal OS, the slight decrease in performance, which I am sure will be fixed as APFS matures, is completely unimportant.
What is important, however, is that file names are finally case-sensitive! (I know you could do the same with HFS+, but some applications didn't support it, e.g. Adobe's.)
I've never liked case-sensitivity in file-systems, personally. I've always found it to be annoying with no utility that I could discern. If main.h and Main.h are truly different, I find it more useful to make the file name more descriptive than just changing letters. Would you tell me why you find it useful? Not an argument, I'd like to know what other people do with that feature.
I didn't write it, but I ran into a problem on Windows when pulling a Git repo that had INSTALL (the instructions to the user) and install (a script that ran the installation).
For tons of languages other than English, case sensitivity is a more complicated affair, and handling it in the file system is a dumb idea. Further, since your system could interact with files created by a computer rather than a person, you want it to be able to deal with afile and Afile properly.
>I've never liked case-sensitivity in file-systems, personally. I've always found it to be annoying with no utility that I could discern.
The utility is that the filenames are digitally unambiguous buckets of bytes and we avoid 1000000 ways things can go wrong with case-insensitivity, plus the huge performance penalties.
Sometimes catering to how the machine works makes things better than going through hoops for marginal returns and complicating every part of the stack with that technical debt.
>If main.h and Main.h are truly different, I find it more useful to make the file name more descriptive than just changing letters.
Well, main.h and Main.h are visually different, aren't they?
Filenames are not buckets of bytes. They're human-readable strings. Treating them as buckets of bytes is a great way to end up in very confusing situations, like having two files named "über.png" with no way to visually distinguish them.
>Treating them as buckets of bytes is a great way to end up in very confusing situations, like having two files named "über.png" with no way to visually distinguish them.
That's only because of a similar cluster-fuck that is unicode compound characters and normalization schemes.
One and only one unambiguous code per different visible glyph would have been too much to ask...
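To make the über.png ambiguity concrete, here's a sketch: the same visible name built from NFC bytes (precomposed ü) versus NFD bytes (u plus a combining diaeresis). On a byte-preserving filesystem such as ext4 you get two visually indistinguishable files; HFS+ normalizes names to (a variant of) NFD on create, so both spellings collapse to one file.

```shell
# NFC: U+00FC as a single precomposed code point (bytes C3 BC, octal 303 274).
nfc="$(printf '\303\274ber.png')"
# NFD: 'u' followed by U+0308 combining diaeresis (bytes CC 88, octal 314 210).
nfd="$(printf 'u\314\210ber.png')"

cd "$(mktemp -d)"
touch "$nfc" "$nfd"
ls              # both entries render as "über.png"
ls | wc -l      # 2 on ext4; 1 on HFS+, which normalizes names on create
```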
As an analogy take the common text file. Would you like it if the read() and write() system calls looked at the file extension and refused to write non-ASCII or non-Unicode into it?
I would not like that. The flexibility to do whatever I want with those bytes is more important than guaranteeing correct text encoding.
If someone wants that for their system, make the user software do it, not the kernel.
> Sometimes catering to how the machine works makes things better than going through hoops for marginal returns and complicating every part of the stack with that technical debt.
You might be interested in reading "The Design of Everyday Things" by Norman.
If you are developing something for your own use, who cares. But for something that users (even technical users) are going to use, these marginal returns are what makes the difference between a pleasant or a frustrating experience. It's the kind of things that Apple users (used to?) expect in Macs.
>You might be interested in reading "The Design of Everyday Things" by Norman.
I read it 20+ years ago, as suggested by an HCI class at uni.
The point still stands, regardless of whether it's something for "one's own use" or not. The technical debt from such decisions like a non-case-sensitive fs and the piles upon piles of kludges required to sustain it affect everybody, including the "non technical users".
>these marginal returns are what makes the difference between a pleasant or a frustrating experience
Users can be taught a simple, unambiguous rule, and they'll never think twice about it. "Pleasant experiences" that come with performance penalties, bugs, and caveats, on the other hand, make them suffer more, whether they know the cause or not. At some point Clippy was somebody's idea to "delight the users" too.
>It's the kind of things that Apple users (used to?) expect in Macs.
And the kind of thing Apple itself decided it was not worth it.
I don't really see these piles of kludges, honestly. Windows and OS X do it without much fanfare (which is great, consistency between the two major desktop OSes), and it works as users expect.
Now, if we take the example in this thread (INSTALL and install), WHY ON EARTH did the programmer think it was a great idea to name files like that? Sure, INSTALL for a readme is a tradition in free/*nix software, but then s/he could rename the other file. It seems to me that what really affects users are these nonsensical decisions taken without ease of use and ergonomics in mind.
In 99.9% of cases (no pun intended), case sensitivity is useless and needlessly confusing, which is why you don't use variables named buffer and Buffer in the same scope (I hope), even if you can.
> And the kind of thing Apple itself decided it was not worth it.
Well, I'm still using a Mac because I think the whole experience is way more polished and consistent than any other Unix, or Windows. And I'm sure many here agree.
Good design is having the user's mental model of how a thing works and the actual mechanics be as close as possible, so that users aren't confused. Having afile and aFile be different files is clear and unambiguous. In languages other than English, case-insensitive matching isn't nearly as clear.
Case insensitivity is bad design by the standards established by Norman whose book I have also read.
"We don't need any more case-sensitive computers. It is now several decades since research on human errors proved the prevalence of description errors, where people easily confuse two situations that are virtually identical except for a small difference."
That's from Jakob Nielsen, of the Nielsen-Norman group, in which the Norman is that Norman we both referred to.
The context is case sensitivity in cookies, in an article talking about URLs. It includes no broader context, although it might be suggestive of one.
File systems pose different challenges than URLs. Ambiguity in file systems is wholly unacceptable, and it's more important to be simple and correct than to keep idiots from typing afile instead of Afile.
Also it can be bad design in the sense that Norman described even if Norman himself disagrees.
The context, but not the quote, which refers to computers and not cookies. Also the example that I didn't quote is referring to text editor commands, which clearly are not cookies, so the argument seems more general.
Of course Norman and Nielsen could be wrong, even if they are renowned experts of this field. But please consider that you might be wrong, too :)
These hypothetical non-technical users will also be baffled that main.h and Main.h aren't the same file (let's leave aside the question of what these non-technical users are doing with .h files in the first place).
Let me argue this another way. Nobody regards John Smith and john smith as different names.
I was of the impression that APFS would still default to case insensitive.
And if it does change the default to case-sensitive, wouldn't it still break apps like Adobe's and Steam, etc.? I mean, if their problem is that they refer to internal files with the incorrect case, what does it matter how the FS is implemented at a low level?
I never had any problems, but maybe I am not using the same applications. Can you give some examples? I guess video editing might be one, but I cannot think of many more.
Common applications like the Finder and the Dock query the filesystem when you open a context menu, among many other things that don't intuitively seem to be filesystem-dependent.
But then there are applications which have a reasonable excuse to be hitting the filesystem, like iTunes or Photos or Mail. They absolutely redline the filesystem on a regular basis. The basic speed of the flash storage in a MacBook Pro frequently masks these problems, but the filesystem works against it.
Perhaps my use case is a bit esoteric, but lack of sparse file support in HFS+ is mildly annoying. Sometimes I would forget and accidentally create a huge one, then I'd have to wait for the OS to finish zero-padding (it's non-interruptible) or fail with ENOSPC. Having a fast SSD helps. :-)
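For anyone who hasn't hit this, a sketch of the difference: seeking past EOF with dd writes no data, so a sparse-capable filesystem (APFS, ext4) just records a hole, while HFS+ has to zero-fill the whole range, which is the uninterruptible wait described above.

```shell
# Create a "1 GiB" file without writing any data: seek past EOF, write nothing.
dd if=/dev/zero of=big.img bs=1 count=0 seek=1073741824

ls -l big.img    # logical size: 1073741824 bytes
du -k big.img    # blocks actually allocated: ~0 on a sparse-capable filesystem
```

On HFS+ the same command triggers the zero-padding; the file ends up occupying the full gigabyte on disk (or the call fails with ENOSPC).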