
APFS and HFS+ Benchmarks on 2017 Macbook Pro with MacOS High Sierra - BafS
https://malcont.net/2017/07/apfs-and-hfsplus-benchmarks-on-2017-macbook-pro-with-macos-high-sierra/
======
bluedino
To make a long story short, here are the results:

    
    
      Speed in MB/s
                 HFS+     HFS+ Encrypted  APFS     APFS Encrypted
      1M WRITE   1375     1373            1372     933
      1M READ    2446     2340            2162     1304
      4K WRITE   852      797             502      378
      4K READ    2106     1486            2156     1001

~~~
ringaroundthetx
Conclusions: higher is better; APFS is performing poorly out of the gate;
let's see if this materially impacts the 4K video editing you routinely do;
expect improvements with future updates.

------
jamescostian
This is using a beta of High Sierra from July. I'm very curious to know what
the speeds are now that High Sierra is officially out.

Also, it appears that this test was all done on that High Sierra beta - I
think it would be helpful to have HFS+ numbers from Sierra (the predecessor of
High Sierra). But I am very happy that this article really discusses how
encryption can detract from speed. I have my Mac's internal drive encrypted,
and I'm always interested to see whether updates will slow that encryption
down or not.

~~~
matthewbauer
I would actually be surprised if they change at all. They usually only do bug
fixes between beta and release.

~~~
Fnoord
Does this beta have debugging enabled (in the kernel, for example)? Does that
specifically affect the performance here?

------
microcolonel
Funnily enough, HFS+ wasn't a particularly well-performing filesystem to begin
with (compared with, say, Ext4, which is similar in terms of features).

~~~
achamayou
It’s considerably older than Ext4 though.

~~~
microcolonel
Ext2 (closest equivalent to the first version of HFS+, which shipped in 1998
with no journaling support on Mac OS 8.1) was released in 1993.

Ext3 could be considered an equivalent jump to the first HFS+ version with
journaling. Both HFS+ and Ext3 extended a previously non-journaling filesystem
with journaling.

Ext4 is fully compatible with Ext3 images (and is in fact the default codebase
for mounting Ext3 volumes on Linux), but Ext3 is not fully compatible with all
Ext4 images (though potentially compatible with some). Likewise, HFS+ tends to
be backwards compatible with images, but old HFS+ is not forward compatible.

I think it's a fairer comparison than you might imagine.

------
beagle3
Semi off topic, but... is there a way in High Sierra to stop Finder from
littering every directory with thumbnails and other files?

There was (is?) a configuration that stops it for network paths, and 10.10
used to have Asepsis (a kext that redirected those files), but as far as I
know there is no way in 10.11 or 10.13.

~~~
Fnoord
Yes, that configuration still works [1].

DeathToDSStore [2] works as well, but like Asepsis it requires you to disable
SIP, and that's very much not recommended!

I wonder if it's possible to hook into Spotlight in real time (with mdfind?)
and just delete them right after they're created. Together with disabling it
on network shares, that should be "good enough" to not see them. Might be
problematic with removable media such as USB sticks, though. If you're
syncing/unmounting those via the command line, consider a find with xargs [3]
on that media before unmounting.

I haven't looked at Blue Harvest as already suggested. If that still works
(without disabling SIP) I'm curious how they do that.

[1] defaults write com.apple.desktopservices DSDontWriteNetworkStores true

[2]
[https://github.com/snielsen/DeathToDSStore](https://github.com/snielsen/DeathToDSStore)

[3] find /path_to_USB_media -name .DS_Store -print0 | xargs -0 rm

~~~
gumby
> I wonder if its possible to hook into Spotlight real-time (with mdfind?) and
> just delete them right after its created.

Not Spotlight but the File System Events API, which was designed for precisely
this kind of thing.
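
A minimal sketch of that approach, assuming the third-party `fswatch` CLI is
installed (on macOS it is backed by the File System Events API); the function
name and watched path are illustrative:

```shell
# Delete .DS_Store files as soon as they appear under the given directory.
# Requires the third-party `fswatch` CLI; -0 emits NUL-separated paths so
# filenames with spaces or newlines are handled safely.
watch_dsstore() {
    fswatch -0 "$1" | while IFS= read -r -d '' path; do
        case $path in
            */.DS_Store) rm -f -- "$path" ;;
        esac
    done
}
# e.g. run in the background:  watch_dsstore "$HOME" &
```

Note this deletes the file only after Finder has written it, so Finder may
recreate it on the next visit; it keeps directories clean rather than
preventing the writes.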

------
jhack
With High Sierra now officially out, it would be far more beneficial to run
these tests now, as opposed to on a beta from July.

~~~
mappu
Here is another, more recent benchmark showing opposite results:
[https://www.phoronix.com/scan.php?page=news_item&px=macOS-APFS-HFS-Benchmarks](https://www.phoronix.com/scan.php?page=news_item&px=macOS-APFS-HFS-Benchmarks)

~~~
BugsJustFindMe
It's hard to assess that article at the moment, because Larabel's conclusions
don't always correspond to what is shown in the graphs, which means at least
one of the two must be wrong. This is pointed out early in the comments, but
there hasn't been a response yet.

------
nodesocket
I just upgraded my old iMac (Mid 2010) to High Sierra which has a custom
upgraded drive (Crucial 500GB). Running encrypted (FileVault) on previous
Sierra and new High Sierra.

I am not seeing any difference in terms of read and write performance
according to Blackmagic Disk Speed Test.

Solid 250 MB/s read and write for both HFS+ Encrypted and APFS Encrypted.

The caveat is that I know the iMac machine is old. I am in the process of
upgrading my one month old 2017 MacBook Pro with Touchbar to High Sierra and
will return with those results to see if indeed APFS encrypted shows a
significant slowdown on newer hardware.

~~~
diwu1989
You're saturating your SATA2 port; the bottleneck isn't the filesystem in your
case, it's the physical hardware.

~~~
nodesocket
My new MacBook Pro 2017 with Touchbar is almost done with the upgrade to High
Sierra. Will advise once that is complete.

------
khc
Are any of the dd tests actually valid? I don't see any options used that
would bypass the cache.

~~~
mitchty
dd, no, they're also setting bs=N which means they're testing the speed of
/dev/zero copying to that block size as well.

They also don't know if either hfs/hfs+/apfs have anything like zfs' "oh you
wrote an entire block of zeros, yeah I'm not doing that, I'll just mark the
block as all zeros" optimization.

Never use dd to benchmark file i/o! At least use something like ioperf which
has the ability to tell you more about what you're benchmarking. Using dd
without bypassing any vm caches is... misguided at best.
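
For illustration, a rough sketch of a less cache-friendly run, assuming
macOS's BSD dd and the macOS-only `purge` utility (file sizes are shrunk for
the example):

```shell
# Random data defeats any "all-zero block" shortcut a filesystem might take.
dd if=/dev/urandom of=testfile bs=1048576 count=64
sync                                # flush dirty pages to the device
sudo -n purge 2>/dev/null || true   # macOS-only: drop the unified buffer cache
# The read pass now has to hit the disk rather than the page cache;
# dd reports its bytes/sec figure on stderr.
dd if=testfile of=/dev/null bs=1048576
rm -f testfile
```

Even this only reduces caching effects; purpose-built tools such as fio or
iozone control caching explicitly and report latency distributions as well as
throughput.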

------
intellix
OS X has always had deep I/O issues with Docker. I was hopeful that the new
filesystem would fix them, but looking at these results it seems things are
going to be worse. Has anyone got any benchmarks there?

~~~
thinbeige
OT: Just develop on a remote machine with Linux and 100Mbit up and down via
SSH. Then Docker flies.

------
nvahalik
Beat me by 13 minutes! I suppose considering the benefits of APFS that the
perf decreases on writes are understandable and the reads are on par with
HFS+. Not bad.

------
noncoml
Given that macOS is primarily a personal OS, the slight decrease in
performance, which I am sure will be fixed as APFS matures, is completely
unimportant.

What is important, however, is that file names are finally case-sensitive! (I
know you could do the same with HFS+, but some applications didn't support it,
e.g. Adobe's.)

~~~
burnte
I've never liked case-sensitivity in filesystems, personally. I've always
found it to be annoying with no utility that I could discern. If main.h and
Main.h are truly different, I find it more useful to make the file name more
descriptive than just changing letters. Would you tell me why you find it
useful? Not an argument; I'd just like to know what other people do with that
feature.

~~~
coldtea
> _I've never liked case-sensitivity in filesystems, personally. I've always
> found it to be annoying with no utility that I could discern._

The utility is that the filenames are digitally unambiguous buckets of bytes
and we avoid 1000000 ways things can go wrong with case-insensitivity, plus
the huge performance penalties.

Sometimes catering to how the machine works makes things better than going
through hoops for marginal returns and complicating every part of the stack
with that technical debt.

> _If main.h and Main.h are truly different, I find it more useful to make the
> file name more descriptive than just changing letters._

Well, main.h and Main.h are visually different, aren't they?

~~~
eridius
Filenames are not buckets of bytes. They're human-readable strings. Treating
them as buckets of bytes is a great way to end up in very confusing
situations, like having two files named "über.png" with no way to visually
distinguish them.

~~~
coldtea
> _Treating them as buckets of bytes is a great way to end up in very
> confusing situations, like having two files named "über.png" with no way to
> visually distinguish them._

That's only because of the similar cluster-fuck that is Unicode combining
characters and normalization schemes.

One and only one unambiguous code per distinct visible glyph would have been
too much to ask...
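
For example, the same visible name can be two distinct byte strings, an NFC
and an NFD spelling of the same glyph; a small shell sketch (the octal escapes
spell out the UTF-8 bytes):

```shell
d=$(mktemp -d)
nfc=$(printf '\303\274ber.png')    # "über.png" with U+00FC, precomposed (NFC)
nfd=$(printf 'u\314\210ber.png')   # "über.png" as 'u' + U+0308 combining mark (NFD)
[ "$nfc" = "$nfd" ] && echo same || echo different   # prints "different"
touch "$d/$nfc" "$d/$nfd"
ls "$d" | wc -l   # 2 on a byte-preserving filesystem (e.g. ext4); HFS+ normalizes names
rm -rf "$d"
```

On a filesystem that stores names as raw bytes you end up with two entries
that render identically, which is exactly the confusing situation described
above; HFS+ avoids it by normalizing names on creation.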

~~~
michaelmrose
This situation is a non-issue in real life. Computers have no trouble
distinguishing them, and people don't create them.

~~~
coldtea
You'd be surprised. For one, they allow certain kinds of attacks.

