
Meltdown patch reduces mkfile(8) throughput to less than 1/3 on OS X - mpweiher
http://blog.metaobject.com/2018/01/meltdown-patch-reduces-mkfile8.html
======
ghostcluster
I have a suspicion that APFS is partly to blame. When I saw benchmarks of APFS
vs. HFS+ [0, 1], it appeared that Apple had managed to transition to a
filesystem that is markedly slower at reads and writes than the 20-year-old
one it was replacing.

[0] https://malcont.net/wp-content/uploads/2017/07/apfs_hfsplus_speed_comparison_small4.png

[1] https://malcont.net/wp-content/uploads/2017/07/apfs_hfsplus_latency_with_scale_from_zero-1.png

~~~
mpweiher
As I wrote in the article, I already accounted for this. The raw combined
performance degradation was 4x; APFS itself seems to account for around 20%.
That is somewhat rough, as the APFS degradation may itself be non-linear.
However, others have measured the pure APFS costs and they are roughly in line
with what I saw:

https://www.macobserver.com/analysis/apfs-performance-lags-hfs/

~~~
tinus_hn
Could you please disable that annoying behavior where scrolling slightly left
or right swaps to a different article? It’s hard to imagine anyone actually
uses that, and it is extremely annoying.

~~~
FreeFull
That's a problem with the Blogger platform, it might not be possible to
disable that behaviour without switching to something else entirely.

~~~
vanderZwan
Have you ever heard of the expression "How do you know someone disables
javascript on the internet through an add-on? They'll tell you"?

No? Well, anyway, one draconian way to fix this is installing something like
uMatrix or NoScript. This particular website works fine without JS.

~~~
dawnerd
Likewise, you can just turn on reader mode or whatever the mobile browsers
call it. It should pull just the text out.

~~~
vanderZwan
I wonder if we couldn't just make an "open in reader mode" add-on

------
gok
This is an utterly pathological test case. mkfile makes a lot of syscalls,
each of which does very little work. Heck, you could have made the batch size
1 byte; that would really show a slowdown!

------
davej
Can someone who is more familiar with Meltdown speculate on this? Is this a
quick fix that is likely to be optimised over time and become less slow? Or is
this the raw trade-off, meaning current hardware will always suffer slowdowns
to this degree? Can an OS be rearchitected in ways that would mitigate the
performance loss?

~~~
jsheard
> Can an OS be rearchitected in ways that would mitigate the performance loss?

The performance loss comes from extra overhead on syscalls, so it could be
sidestepped by allowing programs to do more work per syscall.

At its simplest that could mean adding more syscalls that perform the same
operation over an arbitrarily long list of inputs (like Linux's sendmmsg), but
I would like to see kernels take inspiration from modern graphics APIs that
allow arbitrarily long lists of _arbitrary operations_ to be batched and
executed with a single syscall. GPUs had this stuff figured out years ago.

~~~
sanxiyn
Note that Red Hat patented batching syscalls...
[https://www.google.com/patents/US9038075](https://www.google.com/patents/US9038075)

~~~
bch
1) This is exciting, if not also brilliant.

2) SQL transactions as prior art? If they’re successful with their patent, I
sure hope there’s a way for BSDs to try it on.

~~~
pkaye
Red Hat has a patent promise that they will not enforce their patents against
any free software that makes use of them.
https://www.redhat.com/en/about/patent-promise

------
Bitcoin_McPonzi
Just as a reference point, I didn't see any measurable slowdown on mkfile
(New-Item) throughput on Windows 10 after applying the patch. I suspect this
may be more of a filesystem issue.

I also tried this:

    
    
        $f = new-object System.IO.FileStream c:\temp\test.dat Create, ReadWrite
        $f.SetLength(8GB)
        $f.Close()
    

with no speed difference between a patched and an unpatched system.

~~~
luckydude
So I'm not a Windows expert, but I've done filesystem work. If Windows is
reasonably smart, it will support files with unallocated blocks. Setting the
length to 8GB is not the same as writing 8GB on any reasonable filesystem.

I'd retry with a script that actually writes the data.

~~~
cmurf
NTFS and APFS support sparse files, HFS+ does not.

------
raimue
The headline is completely bogus. The author both switched the filesystem from
HFS+ to APFS and applied the Meltdown patch. The effect cannot be attributed
to just one of them without further testing.

~~~
netgusto
I applied the patch on both my systems (an iMac 2013 Core i7 on Sierra under
HFS+ that was upgraded to High Sierra for the occasion, and a MacBook Pro 2017
already on High Sierra under APFS), and I can confirm that _both_ my systems
are seriously and objectively slowed down, 4 days after the patch now.

From what I could observe, applications taking the most serious hit are
electron based (like VSCode, that went from smooth as silk to mildly sluggish)
and Safari.

Docker on Mac has also taken a hit, although I couldn't quantify it
objectively.

~~~
jaaames
Ok so I'm not completely insane, everything felt like molasses this week and
that's why.

------
jondubois
I wonder if the inefficiency of the Meltdown patches will incentivize cloud
providers to lower the price of large instances that have a high CPU core
count (relative to small instances with a low core count).

At the moment, the price of instances goes up linearly with CPU core count,
maybe in part because the performance overhead of virtualizing a single
32-core machine into 16 2-core machines has been minimal. But now that the
performance overhead of virtualisation is higher (due to isolating the CPU
cores being more expensive), maybe it's more efficient to lease entire CPUs
(with all of their cores) without the patch (and associated overheads).

~~~
tedunangst
If performance is better, why would they charge less instead of more? Why
apply the discount to the more desirable product?

~~~
bobwaycott
My initial guess would be to align pricing with expected performance, which
has now degraded, right? I don’t expect it to happen, but I can see customers
like myself being unhappy with paying—just for example—for an 8-core VPS whose
performance now matches what previously was 4-core performance. So, I don’t
think anyone would expect them to charge less for a higher core count than
lower core count, but adjusting prices downward to match performance wouldn’t
be upsetting.

~~~
jondubois
Yes exactly. To clarify my point; this vulnerability only affects you if
you're sharing a physical machine/CPU with other users (isolated by a
virtualization layer)... So if you choose not to share your physical
machines/CPUs with other users then you are not exposed to that vulnerability
and ideally you also shouldn't need to get these patches and their associated
performance overheads.

Virtualization already has some known minor performance overheads, but now
these patches will add even more.

Can cloud providers keep pretending that 4x 2-core virtual instances are as
performant as a single physical 8-core instance?

------
ageofwant
I find it really unhelpful that these 'threats' are being hyped with little
reference to any sort of threat model, reasonable attack vector, or
probability. I do not want the kernel on my Arch laptop patched and slowed
down to mitigate issues that do not reasonably exist in the context of a
laptop user.

By all means patch my browser that runs random js. I'm not in the habit of
downloading and running untrusted binaries - those that do have far greater
problems than Meltdown or spectre already.

Of course I want it patched on AWS, and on my bank's backend machines, but you
are now forcing me to continually pay, hour after hour, for insurance against
a threat that I'm happy not to worry about.

Do not steal my cycles to pay for 'security' I do not require.

~~~
megous
Not sure why you're downvoted. It's a perfectly reasonable trade off.

I'll be searching for ways to disable most of these mitigations too after the
dust settles. I rarely run untrusted code, and for that I can probably find a
way to run it securely depending on the perceived threat. A lot of what I do
on my main computer is syscall heavy, and I don't like a 25-50% performance
hit.

~~~
tzs
You could switch to TempleOS. :-)

Meltdown is a much smaller (but not zero [1]) security risk on TempleOS than
it is on Windows or Unix and Unix-like systems.

[1] At first one might think there is no risk, because TempleOS runs
everything at ring 0 in a single address space so anything that might be
exposed via Meltdown is already wide open. That would be true in the case of
running untrusted binaries. Where Meltdown would still affect TempleOS is in
the case of trusted binaries being exploited via techniques such as
return-oriented programming. Meltdown could make more "gadgets" available in
the binary, increasing the chances that someone could make it read something
it otherwise would not have read.

------
macrael
Is this fix the KAISER fix that is mentioned in the Meltdown paper? That
sounded like it removed the kernel from the process address space; where did
it go?

------
nodesocket
Such a blatantly non-real-world and synthetic benchmark.

------
saahtb
One of the reasons Node developers hate Windows is how stupidly slow it is to
open lots of files... This will teach them.

