No? Well, anyway, one draconian way to fix this is to install something like uMatrix or NoScript. This particular website works fine without JS.
If we couldn't vote on comments asynchronously (or, IMHO, at all), nothing of value would be lost.
In a typical aggregate workflow it’s entirely possible that it’s still a better design.
So absent a microcode update that outright fixes Meltdown, there will always be some level of slow-down for vulnerable devices. System calls now jump from user-mode code to a stub kernel in "supervisor memory". The stub kernel then does a full context switch (touching the %cr3 paging register and wiping a good portion of the TLB), and once the real kernel finishes, it does a full context switch back to the stub kernel. It's all terribly inefficient, and realistically the performance impact won't be negligible. It should also be noted that this "work-around" doesn't fix the processor; it just makes sure there's nothing juicy left in supervisor memory.
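If you want a feel for that per-call cost, here is a minimal sketch (mine, not from any of these posts) that just times a cheap syscall in a tight loop; running it on a kernel with KPTI enabled and again with it disabled (e.g. booting Linux with nopti) shows the added round-trip overhead directly.

    /* Minimal sketch: time a cheap syscall in a tight loop. The iteration
       count is arbitrary; this is an illustration, not a rigorous benchmark. */
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    int main(void) {
        const long iters = 1000000;
        struct timespec start, end;

        clock_gettime(CLOCK_MONOTONIC, &start);
        for (long i = 0; i < iters; i++) {
            /* syscall(SYS_getpid) forces a real kernel entry; a plain getpid()
               can be cached by libc and would hide the transition cost. */
            syscall(SYS_getpid);
        }
        clock_gettime(CLOCK_MONOTONIC, &end);

        double ns = (end.tv_sec - start.tv_sec) * 1e9
                  + (end.tv_nsec - start.tv_nsec);
        printf("%.1f ns per syscall round trip\n", ns / iters);
        return 0;
    }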
You may have to learn to live with this for a while. Even if it takes Intel a month to design and validate a fix for Meltdown, prototype and mass-production turnaround times mean that no customer will have a processor that isn't vulnerable to Meltdown until April-June 2019.
The performance loss comes from extra overhead on syscalls, so it could be sidestepped by allowing programs to do more work per syscall.
At its simplest that could mean adding more syscalls that perform the same operation over an arbitrarily long list of inputs (like Linux's sendmmsg), but I would like to see kernels take inspiration from modern graphics APIs that allow arbitrarily long lists of arbitrary operations to be batched and executed with a single syscall. GPUs had this stuff figured out years ago.
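To make the batching idea concrete, here is a rough sketch (my own, with a placeholder loopback address and port) of the sendmmsg flavor: four UDP datagrams go out with a single kernel entry instead of four.

    /* Sketch: amortize the syscall (and KPTI transition) cost by batching
       four UDP sends into one sendmmsg call. Address, port and payloads are
       placeholders. */
    #define _GNU_SOURCE
    #include <arpa/inet.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/uio.h>
    #include <unistd.h>

    int main(void) {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);

        struct sockaddr_in dst = {0};
        dst.sin_family = AF_INET;
        dst.sin_port = htons(9000);                    /* placeholder port */
        inet_pton(AF_INET, "127.0.0.1", &dst.sin_addr);

        const char *payloads[4] = {"one", "two", "three", "four"};
        struct iovec iov[4];
        struct mmsghdr msgs[4];
        memset(msgs, 0, sizeof(msgs));

        for (int i = 0; i < 4; i++) {
            iov[i].iov_base = (void *)payloads[i];
            iov[i].iov_len = strlen(payloads[i]);
            msgs[i].msg_hdr.msg_iov = &iov[i];
            msgs[i].msg_hdr.msg_iovlen = 1;
            msgs[i].msg_hdr.msg_name = &dst;
            msgs[i].msg_hdr.msg_namelen = sizeof(dst);
        }

        /* One syscall, four datagrams: per-call overhead is paid once. */
        int sent = sendmmsg(fd, msgs, 4, 0);
        printf("sent %d datagrams in one syscall\n", sent);

        close(fd);
        return 0;
    }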
2) SQL transactions as prior art? If they’re successful with their patent, I sure hope there’s a way for BSDs to try it on.
I also tried this:
$f = New-Object System.IO.FileStream "c:\temp\test.dat", "Create", "ReadWrite"
I'd retry with a script that actually writes the data.
From what I could observe, the applications taking the most serious hit are Electron-based ones (like VSCode, which went from smooth as silk to mildly sluggish) and Safari.
Docker on Mac has also taken a hit, although I couldn't quantify it objectively.
I suppose the same applies to desktop applications. Most use a large executable with maybe a few DLLs. Electron and Python applications that ship as source or bytecode open many small files at startup, so they will be much more affected.
And also that the raw, combined difference was 4x, not 3x?
However, it is then also assumed that this 20% performance penalty between HFS+ and APFS is going to be flat no matter which block size is used. I don't think this is a reasonable assumption.
Yes, syscalls are expensive and most likely became very expensive with the Meltdown patch. Even more so on OS X, where filesystem drivers are running in their own process.
I am not arguing against performance drops due to either the Meltdown patch or APFS, but I do not think your data supports the conclusion you are drawing in the headline.
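To make the block-size point concrete, here is a rough sketch (mine, with arbitrary sizes and file names): writing the same 64 MiB with 4 KiB blocks issues thousands of write() calls, while 1 MiB blocks issue a few dozen, so any per-syscall penalty (KPTI, filesystem overhead) will look very different depending on which block size a benchmark happens to pick.

    /* Sketch: same total bytes, two block sizes. Smaller blocks mean many
       more write() syscalls, so per-syscall overhead dominates there. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>
    #include <unistd.h>

    static double write_file(const char *path, size_t block, size_t total) {
        char *buf = calloc(1, block);
        int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
        struct timespec a, b;

        clock_gettime(CLOCK_MONOTONIC, &a);
        for (size_t done = 0; done < total; done += block)
            write(fd, buf, block);
        fsync(fd);
        clock_gettime(CLOCK_MONOTONIC, &b);

        close(fd);
        free(buf);
        return (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9;
    }

    int main(void) {
        const size_t total = 64 * 1024 * 1024;
        /* 4 KiB blocks: 16384 write() calls. 1 MiB blocks: 64 calls. */
        printf("4 KiB blocks: %.3f s\n", write_file("test_small.dat", 4096, total));
        printf("1 MiB blocks: %.3f s\n", write_file("test_big.dat", 1 << 20, total));
        return 0;
    }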
> The headline is completely bogus.
One of your statements is therefore bogus.
Or the article.
Or the referenced (linked) article.
Seriously: a rebuttal of what you wrote is easy, but amounts to rephrasing the article.
> As part of the upgrade process, the macOS High Sierra installer will automatically convert an SSD to the new APFS
At the moment, the price of instances goes up linearly with CPU core count, maybe in part because the performance overhead of virtualizing a single 32-core machine into 16 2-core machines has been minimal. But now that the performance overhead of virtualization is higher (because isolating the CPU cores is more expensive), maybe it's more efficient to lease entire CPUs (with all of their cores) without the patch (and its associated overheads).
Virtualization already has some known minor performance overheads, but these patches will add even more.
Can cloud providers keep pretending that 4x 2-core virtual instances are as performant as a single physical 8-core instance?
By all means patch my browser that runs random JS. I'm not in the habit of downloading and running untrusted binaries - those who do have far greater problems than Meltdown or Spectre already.
Of course I want it patched on AWS and on my bank's backend machines, but you are now forcing me to continually pay, hour after hour, for insurance against a threat that I'm happy not to worry about.
Do not steal my cycles to pay for 'security' I do not require.
I'll be searching for ways to disable most of these mitigations too after the dust settles. I rarely run untrusted code, and for that I can probably find a way to run it securely depending on the perceived threat. A lot of what I do on my main computer is syscall-heavy, and I don't like a 25-50% performance hit.
Meltdown is a much smaller (but not zero) security risk on TempleOS than it is on Windows or on Unix and Unix-like systems.
At first one might think there is no risk, because TempleOS runs everything at ring 0 in a single address space, so anything that might be exposed via Meltdown is already wide open. That would be true in the case of running untrusted binaries. Where Meltdown would still affect TempleOS is in the case of trusted binaries being exploited via techniques such as return-oriented programming. Meltdown could make more "gadgets" available in the binary, increasing the chances that someone could make it read something it otherwise would not have read.
I'm perfectly content with a several-year-old laptop for what I do, but I realize that the performance hit from this patch is going to force my hand on an upgrade.
Let's face facts: Intel hasn't seen a lot of real gains for real-world usage in a few years, and this change adds bloat to software where it simply wasn't there before.
Note: I'm not suggesting the issue was intentional, but this unintended consequence has an upside for someone; the question is who.