
Intel Performance Hit 5x Harder Than AMD After Spectre, Meltdown Patches - mepian
https://www.extremetech.com/computing/291649-intel-performance-amd-spectre-meltdown-mds-patches
======
listenandlearn
Things just keep getting better and better for AMD... it’s like once you have
a little bit of luck things just keep snowballing from there.

They are on track to have an amazing year, with CPUs that may take a very
significant chunk of cloud compute away from Intel, and maybe even the lead
on the performance side. It's just fascinating that a company that almost
died just a few years ago is now a big contender on multiple fronts.

~~~
tasty_freeze
It wasn't just luck. It was Jim Keller.
[https://en.wikipedia.org/wiki/Jim_Keller_%28engineer%29](https://en.wikipedia.org/wiki/Jim_Keller_%28engineer%29)

~~~
SOLAR_FIELDS
Hilariously, the one takeaway I have from that article is that no one wants to
lead the autopilot arm of Tesla.

~~~
Traster
It's certainly not a great sign that, less than a month before Musk was
parading around proclaiming they'd got self-driving solved, the head of the
project jumped ship.

~~~
auiya
Also - [https://www.cnbc.com/2019/05/17/tesla-shares-fall-on-report-autopilot-system-was-engaged-during-crash.html](https://www.cnbc.com/2019/05/17/tesla-shares-fall-on-report-autopilot-system-was-engaged-during-crash.html)

------
InTheArena
At some point we need to start pointing the blame directly at Intel here. It’s
becoming more and more obvious that this isn’t a problem of modern CPU
architecture failing to anticipate a security attack vector, but rather that
Intel took shortcuts with security infrastructure on the chip in order to
improve IPC.

------
wtallis
Original source: [https://www.phoronix.com/scan.php?page=article&item=intel-mds-xeon&num=1](https://www.phoronix.com/scan.php?page=article&item=intel-mds-xeon&num=1)

~~~
aristophenes
Not exactly the right link; that one is for the server chips. This one is for
the consumer CPUs: [https://www.phoronix.com/scan.php?page=article&item=mds-zombieload-mit&num=1](https://www.phoronix.com/scan.php?page=article&item=mds-zombieload-mit&num=1)

------
alkonaut
As far as I understand, the solution to this in sandboxes such as the JS
world is simply to deny anyone timers with a resolution that could reveal
cache misses. How much software really relies on timers with this resolution?
What would it mean if CPU manufacturers simply gave up and said "to mitigate
side channels, you can't have a clock so accurate that it lets you measure
whether X has happened, because knowing that is equivalent to reading any
memory"?

Or, instead of detecting various things and flushing out sensitive data on
some context switch, what if the CPU just added noise to the timers instead?
I'm guessing this is a complete no-go, but I'm wondering why.
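
To be concrete, I mean something like this toy sketch (the 100 µs grain is an
arbitrary pick of mine):

    /* Coarsen the timestamp to a fixed grain and add jitter below
     * that grain, so single cache hits vs. misses (tens of ns apart)
     * disappear into the noise. */
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>
    
    #define GRAIN_NS 100000u  /* 100 microseconds */
    
    static uint64_t fuzzy_now_ns(void) {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        uint64_t now = (uint64_t)ts.tv_sec * 1000000000u + (uint64_t)ts.tv_nsec;
        uint64_t bucket = now - now % GRAIN_NS;        /* coarsen */
        uint64_t jitter = (uint64_t)rand() % GRAIN_NS; /* add noise */
        return bucket + jitter;
    }
    
    int main(void) {
        srand((unsigned)time(NULL));
        printf("fuzzy timestamps: %llu, %llu\n",
               (unsigned long long)fuzzy_now_ns(),
               (unsigned long long)fuzzy_now_ns());
        return 0;
    }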

~~~
ss248
If you really want to, you can "manufacture" a high-resolution timer pretty
easily with thread spinning.
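
A minimal sketch of the idea (C and pthreads here, but the same trick works in
a browser with a web worker incrementing a SharedArrayBuffer):

    /* A spinning thread increments a shared counter; reads of that
     * counter then serve as timestamps. Resolution depends on core
     * speed, not on any clock API the sandbox may have coarsened. */
    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdint.h>
    #include <stdio.h>
    
    static atomic_uint_fast64_t ticks;
    
    static void *spin(void *arg) {
        (void)arg;
        for (;;)
            atomic_fetch_add_explicit(&ticks, 1, memory_order_relaxed);
        return NULL;
    }
    
    int main(void) {
        pthread_t t;
        pthread_create(&t, NULL, spin, NULL);
    
        /* wait until the counter thread is actually running */
        while (atomic_load_explicit(&ticks, memory_order_relaxed) == 0)
            ;
    
        /* "timestamp" an operation by sampling the counter around it */
        uint_fast64_t start = atomic_load_explicit(&ticks, memory_order_relaxed);
        for (volatile int i = 0; i < 1000; i++)  /* stand-in for work to time */
            ;
        uint_fast64_t end = atomic_load_explicit(&ticks, memory_order_relaxed);
    
        printf("elapsed: %llu ticks\n", (unsigned long long)(end - start));
        return 0;
    }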

~~~
alkonaut
I suppose then "adding timing noise" here would also require making sure
instructions don't have fixed and dependable execution times, because then you
can just manufacture a clock by incrementing a number and knowing how many
clock cycles the increment is. So an increment cannot be a known number of
cycles. It does sound messy.

~~~
amelius
But then you'd have to increase execution times, and here we are ...

~~~
alkonaut
That's true of course. So basically adding timing noise is equivalent to
adding artificial slowdowns. The only upside, I suppose, is that it might
solve _all_ timing side-channel attacks in one go. So it's not 3% for one
mitigation and 4% for the next and so on; it's a one-time cost to disable
timing as an attack vector.

~~~
throwaway2048
Adding randomness doesn't solve the issue, it just slows it down somewhat.
Fast operations are still going to be faster on average, etc.
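
A toy simulation of why (made-up numbers):

    /* Jitter far larger than the signal still averages out. Two
     * operations 200 ns apart stay distinguishable after enough
     * samples even under ~100 us of uniform noise. */
    #include <stdio.h>
    #include <stdlib.h>
    
    #define SAMPLES 100000
    
    static double noisy(double true_ns) {
        return true_ns + (double)(rand() % 100000);  /* huge uniform jitter */
    }
    
    int main(void) {
        double fast = 0, slow = 0;
        for (int i = 0; i < SAMPLES; i++) {
            fast += noisy(100.0);  /* e.g. a cache hit  */
            slow += noisy(300.0);  /* e.g. a cache miss */
        }
        /* the means still differ by ~200 ns */
        printf("mean fast: %.0f ns, mean slow: %.0f ns\n",
               fast / SAMPLES, slow / SAMPLES);
        return 0;
    }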

------
rwmj
Does anyone have the full kernel command line to turn all of the mitigations
off? I don't need this on my private development server which runs all code
either written by me or under my control.

~~~
pabs3
[https://make-linux-fast-again.com/](https://make-linux-fast-again.com/)
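
At the time of writing, the flags that page lists amount to roughly:

    noibrs noibpb nopti nospectre_v2 nospectre_v1 l1tf=off nospec_store_bypass_disable no_stf_barrier mds=off mitigations=off

On recent kernels `mitigations=off` alone covers the lot; after a reboot you
can check the result with `grep . /sys/devices/system/cpu/vulnerabilities/*`.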

~~~
afroboy
That's why the average Joe doesn't like Linux. How am I going to use this?

~~~
kurtisc
Maybe you'll prefer the Windows version:

    reg add "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management" /v FeatureSettingsOverride /t REG_DWORD /d 3 /f
    reg add "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management" /v FeatureSettingsOverrideMask /t REG_DWORD /d 3 /f

~~~
SamuelAdams
Source for this is here [1].

[1]: [https://support.microsoft.com/en-us/help/4072698/windows-server-speculative-execution-side-channel-vulnerabilities-prot](https://support.microsoft.com/en-us/help/4072698/windows-server-speculative-execution-side-channel-vulnerabilities-prot)

------
Skunkleton
What are we supposed to draw from those graphs? The AMD part is already much
slower than the Intel part.

~~~
bearjaws
I mean, a 15% reduction in effective computation is not nothing just because
AMD's CPUs aren't faster. The 8700K has two fewer cores and costs ~$100 more,
so it is no longer a very compelling purchase if you plan to turn mitigations
on.

Many people bought the 8700K for a certain level of performance that they now
have to compromise on: either 1) higher performance with a small chance of
exploits, or 2) lower performance and fewer exploits.

This dilemma did not exist until this year, and now everyone except Intel is
paying for it.

~~~
Skunkleton
The point is that it isn't comparable. It is likely that one of the reasons
the measured Intel parts are so much faster is insecure optimizations. It
would have been a much more interesting comparison to start from two SKUs
with similar performance.

~~~
AnthonyMouse
Isn't that what they've done? The 6800K had been slightly faster than the
2700X; now it's significantly slower. The 8700K had been significantly faster
than the 2700X; now it's slightly slower (and yet more expensive). The same
for the 7980XE and the 2990WX.

------
joaomacp
If I'm not mistaken, these are kernel patches. Intel is adding hardware (they
call it "in-silicon") mitigations for Meltdown (in Coffee Lake [1]) and
Spectre (in Ice Lake [2]).

This should help bring performance back up, right? I'm actually waiting for
Ice Lake before buying a new laptop.

[1] [https://www.tomshardware.com/news/intel-9th-generation-coffee-lake-refresh,37898.html](https://www.tomshardware.com/news/intel-9th-generation-coffee-lake-refresh,37898.html)

[2] [https://en.wikipedia.org/wiki/Ice_Lake_(microarchitecture)](https://en.wikipedia.org/wiki/Ice_Lake_\(microarchitecture\))

------
hinkley
What’s the cost of interprocessor communication versus clearing these caches
so aggressively?

I wonder, as core counts continue to increase, whether another solution will
present itself in the form of segregating traffic per processor.

~~~
viraptor
Those who care already do this. NUMA is taking advantage of memory locality by
itself, and processes can be pinned to cores manually as well. But even
reading memory of your own process can be an issue (it is for browser tabs)
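
For example, pinning can be done with `taskset -c 0 ./prog` from the shell,
or programmatically (Linux-specific sketch):

    /* Pin the current process to core 0 (Linux-specific). */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    
    int main(void) {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(0, &set);  /* allow core 0 only */
        if (sched_setaffinity(0, sizeof set, &set) != 0) {
            perror("sched_setaffinity");
            return 1;
        }
        printf("pinned to core 0\n");
        return 0;
    }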

~~~
hinkley
The browser is providing OS services, so if those services are “broken” they
have to be fixed twice. Wouldn’t one process per origin, letting the OS do
its job, reduce the surface area here?

------
JoeAltmaier
Admitting all the issues with this 'attack' as given, I can't help thinking
this is only an issue because CPUs use the old 'secrecy as security' model.
E.g. if we added to each thread an encryption key used to interpret every
memory access (just an XOR with a rolling mask), then access to somebody
else's memory becomes access to an encrypted document with a key you don't
know. No longer a problem?
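
Roughly this idea, as a toy (a plain XOR mask; real memory encryption such as
AMD's SME is of course much stronger):

    /* Each thread would hold its own key; a raw read of another
     * thread's memory then yields only masked bits. */
    #include <stdint.h>
    #include <stdio.h>
    
    static uint64_t thread_key = 0x9e3779b97f4a7c15u;  /* per-thread secret */
    
    static void masked_store(uint64_t *slot, uint64_t value) {
        *slot = value ^ thread_key;   /* what actually lands in memory */
    }
    
    static uint64_t masked_load(const uint64_t *slot) {
        return *slot ^ thread_key;    /* unmask on the way back in */
    }
    
    int main(void) {
        uint64_t cell;
        masked_store(&cell, 42);
        printf("raw bits in memory: %016llx\n", (unsigned long long)cell);
        printf("unmasked value:     %llu\n",
               (unsigned long long)masked_load(&cell));
        return 0;
    }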

~~~
zrm
These are really timing attacks. It's not a matter of directly reading the
data, it's a matter of being able to deduce what's there based on how long it
takes to do it.

~~~
JoeAltmaier
Yes; but that would be irrelevant if they didn't depend on secrecy for
security, right?

~~~
zrm
Ah, you're proposing a solution for Meltdown rather than Spectre. The
difference is that with Meltdown there is speculative access to memory that
isn't even mapped to the original process.

Memory encryption might work as a way to solve that, but we already know how
to solve that -- do what AMD and others have been doing the whole time and
don't continue speculative execution through a page fault.

~~~
JoeAltmaier
...until somebody finds another vector to reveal the value of Kernel memory.

The mantra is, secrecy is not security. The User/Kernel dichotomy flagrantly
ignores this principle.

------
lostmsu
The title is clickbait, because 5x does not matter if AMD's difference is
under 1%. Also, this is not the original source.

~~~
Dylan16807
Clickbait because it could be misleading in a counterfactual?

15% vs. 3% is pretty meaningful. 15-16% is comparable to the gap between 4th
and 9th generation Intel processors at the same frequency. And in that time
turbo has gotten better but base clocks have dropped.

~~~
makomk
Also, I think fully mitigating the security issues on Intel processors
requires applying all the patches _and_ disabling hyperthreading, at which
point you're seeing more like a 20+% impact on performance. In particular,
from what I can tell, the microcode and software patches for RIDL and
ZombieLoad only flush the state on context switches and cannot protect
against the hyperthreading-based variants of the attacks.

~~~
dijit
Loss of hyper-threading is only "20%+" because you put a plus there.
Disabling hyper-threading in synthetic workloads usually costs on the order
of 40% of performance. Depending on how well used the hyper-threads are, it
can actually be higher.

It's unlikely to be lower than 40% unless you're bottlenecked before the CPU
(memory speed or heavy CPU cache invalidation).

~~~
maffydub
The article says...

"Disabling [hyperthreading] increases the overall performance impact to 20
percent (for the 7980XE), 24.8 percent (8700K) and 20.5 percent (6800K)."

...so I think a performance impact of "20%+" (i.e. 20% or more) is fair - for
the 8700K it's 24.8%.

