“The long and short of matters then is that based on the testing we've done thus far, it doesn't look like Coffee Lake Refresh recovers any of the performance the original Coffee Lake loses from the Meltdown and Spectre fixes. Coffee Lake was always less impacted than older architectures, but whatever performance hit it took remains in the Refresh CPU design.”
For any CPU designed with the expectation of using the old method of memory-access prediction without any protections... can we expect it to ever show a significant performance recovery?
I guess I always assumed the answer was no.
The way you avoid some of the impacted scenarios (at modest performance impact) is with additional hardware or microarchitecture changes.
Basically, the task is 'Ensure processor state, as observed by another process, never changes because of speculative execution branches.'
Which is a high bar to meet, especially if you want to simultaneously optimize your execution unit utilization.
1: https://www.anandtech.com/comments/13659/analyzing-core-i9-9... - screenshot comes from anandtech.com
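To make that requirement a bit more concrete, here is the textbook Spectre v1 (bounds-check bypass) pattern as a minimal sketch; the array names and sizes are invented for illustration and aren't from the article. The point is that a mispredicted branch can run the body speculatively with an out-of-bounds index, and even after the rollback, the cache line it touched stays warm, which is exactly the kind of "observable state" the requirement says must never change:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical victim code; names and sizes are illustrative only. */
uint8_t array1[16];
uint8_t array2[256 * 4096];
size_t  array1_size = 16;

void victim(size_t x) {
    if (x < array1_size) {
        /* On a misprediction this body runs speculatively even when x is
         * out of bounds. The architectural results are discarded, but the
         * cache line of array2 loaded here stays warm, and that footprint
         * can later be measured from another context. */
        volatile uint8_t tmp = array2[array1[x] * 4096];
        (void)tmp;
    }
}
```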
In the case of Meltdown, it seems unlikely the CPU is maintaining its own shadow page table. How would it do that?
It should be noted, though, that at the time neither sharing processors with strangers / across trust boundaries nor executing arbitrary crap in a VM was a common activity. Memory protection and such were mostly viewed as a technique to increase reliability, not to provide actual security.
Accordingly, how much they knew about the nature of it, and how well they could have predicted it, is really the question, and IMO a pretty hard one to know or judge (unless there are some memos out there).
Granted, in an age when security gets little to no consideration in so many places... I wouldn't be surprised by anything.
I'm reading that workstations, for example, might not need to worry for the most part, unless a package gets compromised or some browser exploit makes it through.
Probing the CPU for the timing of accesses to memory you don't have access to, or forcing it to leave something somewhere you do have access to: you don't need the kernel for that. That's the problem.
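As a rough idea of what "probing the CPU for timings" means in practice, here's a user-space sketch of the measurement half (a flush+reload-style probe using x86 intrinsics; the address and any threshold for "fast" are placeholders, and GCC/Clang on x86 is assumed). A fast reload means the line was already cached, i.e. something, possibly a speculative path, touched it:

```c
#include <stdint.h>
#include <x86intrin.h>   /* __rdtscp, _mm_clflush; x86 with GCC/Clang assumed */

/* Time a single access to `addr`. A low cycle count means the line was
 * already in the cache; a high count means it had to come from memory. */
static uint64_t probe_once(volatile uint8_t *addr) {
    unsigned aux;
    uint64_t start   = __rdtscp(&aux);
    (void)*addr;                       /* the timed load */
    uint64_t elapsed = __rdtscp(&aux) - start;
    _mm_clflush((const void *)addr);   /* evict the line again for the next round */
    return elapsed;
}
```

None of this needs the kernel; the whole game is inferring someone else's memory contents from which of your own lines end up cached.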
Except now it's a bit worse, since I cannot disable the patch to recover the lost performance.
Knowing the actual, demonstrated risks... why would you do this?
I'm not trying to devalue your position. I'm trying to understand your risk calculation.
Edit: Good catch, humans. The thought of running code in an unexposed, isolated, largely trusted environment didn't cross my mind; I was more focused on the environments I'm used to (where everything is connected and nothing is trusted). That said, I'd argue that a database backend to any typical webapp definitely qualifies as exposed.
The calculation is basically:
their_waste = likelihood that enough people have the mitigations enabled (not tech-savvy enough to disable them) that "bad people" will not waste time developing exploits for the tiny number of unprotected people like me (herd protecting me)
my_risk = likelihood of "bad people" actually finding me and being able to run their code
their_reward = likelihood of them actually finding something meaningful and valuable in the memory they can manage to dump
oops = (my_risk * their_reward) / their_waste
I am assuming my_risk and their_reward to be low and their_waste to be high, so oops will be acceptably low (hopefully :p)
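Plugging some made-up numbers into the formula above (purely illustrative; none of these probabilities come from anywhere) just to show the shape of the estimate:

```c
#include <stdio.h>

int main(void) {
    /* All values below are invented for illustration only. */
    double my_risk      = 0.01;  /* chance "bad people" find me and get their code running */
    double their_reward = 0.05;  /* chance the memory they dump contains anything valuable */
    double their_waste  = 0.99;  /* chance the patched herd keeps exploit authors uninterested */

    double oops = (my_risk * their_reward) / their_waste;
    printf("oops = %.4f\n", oops);   /* ~0.0005 with these guesses */
    return 0;
}
```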
Wish me luck!
The most risk comes from web browsers, as you can execute (constrained) attack code in those. That's why browser vendors were quick to disable SharedArrayBuffer and developed further mitigations that make exploiting CPU bugs like these harder.
I may reconsider when browser-based exploits become a real thing that is in widespread use.
People probing my ports and exposed services running on my machines are far less of an issue, since I don't run any service designed to run attacker-supplied (but sandboxed) code the way a browser does. If somebody managed to run code anyway (RCE), then I'd probably have bigger problems than somebody running Spectre exploit code ;)
As it stands right now, the prime targets are shared execution environments running untrusted sandboxed code, aka cloud providers needing to worry that customer A's VM doesn't dump the memory of customer B's VM running on the same hardware.
If you're running (or using) a service where thousands of businesses rely on the ability to run their code and their data on your machines without any of your other customers being able to access it, yeah, security is priority #1.
On my personal workstation, though, what are they going to get? My credit card number? That's my bank's problem. I'm not particularly worried about targeted attacks; if my competitor or customers got everything on my hard drive, little would change for any of us. Force me to restore from backup? Email password would be bad, but that's partially what 2-factor is for.
I have a tiny chance of getting a few hours of inconvenience if someone completely owns my PC. That's not worth all my work happening a little bit slower all the time.
I think it depends on the context. I felt similarly until I discovered just how many machines attackers will pivot through in real-world attacks featuring strong adversaries. Preventing these attacks on every machine is a defense-in-depth measure.
Your data is likely less important in that context than your device as a fractional resource or a pivot (which I believe is largely zerkten's point).
You can't leverage any speculation exploit without code execution, and there's nothing left to exploit on the box once you have a shell.
This whole business is massively, massively overhyped from the point of view of individual workstation users. Not every system needs to be locked down like NORAD. Doing so is a failure of basic threat modeling.
E.g.: I thought consumer-grade video cards were pretty darn insecure. I don't know where I picked up that idea, but if it's true, then the idea of having the option to run "insecurely" for certain things makes sense.
Not sure if we can trust the average user with this, but if the video-card thing is true, we already do.
For example, how many of the map editors for various games are Turing-complete? If you download a custom map from a random peer, you may be executing "sandboxed" code. Can it pull off a timing attack?
Exactly. 15 bits per hour, in an artificial environment with minimised noise, after untold amounts of preparatory work were already performed to analyse the software running on the target machine.
Here, have 15 bytes from a random process running on my machine (I just randomly attached a debugger, scrolled through memory arbitrarily, and copied them):
d1 e1 81 f9 fe ff 00 00 76 05 b9 fe ff 00 00 66 89
What are they? I don't know. Maybe you're really lucky and it's a key to something, or a password hash... but to what? The above would've taken 8 hours to read using that attack. Now you should see the level of unconcern I have about this. Someone who is being targeted would care more, but I don't believe I, or indeed the majority of users, are important enough to be in such a position.
In much the same way I'm not going to install bars over every window of my house.
I imagine Gmail's HTTPS certificate, or a Microsoft code signing key, or Linus' GPG key, or being able to impersonate some government agency or messaging server, are well worth 5 days of this.
The authors of that paper have the massive advantage of knowing exactly which software is running on the target system and its environment; something an attacker in the real world is unlikely to have, unless the attacker is already so familiar with the system that it seems far easier to exfiltrate data via some other means than to find and set up this very slow side channel. Everything has to be set up just right for this to work. Otherwise you might still manage to read something, but it would be completely useless.
(High-value private keys in companies are likely to be in HSMs anyway, in which case they're completely inaccessible to attacks like these.)
Let's say you luck out and only need to read the first 100 megabytes of memory... you're talking thousands of years.
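A quick back-of-the-envelope check of those figures, taking the 15 bits/hour rate quoted above at face value (the rate is the only input here; the rest is just arithmetic):

```c
#include <stdio.h>

int main(void) {
    double bits_per_hour = 15.0;                    /* rate quoted above */
    double small = 15.0 * 8.0;                      /* 15 bytes, in bits */
    double large = 100.0 * 1024 * 1024 * 8.0;       /* 100 MB, in bits */

    printf("15 bytes : %.0f hours\n", small / bits_per_hour);   /* 8 hours   */
    printf("100 MB   : %.0f years\n",
           large / bits_per_hour / (24.0 * 365.0));             /* ~6400 years */
    return 0;
}
```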
That's amazing, I have the same combination on my luggage!
Perhaps I'm mixing something up, but I thought that Intel removed SMT from the newer generations (or is removing it) because of this.
Parent may have simply meant TLBleed/L1TF (Foreshadow) instead of Meltdown/Spectre.
It was specifically about Spectre and Meltdown mitigations, which are unrelated to hyperthreading, so testing with or without hyperthreading is fine. Bringing up hyperthreading here is like someone bringing up how a diet high in salt is unhealthy, someone else pointing out that salt has nothing to do with the original article, and then a final comment of "Yeah, but that doesn't invalidate the wider point that we should consume less salt!"