Commentary by RedHat: https://www.redhat.com/en/blog/understanding-mds-vulnerabili... (https://news.ycombinator.com/item?id=19912108)
Commentary by Ubuntu: https://blog.ubuntu.com/2019/05/14/ubuntu-updates-to-mitigat...
Details of which steppings of which processors are affected by which CVEs: https://software.intel.com/security-software-guidance/insigh...
Chrome Browser response here: https://www.chromium.org/Home/chromium-security/mds
Canonical says they have those for 14.04/16.04/18.04. But possibly more interesting is the fact that this disclosure has been so well synchronized. How do the relevant players decide what the threshold is for informing other tech companies? How does everyone know what policies the constituent companies use to prevent early disclosure, or unintended disclosure to 'somewhat-less-trusted' employees? Is this all coordinated by US-CERT?
All of it is tightly controlled under an embargo. Who they choose to involve is entirely their decision, and is likely based on previous experience with those parties and their likelihood of leaking. Intel doesn't want these kinds of things to leak before official communication is done, or it's pretty much guaranteed to impact their stock price.
This time around has gone much smoother than the previous ones, though L1TF was pretty good too. L1TF was a little rough with the patching side of things because the patches were finalised a little late.
The various distributions and companies knew that the embargo was due to end at 10am pacific, and were probably (like us) refreshing the security advisories page on Intel's site waiting to pull the trigger on all the relevant processes, like publishing blog pages etc.
And that's an improvement - some 15 years ago, with similar computational loads, most of my tests ran 10-20% faster with HT off (using 2 cores / 2 threads) than with HT on (using 2 cores / 4 threads) - there just wasn't enough cache to support that many threads.
If your workload is already well parallelized, then, yes 20% is quite significant. However, working to parallelize properly over 8 rather than 4 has its own costs.
The thing that bothers me most is that 800% CPU and 500% CPU on this processor are roughly equivalent, at about 5x100% CPU of real throughput. It makes everything very hard to reason about when planning capacity.
If I had a nickel for every time I had to explain why "You are at 50% CPU now, but you can't actually run twice as many processes on this machine and get the same runtime", I'd be able to buy a large Frappuccino or two at Starbucks.
Perhaps I'm uninformed though - is there a tool like htop which would give me an idea of how close I am to maxing out a CPU?
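To make the capacity-planning trap concrete, here is a toy model (the numbers are illustrative assumptions, not measurements of any real chip): the first thread on each physical core runs at full speed, and a sibling hyper-thread adds only a fraction of a core's worth of throughput.

```python
# Toy model (illustrative only) of throughput on a machine with 2-way SMT.
# Assumption: a second hardware thread on an already-busy core adds only
# a fraction of a core's throughput (smt_gain, guessed here at 0.25).

def effective_throughput(busy_threads: int, physical_cores: int,
                         smt_gain: float = 0.25) -> float:
    """Approximate throughput in 'core-equivalents' for busy_threads
    runnable threads on physical_cores cores with 2-way SMT."""
    on_cores = min(busy_threads, physical_cores)      # first thread per core: full speed
    on_siblings = max(0, min(busy_threads, 2 * physical_cores) - physical_cores)
    return on_cores + smt_gain * on_siblings

# 4 cores / 8 threads: "50% CPU" in top (4 busy logical CPUs) is really
# ~100% of physical capacity; doubling the load adds only ~25% throughput.
print(effective_throughput(4, 4))   # 4.0 core-equivalents
print(effective_throughput(8, 4))   # 5.0 -- not 8.0
```

On real hardware, `perf stat -e cycles,instructions` on a running workload gives a rough sense of saturation: if instructions-per-cycle drops as you add load, you are likely past physical-core capacity even though htop still shows idle logical CPUs.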
As far as silicon/power goes it is nice, but IIRC (I am not involved in purchasing anymore) it used to cost over 50% more in USD for those 20% in performance, back when non-HT parts were common.
It used to be my job. Does "because people fall for deceptive marketing, waste money, and then waste my time trying to salvage their reputation" sound better?
It should be up to the VM-as-a-service and browser vendors to flush the cache properly.
“Microarchitectural Data Sampling (MDS) is a group of vulnerabilities that allow an attacker to potentially read sensitive data.”
That is way more serious than stealing cycles.
Still it's fine with no JS and no shared processor time, right?
Dang: "Hyper-Threading technology, as used in FreeBSD and other operating systems that are run on Intel Pentium and other processors, allows local users to use a malicious thread to create covert channels, monitor the execution of other threads, and obtain sensitive information such as cryptographic keys, via a timing attack on memory cache misses."
Also, found elsewhere:
"According to Linus Torvalds and others on linux-kernel this is a theoretical attack, paranoid people should disable hyper threading"
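As a rough illustration of the covert-channel idea in the quote above, here is a toy software simulation. No real hardware state or timing is involved: the "cache", the class names, and the latency constants are all made up for the sketch, which only shows the flush/touch/probe logic in the abstract.

```python
# Toy simulation of a cache-based covert channel. The 'cache' is just a
# Python set standing in for cache lines; latencies are invented numbers.
# A sender leaks one bit per line by touching (or not touching) it; the
# receiver infers each bit from the simulated access latency.

CACHE_HIT_NS, CACHE_MISS_NS = 40, 300   # made-up latencies

class ToyCache:
    def __init__(self):
        self.lines = set()
    def flush(self, line):
        self.lines.discard(line)
    def access(self, line):
        hit = line in self.lines
        self.lines.add(line)            # any access brings the line in
        return CACHE_HIT_NS if hit else CACHE_MISS_NS

def send(cache, bits):
    for i, b in enumerate(bits):
        if b:                           # touching line i encodes a 1
            cache.access(i)

def receive(cache, n):
    # Latency below the miss threshold means the sender touched the line.
    return [cache.access(i) < CACHE_MISS_NS for i in range(n)]

cache = ToyCache()
for i in range(8):
    cache.flush(i)                      # "flush" phase
secret = [True, False, True, True, False, False, True, False]
send(cache, secret)                     # sender/victim runs
assert receive(cache, 8) == secret      # "probe" phase recovers the bits
```

The real attacks replace the set-membership check with actual memory accesses timed via the CPU's cycle counter, which is why sharing a core (or even a cache) with an untrusted thread matters.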
Sure you can. You can do math while another HT is waiting for memory. Sometimes you can even multiplex use of multiple ALUs or one HT can do integer and another can do floating point.
It's actually under high multithreaded load that HT shines, especially if that load is heterogeneous or memory-latency bound.
Hyper-threading tends to benefit the performance of applications that have not been optimized, and therefore presumably are also not particularly performance sensitive in any case.
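A loose analogy for why latency-bound loads benefit: threads that spend most of their time stalled overlap almost perfectly. The sketch below uses `time.sleep` as a stand-in for memory stalls, since Python's GIL would mask any CPU-bound SMT effect; only the stall-dominated case is modeled.

```python
# Illustrative analogy: tasks that are mostly stalled (here sleeping,
# standing in for waiting on memory) overlap when run concurrently,
# which is the regime where SMT pays off.
import time
from concurrent.futures import ThreadPoolExecutor

def stalled_task(stall_s=0.05):
    time.sleep(stall_s)                 # stand-in for a memory stall

n = 8
t0 = time.perf_counter()
for _ in range(n):
    stalled_task()                      # one "hardware thread": stalls serialize
serial = time.perf_counter() - t0

t0 = time.perf_counter()
with ThreadPoolExecutor(max_workers=n) as ex:
    list(ex.map(lambda _: stalled_task(), range(n)))
parallel = time.perf_counter() - t0     # stalls overlap, total time collapses

print(f"serial {serial:.2f}s vs overlapped {parallel:.2f}s")
assert parallel < serial / 2
```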
It seems that broadly the same principles have been discovered independently by tons of teams. Expecting that well-financed actors have not explored this field, and/or not found any similar results by this point, is completely insane.
Meaning, given the high level of technicality required, it's even doubtful that the embargo protected anybody; it may be that no attacker exists (and I postulate none ever will) who was simply waiting for third-party disclosure before writing their own exploits in this class. On the other hand, typical security providers monitoring threats in the field might remain unaware of the existence of such vulnerabilities for a long time.
Now here arguably the first counter measures are similar to those for L1TF, so hopefully sensitive operators would already have disabled HT. However, it is not very cool to not make them aware of this additional (and slightly different) risk during such a ridiculously long period.
Also: does Intel have competent people working on their shit anymore? They know the fundamental principle: speculative execution on architecturally out-of-reach data, followed by a fault and a subsequent extraction, via covert channels, of modified micro-architectural state that was never rolled back. The broad micro-architecture is widely known, so do they really expect that third-party security researchers won't find all the places where they were sloppy enough to speculatively execute code on completely "garbage" data? Or were they themselves unable to do a proper comprehensive review, despite having access to the full detailed design (and despite a dedicated team having been created for that)? Either way, this is not reassuring.
It's particularly weird in this case to suggest that the embargo didn't help anyone, since (1) nobody appears to have leaked these flaws and (2) the cloud providers all seem to have fixes queued up.
Intel claims to have discovered some of these flaws internally, and this is a bug class we've known about (for realsies) for a little bit over a year now, in a class of products for which development cycles are themselves denominated in multiple years, so I'd cut them a bit of slack.
In an ideal world, you would disclose everything and let everyone know so they can take measures against it, but in reality there may be less damage in letting the vulnerability stay undisclosed for a few more months while everyone plans patches and releases the fixes as it gets disclosed.
I do agree that almost a whole year is a very long time, though.
Anything on the CPU level that needs to be done in microcode is incredibly complex, and hard to test.
Interesting point: "MDS is addressed in hardware starting with select 8th and 9th Generation Intel® Core™ processors, as well as the 2nd Generation Intel® Xeon® Scalable processor family." Looks like my 8700K isn't on the list though.
edit: This is mentioned in the paper as well, on page 8
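On Linux, kernels carrying the MDS patches report the mitigation state in sysfs at `/sys/devices/system/cpu/vulnerabilities/mds` (a real kernel interface); a quick way to check whether your CPU is one of the hardware-fixed parts. The parsing helper and its field names below are our own invention, not part of any API.

```python
# Read the kernel's MDS status line, e.g.
#   "Mitigation: Clear CPU buffers; SMT vulnerable"  or  "Not affected".
# parse_mds_status() and its dict keys are ad-hoc helpers for this sketch.
from pathlib import Path

MDS_SYSFS = Path("/sys/devices/system/cpu/vulnerabilities/mds")

def parse_mds_status(text: str) -> dict:
    """Split a status line on ';' into mitigation state and SMT note."""
    parts = [p.strip() for p in text.strip().split(";")]
    return {
        "mitigated": parts[0].startswith("Mitigation"),
        "not_affected": parts[0] == "Not affected",
        "smt_note": parts[1] if len(parts) > 1 else None,
    }

if MDS_SYSFS.exists():
    print(parse_mds_status(MDS_SYSFS.read_text()))
else:
    print("kernel predates the MDS patches; no status reported")

# The hardware-fixed case mentioned in Intel's advisory would report:
print(parse_mds_status("Not affected"))
```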
Is there evidence that so few people are working on security at Intel?
Sorry for the chaos but this was a weird edge case. The marketing maybe went overboard this time?
I like how Intel prominently thanks their own employees for finding the bugs and later merely acknowledges the existence of the independent reporters, with zero thanks.
Or they're just being awkwardly disingenuous here, that's also a possibility.
Presuming the bytecode interpreter would be "slow enough" and "jittery enough" and "indirect enough" to hamper any attempts at exploiting subtle timing+memory layout bugs like that?
IIRC, Konqueror (of KDE) had a reasonably fast bytecode JS engine. I wish the browser were still under active development; it was my daily driver for many years.
That said, it would make things harder in practice since you’re introducing an extra indirection level and just making everything slower.
As for interpreters in modern browsers, I’d be surprised if there’s no way to entirely disable the JIT somehow... since most JIT implementations I have seen have an interpreter fallback for debugging and easier portability to new CPU architectures.
For spectre simply having attacker directed control flow was sufficient - so logically almost any scripting language could be exploited.
Same goes for most of the TLB attacks.
Others required native code because they needed to use specific instructions (that aren’t going to be emitted intentionally by any compiler - jit or otherwise).
Does anyone know what this would have been built with?
That's gnarly if true.
But maybe instead of having more cores, we should expose the different execution units within a CPU core to the architectural level? That however brings back memories of Itanium, and the general fact that compilers just can't do static scheduling well enough.
No pictures of your kids that they might not want spilled into a searchable database and used for machine learning to sell them things later in life?
No private or symmetric keys which might be used to impersonate you or eavesdrop on you later?
No in-progress documents which you aren't ready to publish?
No conversations with political allies that you might not want the state to peruse?
No intimate conversations with sexual partners?
If that's true, then I think you have a very different attack surface than most people. I think most people are willing to take a small performance hit not to open up access to much of the data that goes across their CPU, which is not an exaggeration for the combination of attacks which have been published against Intel CPUs over the past 3 years.
At the end of the day the only secure computer is one that's turned off and locked up in a supply closet.
It makes me chuckle to think that my not-so-computer-literate friend, whom I gave a Chromebook, is protected from anyone snooping in on YouTube and Hotmail running on this toy machine (designed for 9-year-olds). There really is nothing to hide there. Meanwhile, people doing important work on proper computers are properly vulnerable to this new Hyper-Threading theoretical attack.
I will be interested to find out if there is a drop-off in performance on ChromeOS, e.g. YouTube stuttering while the WhatsApp Web tab updates itself with a new message. If nobody complains, then why did we need Hyper-Threading in the first place?
You can run Android apps and Linux programs on it.
No. Hyper-Threading was introduced in Feb 2002. The original single-core Athlon 64 was Sept 2003. The X2 was 2005.
Also Netburst was not that bad. It was a dead-end, yes, but on some markets it could compete with what AMD had.
Plus implementing SMT is not necessarily extremely easy compared to SMP, especially when you evolve designs.
And anyway, Intel shipped HT way before AMD shipped the Athlon 64 x2...