In my experience the biggest problem is something the kernel can't solve anyway: the microcode updates. AFAIK it's impossible to get that performance back by any means; even the mitigations=off flag can't do it.
Maybe look at QubesOS (it runs many desktop processes in separate virtual machines), which will give you defence in depth and mitigate many other browser attacks as well as architectural ones. You could run your trusted stuff outside Qubes (in the dom0), where I guess it ought to run at full speed.
BTW I solve this problem by having a separate development machine for compiling and testing my own code, which never runs anything untrusted. And that other machine is an AMD Zen 2, so it's not quite so vulnerable, and also much faster for the price!
It is my understanding that these microcode updates get applied either by the operating system at boot time (re-applying the update on every boot) or by the motherboard, if the BIOS was updated. Only the latter method is "permanent" (or at least it requires a BIOS downgrade to undo, which isn't always feasible), so if you never upgraded your BIOS, I guess it should be possible to prevent the OS from applying the updates.
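If you want to try that on Linux, a rough sketch for a Debian-style GRUB setup (exact paths and steps vary by distro; `dis_ucode_ldr` is the real x86 kernel parameter that disables the kernel's early microcode loader, but note that distros may also ship updates via an initramfs package like intel-microcode, which you'd have to remove separately):

```shell
# Check the currently active microcode revision:
grep -m1 microcode /proc/cpuinfo

# In /etc/default/grub, append dis_ucode_ldr so the kernel skips early loading:
GRUB_CMDLINE_LINUX_DEFAULT="quiet dis_ucode_ldr"

# Then regenerate the config and reboot (Debian/Ubuntu):
sudo update-grub
```

Compare the revision before and after rebooting to confirm the OS-side update really stopped being applied.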
It's usually upgraded by a BIOS update or by Windows Update. Windows Update sends driver/firmware updates, security patches, or general updates if the vendor wants to support that distribution channel.
I think it's quite device- and method-specific whether the microcode can be upgraded and downgraded the same way.
If I can’t do that and I have to choose between fast/insecure and slow/secure, I’ll take my chances with fast/insecure. I’d rather give up doing anything sensitive on my machine than give up 5% perf.
Or can a properly designed attack break out of the VM and hardened kernel?
A lot has changed since then.
> Hence, it will not be possible to transmit information over a covert communication channel at a high enough bandwidth to make such attempts worthwhile.
Skimming the citations, though, I'm not 100% sure it's the same thing as Kocher (1994), which has a more direct line to Meltdown/Spectre. The idea that intervals of high-precision clocks could carry information is the same, but the KVM/370 paper in particular seems concerned with the channel being used for unmonitored communication between two malicious actors, or as a tool to learn something general about what other users of the system are doing, not exfiltration of the stored data itself across a security boundary with an oblivious user. The 1991 paper seems to sit somewhere in the middle.
The problems of timing attacks in shared resource systems were well understood well before the Spectre mess.
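The core idea, that clock intervals alone can carry data, can be shown with a toy single-process sketch (hypothetical helper names; a real attack crosses a security boundary, while this only demonstrates the encode/decode principle): a sender encodes bits as short or long busy-wait intervals, and a receiver recovers them purely from timestamps.

```python
import time

def send_bits(bits, short=0.002, long_=0.02):
    """Encode each bit as a busy-wait: 'short' seconds for 0, 'long_' for 1.
    Returns the timestamps an observer would record between events."""
    timestamps = [time.perf_counter()]
    for b in bits:
        deadline = time.perf_counter() + (long_ if b else short)
        while time.perf_counter() < deadline:
            pass  # busy-wait stands in for "do observable work"
        timestamps.append(time.perf_counter())
    return timestamps

def decode(timestamps, threshold=0.01):
    """Recover the bit string from inter-event intervals alone."""
    return [1 if (t1 - t0) > threshold else 0
            for t0, t1 in zip(timestamps, timestamps[1:])]

message = [1, 0, 1, 1, 0, 0, 1, 0]
print(decode(send_bits(message)) == message)  # timing alone carries the data
```

Coarsening or jittering the observer's clock (as browsers did post-Spectre) only lowers the channel's bandwidth; with enough samples and better statistics the bits come back.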
Speculative Dereferencing of Registers: Reviving Foreshadow
Martin Schwarzl, Thomas Schuster, Michael Schwarz, Daniel Gruss
You can just lower the signal-to-noise ratio by various means, so you gain some time until better statistical methods or clever tricks filter the signal out again.
Isolated processes being able to determine the memory or computation of another process is something else altogether, and surely could be mitigated, even if the mitigation comes at a cost.
The idea here is essentially to create a branch delay slot instruction, which can then be used to set the latency of a branch so that it doesn't need prediction to avoid stalling the pipeline. Like so:
basicblock 5 # next five instructions WILL execute, branches after that, if any
-- branch takes effect here
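For reference, this generalizes the single architectural delay slot that classic MIPS baked into its ISA, where the one instruction after a branch always executes regardless of outcome (illustrative MIPS-style assembly, not from any paper):

```asm
        beq   $t0, $t1, target   # branch is resolved here...
        addi  $s0, $s0, 1        # ...delay slot: executes whether or not taken
        sub   $s1, $s1, $s0      # fall-through path starts here
target:
        lw    $s2, 0($s1)        # taken path
```

The `basicblock 5` idea above is effectively a variable-length version of this: declaring N guaranteed-to-execute instructions instead of exactly one.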
Architectures with ISA-specified delays have significant issues, as you mention.
The measurements show that they can generate basic blocks (bb) of about 10~20 instructions (optimistic numbers, as they measure an average of 5), which allows them to hoist branches up by about 10~20 instructions. Since in this ISA the bb determines the bound of the instruction window, the instruction window of this ISA is limited to around 20~40 instructions (current bb plus next bb).
But modern superscalar processors have an instruction window of more than 100 instructions to provide high performance.
The low performance loss of their model compared to the branch-prediction model may perhaps be explained by the fact that they use an in-order CPU that makes very little use of ILP (instruction-level parallelism).
Moreover, it only addresses the problem of speculative execution, but there are other types of transient execution.
There aren't any major new exploits detailed in this paper. They introduce a slightly new gadget for exposing data, but it can be, and is, mitigated by existing techniques (e.g. retpolines). The only noteworthy aspect is that, as regards SGX, the mitigations haven't yet been generally applied. But new ways to break SGX are a dime a dozen these days.
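For context, a retpoline replaces an indirect branch with a return whose speculative path is trapped in a harmless loop. The published x86-64 construction (here for `jmp *%rax`, roughly as emitted by the GCC/LLVM implementations) looks like:

```asm
        call  .Lset_up_target     # pushes .Lcapture_spec as the return address
.Lcapture_spec:                   # return-stack speculation lands here...
        pause
        lfence
        jmp   .Lcapture_spec      # ...and spins harmlessly until the ret retires
.Lset_up_target:
        mov   %rax, (%rsp)        # overwrite the return address with the real target
        ret                       # architecturally jumps to *%rax
```

The CPU's return predictor steers speculation into the pause/lfence loop instead of an attacker-trained indirect-branch target, which is why this class of gadget is considered covered.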
Interesting and rigorous work, but there don't seem to be any real implications here. It reads more like a concise restatement of researchers' present understanding, using the benefit of hindsight and some additional footwork to fill in some small gaps.
> We demonstrate that these dereferencing effects exist even on the most recent Intel CPUs with the latest hardware mitigations, and on CPUs previously believed to be unaffected, i.e., ARM, IBM, and AMD CPUs
That's different from saying it's effective even with the mitigations required for other Spectre-related exploits. When larger problems arise and more general mitigations are applied that also happen to better mitigate previous, narrower exploits, people don't usually make much effort to review and revise old papers.
AFAIU, discovery of Meltdown slightly predates Spectre, or at least the point at which the implications began to blow up. The Meltdown and Spectre papers were published the same month Foreshadow was discovered and privately reported. It seems two researchers were involved with both Foreshadow and the earlier Spectre work, but that doesn't mean they would have or should have fully grasped the deeper relationships. And all of this happened over 2 1/2 years ago. Since then researchers' understanding of the underlying issues has improved greatly.
The tone of the paper is, I think, problematic. Just read footnote #1, which self-defensively says: "Various authors of papers exploiting the prefetching effect confirmed that the explanation put forward in this paper indeed explains the observed phenomena more accurately than their original explanations. We believe it is in the nature of empirical science that theories explaining empirical observations improve over time and root-cause attributions become more accurate."
So their presentation is "more accurate". And the writers of the 2+ year-old papers readily admit it. All of which is another way of saying these were already accepted, if not yet concretely expressed, beliefs in the research community. There's much value in putting pen to paper and running confirmatory experiments. But that doesn't make it groundbreaking.
EDIT: Regarding "We demonstrate that these dereferencing effects exist even on the most recent Intel CPUs with the latest hardware mitigations, and on CPUs previously believed to be unaffected, i.e., ARM, IBM, and AMD CPUs". If you read closely, it's clear that the context is Foreshadow, its mechanism and mitigations. They're saying that when Foreshadow was published those architectures were believed immune. But applying the principles of their "more accurate" understanding, you can in fact achieve Foreshadow (or Foreshadow-like) side channels on those architectures even with Foreshadow-specific mitigations. Again, though, only subsequent to the initial discovery of both Foreshadow and Spectre did it become known that those architectures were more susceptible to speculative-execution attacks than originally understood. Thus elsewhere in the paper it's admitted that the more general and modern Spectre mitigations also prevent these "new" Foreshadow exploits.
I'm under the impression that the side channels created by caches and speculative execution have been known publicly (but with limited reach/impact) as far back as the 90s. E.g. the 1991 paper "A Retrospective on the VAX VMM Security Kernel" mentions data security problems and the creation of side channels when processor caches are used.