
I'm afraid the authors are overclaiming here, because they keep talking about "microarchitectural side-channels" when they really mean timing channels induced by secret-dependent, non-constant-time execution.
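
(For concreteness, the textbook example of the kind of timing channel I mean is an early-exit comparison; this sketch is mine, not from the paper:)

    #include <stddef.h>

    /* Early-exit comparison: the loop stops at the first mismatching
       byte, so the running time tells an attacker how many leading
       bytes of their guess were correct. */
    int compare_leaky(const unsigned char *a, const unsigned char *b,
                      size_t n) {
        for (size_t i = 0; i < n; i++)
            if (a[i] != b[i])
                return 0;  /* time depends on the secret */
        return 1;
    }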

Unfortunately, microarchitectural side-channels are much, much more than leaking information through the timing of victim code, and I thought this would be widely known by 2021. For one thing, Spectre variant #2 allows attacker processes to inject speculative information leaks by polluting the BTB to induce essentially any speculative behavior you want. That renders all compiler-based mitigations basically ineffective, because an attacker can just bypass them.
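
(A hedged sketch of my own, not a working exploit, just the shape of the problem:)

    /* The victim's code can be fully constant-time at the source
       level, yet the *speculative* target of this indirect call
       comes from the BTB, which an attacker can mistrain from
       another context. The transient window then executes an
       attacker-chosen gadget elsewhere in the address space before
       the misprediction is squashed. */
    typedef void (*handler_t)(int);

    void victim_dispatch(handler_t h, int request) {
        h(request);  /* architectural target: h; speculative target:
                        whatever the poisoned BTB predicts */
    }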

There are other microarchitectural leaks, like L1TF, which (on some unpatched, unmitigated CPUs/kernels) allows speculatively reading all of the L1 cache regardless of privilege level. That's to say nothing of speculative attacks on the kernel aimed at reading physical memory, which bypass any in-process mitigation you could think of. And of course, the paper doesn't address the fact that memory access patterns remain observable, so the classes of attacks that reverse-engineer address translation and defeat ASLR are still on the table. I didn't read the paper in detail, but there's always the problem of mis-classifying "secret" data and not applying mitigations everywhere.
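
(A minimal, hypothetical illustration of what an observable access pattern looks like, in the spirit of the classic AES T-table attacks; the sketch is mine:)

    /* Even if this runs in a fixed number of cycles, which cache
       line it touches depends on secret_byte; an attacker sharing
       the cache can recover it with PRIME+PROBE or FLUSH+RELOAD. */
    unsigned char sbox_lookup(const unsigned char sbox[256],
                              unsigned char secret_byte) {
        return sbox[secret_byte];
    }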

Unfortunately, the authors' misuse of terminology will cause this paper to be misunderstood. It has much, much narrower applicability than it would appear: it only applies to specific kinds of timing channels. This might be worthwhile to apply to crypto code, but honestly, that code needs to be designed much more comprehensively now, rather than just applying a compiler analysis to the problem.




I don't think they are overclaiming. From the abstract:

> we present Constantine, a compiler-based system to automatically harden programs against microarchitectural side channels [...]

> secret dependent control and data flows are completely linearized (i.e., all involved code/data accesses are always executed).
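
To make "linearized" concrete, here is a rough sketch of the transformation (my illustration, not Constantine's actual output):

    #include <stdint.h>

    /* Before: control flow depends on the secret at the source level. */
    uint32_t select_branchy(uint32_t secret_bit, uint32_t a, uint32_t b) {
        return secret_bit ? a : b;
    }

    /* After linearization: both inputs always flow through the same
       instructions, and a mask picks the result, so neither the
       executed code nor the accessed data depends on the secret. */
    uint32_t select_linear(uint32_t secret_bit, uint32_t a, uint32_t b) {
        uint32_t mask = 0u - (secret_bit & 1u);  /* 0 or all-ones */
        return (a & mask) | (b & ~mask);
    }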

Their goal is to hide microarchitectural events from a passive or active observer. As a side note, the VUSec research group helped discover microarchitectural side channels such as Rogue In-Flight Data Load [0], and I'm sure they know the difference between classic timing channels and microarchitectural side channels.

[0] https://mdsattacks.com/files/ridl.pdf


I read that too, and I am mystified that they could make such a broad claim, when clearly their technique applies only very narrowly, i.e. to a process leaking its own secrets by encoding them in its own timing behavior. It's very poorly worded.


Do you--or anyone?--know of a good place to learn the status of "what the bounds of safe actually are"? Like, "if you use Linux X.Y with this compiler setting and option set, then separate processes are safe" or "if you use Xen X.Y and are a Linux guest--maybe also at some X.Y/whatever--that is safe"... or are we simply at "if you share any CPU hardware at all, you are never safe and should give up hope"?


I think there are so many side-channels that we don't know them all. In the limit, I think the classification of side-channels has at least four axes:

1. What data can be leaked? (scope)

2. How difficult is it to construct a gadget?

3. What is the signal-to-noise ratio of the channel?

4. What is the bandwidth of the channel?

The original 3 Spectre variants were basically "whole process or whole of memory, easy, tens of dB, and many kilobytes a second".
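
Back-of-envelope on the last two axes (numbers mine, purely illustrative): modeling the side channel as a noisy channel, Shannon gives C = B * log2(1 + SNR). A probe loop sampling at ~10 kHz with 20 dB of SNR (i.e. SNR = 100) yields

    C = 10e3 * log2(101) ~= 10e3 * 6.66 ~= 67 kbit/s ~= 8 KB/s

so "tens of dB" and "many kilobytes a second" hang together.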

If you're looking for binary safety w.r.t. side channels, I think modern hardware cannot actually guarantee it.


I can't tell if you are saying "without changing compiler flags, you always lose, even across all possible configurations of even a hypervisor" or "without having control over some aspect of the attacker (as in, they can't just give you a binary)". I feel like it can't be the former, or you would have just said that instead of offering a mental framework; but that means the answer lies in spelling out the latter criteria. Essentially, the question is "what are the conservative bounds of current-safe?" (which might shrink if new vulnerabilities are found, or grow if people discover some fascinating mitigation), not "what is a subset of things that are absolute-unsafe"... the latter I can find and even sometimes understand, but the former is what is actually useful for people building systems, so I keep hoping to find a guide somewhere.



