I agree, the idea of fiddling with hopeful compiler command-line switches or sprinkling oddball intrinsics like pixie dust all over your code is a fool's errand.
The last word in this article is an architectural recommendation/option that seems the right one to me:
> Removing sensitive information from memory
> Another technique that can be used to mitigate speculative execution side channel vulnerabilities is to remove sensitive information from memory. Software developers can look for opportunities to refactor their application such that sensitive information is not accessible during speculative execution. This can be accomplished by refactoring the design of an application to isolate sensitive information into separate processes. For example, a web browser application can attempt to isolate the data associated with each web origin into separate processes, thus preventing one process from being able to access cross-origin data through speculative execution.
So, one should sandbox sensitive data into separate processes and let the operating system do its job of isolating them. As new hardware attacks become known and understood, OS vendors will eventually patch their systems to enforce and strengthen the boundary contract that is supposed to exist between processes.
I understand that the low-level recommendations in this article are probably salient to systems-level developers (who comprise a large percentage of C++ devs), so maybe I shouldn't be so irked to see the geek stuff listed front and center.
"Chandler, who leads the C++ and LLVM teams at Google and is one of the most popular speakers at CppCon, will tackle the new class of vulnerabilities in modern CPUs with his talk Spectre: Secrets, Side-Channels, Sandboxes, and Security"
C and C++ are systems-programming languages that get used for kernels, drivers, etc., so these guidelines absolutely do fall within the responsibility of the C/C++ programmer.
In the embedded world we have a saying that software always fixes hardware. This is generally true as many hardware issues result in software workarounds.
A problem from spectre could show up literally anywhere, in any package in an embedded distro, kernel or userspace. Good luck.
This is not like a "insert a delay between these two accesses to the PHY and the issue goes away" kind of thing.
Something in the compiler would be appropriate here, rather than the wild goose chase of tracking this down in countless lines of code and guessing at solutions.
How do you even test for this, in all its possible incarnations and scenarios? Never fix something if you don't have a test plan.
The Microsoft compiler can do some level of automated mitigation. But fundamentally it is a static analysis that cannot cover all cases. You do need help from the compiler to de-speculate loads appropriately (GCC and Clang will feature these intrinsics). But only the programmer knows the true security model the program exists under.
Spectre V1, quite fundamentally, means that branch conditions are not necessarily a legitimate security boundary anymore. That's really all there is to it. Either you have to accept this and fix your code, or not fix it and say the threat exists outside your model. Even leveraging newer hardware features (e.g. perhaps segregating cache regions like Intel CAT) will likely require software integration if they wish to retain speculative capabilities, and this seems likely.
We also cannot test for all possible side channel attacks in highly error-prone and detailed cryptographic algorithms, and new ones are found. But we manage through many mechanisms to mitigate the things we know about (through various static and dynamic methods, and by enhancing our security and programming models). All of these still require coordination on the programmer's part.
It's not necessarily our fault this is all happening. It's unfortunately our responsibility to fix and mitigate it. And, unfortunately -- it's also another thing on top of the 200 other things a C/C++ programmer/systems programmer writing security-relevant code will have to understand.
You missed the point. The post I replied to implied that C/C++ people shouldn't have to worry about this and should not take it into account.
My point was that C/C++ is used in privileged locations (i.e. kernel & drivers) and (although spectre itself is a user-mode exploit) as such it is perfectly within its domain to be concerned about such things.
Since it is impractical for hardware to be changed, the only pragmatic solution is to either a) live with it b) have software do its best to reduce the likelihood of you being bitten. Practically, that means s/w mods.
Test-wise, I can imagine a new class of static-analysis tools that may be able to indicate worrisome areas for known hardware exploits. None that I've heard of so far tackle these types of issues.
So we can give some responsibility to the low level people, which is all well and good - but then what? We’ve already started seeing some of these types of attacks against JavaScript engines too, so it’s a good guess that other languages are similarly vulnerable, as this isn’t a software bug.
C++ programmers already have to worry about cache timing and sizes to make secure code (both data and code), have to worry about how integer overflow works (hardware dependent for signed), have to worry about data sizes, have to worry about integer shifting and how it varies across hardware, have to worry about relative speeds between multiplication, division, subtraction, and addition on your hardware to avoid timing attacks, and on and on. This is just another case where C/C++ makes it very hard to write secure code.
Lots of what I listed are hardware exploits - cache timing leaks information, integer shifting and overflow behavior are hardware dependent, stack issues and exploits are hardware dependent, and to make secure C/C++ code on a specific target takes understanding of all these issues. This is only the tip of the iceberg.
There's lots more, depending on platform. It's nearly impossible to make C/C++ code robust across platforms to all of these. Chip errata (Intel, AMD, etc.) list thousands more hardware bugs, many exploitable to some degree, and every platform has them. Hypervisor, secure instructions failing, lazy fp restore, Intel Management Engine, biased (or backdoored) RNG generation, and on and on all have security issues.
Yes, Spectre is a pretty big one, but so are many of the others, and the only reason many people think Spectre is the only one that is important is that they are unaware of how prevalent hardware holes are.
I suspect integer overflow alone has caused more severe security issues than these speculative execution attacks will get close to doing. But all these exploits exist because of hardware bugs or designed behavior that C/C++ programmers are unaware of or ignore.
The point is that it's trivial in the language to write incorrect code that behaves differently on different hardware. To write robust code you need to know how the underlying hardware behaves, how it can vary across platforms, and then write very carefully to ensure the code operates properly on each of them.
All these issues are consequences of the underlying hardware, including hardware errata and bugs.
>This is a different class of problem from that of correct code being turned into a security problem by the hardware.
Where would you put cache timing attacks then? It's "correct" code and the hardware performs as intended, yet the error is a result of the code + hardware interacting in a manner the programmer did not understand.
All of these are related: programmer + language + hardware interact to make code that is exploitable. C/C++ in the mix makes this harder to do correctly.
>however I don't see any of the possible replacements achieve a similar position in my lifetime
Agreed. I've been writing C/C++ code for decades, and still do, but routinely work in other languages. For many problems the best solution is still C/C++, but it makes securing such code significantly harder.