The last word in this article is an architectural recommendation/option that seems the right one to me:
> Removing sensitive information from memory
> Another technique that can be used to mitigate speculative execution side channel vulnerabilities is to remove sensitive information from memory. Software developers can look for opportunities to refactor their application such that sensitive information is not accessible during speculative execution. This can be accomplished by refactoring the design of an application to isolate sensitive information into separate processes. For example, a web browser application can attempt to isolate the data associated with each web origin into separate processes, thus preventing one process from being able to access cross-origin data through speculative execution.
So, one should sandbox sensitive data into separate processes and allow the operating system to do its job of isolating processes. As new hardware attacks become known and understood, OS vendors will eventually patch their systems to enforce and strengthen the boundary contract that is supposed to exist between processes.
I understand that the low-level recommendations in this article are probably salient to systems-level developers (who comprise a large percentage of C++ devs), so maybe I shouldn't be so irked to see the geek stuff listed front and center.
"Chandler, who leads the C++ and LLVM teams at Google and is one of the most popular speakers at CppCon, will tackle the new class of vulnerabilities in modern CPUs with his talk Spectre: Secrets, Side-Channels, Sandboxes, and Security"
In the embedded world we have a saying that software always fixes hardware. This is generally true as many hardware issues result in software workarounds.
This is not like an "insert a delay between these two accesses to the PHY and the issue goes away" kind of thing.
Something in the compiler would be appropriate here, rather than the wild goose chase of tracking this down in countless lines of code and guessing at solutions.
How do you even test for this, in all its possible incarnations and scenarios? Never fix something if you don't have a test plan.
Spectre V1, quite fundamentally, means that branch conditions are no longer necessarily a legitimate security boundary. That's really all there is to it. Either you accept this and fix your code, or you don't fix it and declare the threat outside your model. Even newer hardware features (e.g. segregating cache regions with Intel CAT) will likely require software integration if CPUs are to retain their speculative capabilities, and that seems likely.
We also cannot test for all possible side-channel attacks in highly error-prone, intricate cryptographic algorithms, and new ones keep being found. But we manage, through many mechanisms, to mitigate the things we know about (various static and dynamic methods, plus improvements to our security and programming models). All of these still require coordination on the programmer's part.
It's not necessarily our fault this is all happening. It's unfortunately our responsibility to fix and mitigate it. And, unfortunately -- it's also another thing on top of the 200 other things a C/C++ programmer/systems programmer writing security-relevant code will have to understand.
My point was that C/C++ is used in privileged locations (e.g. kernels and drivers), and as such (although Spectre itself is a user-mode exploit) it is perfectly within its domain to be concerned about such things.
Since it is impractical to change the hardware, the only pragmatic solutions are to a) live with it, or b) have software do its best to reduce the likelihood of you being bitten. Practically, that means s/w mods.
Test-wise, I can imagine a new class of static-analysis tools that may be able to indicate worrisome areas for known hardware exploits. None that I've heard of so far tackle this type of issue.
The biggest difference is that, because of the issues you listed, systems programmers are more attuned to the problem.
Lots of what I listed are hardware exploits - cache timing leaks information, integer shifting and overflow behavior are hardware dependent, stack issues and exploits are hardware dependent, and to make secure C/C++ code on a specific target takes understanding of all these issues. This is only the tip of the iceberg.
There's lots more, depending on platform. It's nearly impossible to make C/C++ code robust across platforms to all of these. Chip errata (Intel, AMD, etc.) list thousands more hardware bugs, many exploitable to some degree, and every platform has them. Hypervisor, secure instructions failing, lazy fp restore, Intel Management Engine, biased (or backdoored) RNG generation, and on and on all have security issues.
Yes, Spectre is a pretty big one, but so are many of the others, and the only reason many people think Spectre is the only one that matters is that they are unaware of how prevalent hardware holes are.
I suspect integer overflow alone has caused more severe security issues than these speculative execution attacks ever will. But all these exploits exist because of hardware bugs or designed behavior that C/C++ programmers are unaware of or ignore.
Those are hardware-dependent consequences of incorrect code either producing or failing to handle erroneous values.
This is a different class of problem from that of correct code being turned into a security problem by the hardware.
The point is that it's trivial in the language to write incorrect code that behaves differently on different hardware. To write robust code you need to know how the underlying hardware behaves and how it can vary across platforms, and then write very carefully to ensure the code operates properly on each target.
All these issues are consequences of the underlying hardware, including hardware errata and bugs.
>This is a different class of problem from that of correct code being turned into a security problem by the hardware.
Where would you put cache timing attacks, then? It's "correct" code and the hardware performs as intended; the error is a result of code + hardware interacting in a manner the programmer did not understand.
All of these are related: programmer + language + hardware interact to make code that is exploitable. C/C++ in the mix makes this harder to do correctly.
I have witnessed them both grow from also-ran attempts at systems programming to the place they hold now.
This is the kind of tech that only gets replaced by a generational change or by some nuke-level shift in architecture like quantum computing.
Agreed. I've been doing C/C++ code for decades, and still do it, but routinely work on other languages. For many problems the best solution is still C/C++, but it makes security for such code significantly more trouble.