mov r11, [cookie]   ; prologue: load the per-function cookie
xor r11, [rsp]      ; prologue: mix in the return address
; ... function body ...
xor r11, [rsp]      ; epilogue: cancel out the (unmodified) return address
cmp r11, [cookie]   ; epilogue: must match the cookie again, else trap
This makes ret instructions fairly hard to use for ROP purposes. Unlike the original design, which xor'd [rsp] directly, this new approach preserves return prediction, so it should have a smaller effect on performance. With the changes to reduce polymorphic gadgets in place, this should make ROP attacks significantly less palatable. Also, in the original design an arbitrary leak made ROP attacks feasible, since you could just place xor-encrypted return addresses on the stack. With the new design you also need repeatable register control (assuming the temp register isn't spilled), which raises the bar quite a bit.
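For concreteness, here is a minimal Python model of the check described above (the cookie and address values are made up; the real implementation works on r11 and [rsp] in the prologue/epilogue, and the xor'd value may be spilled to the frame in between):

```python
import secrets

COOKIE = secrets.randbits(64)  # per-function secret, fixed at load time

def prologue(retaddr):
    # mov r11, [cookie] ; xor r11, [rsp]
    return COOKIE ^ retaddr  # value held in r11 (possibly spilled)

def epilogue_check(saved_r11, retaddr_on_stack):
    # xor r11, [rsp] ; cmp r11, [cookie]
    return (saved_r11 ^ retaddr_on_stack) == COOKIE

r11 = prologue(0x401234)
assert epilogue_check(r11, 0x401234)        # clean return passes
assert not epilogue_check(r11, 0x41414141)  # overwritten retaddr trips the check
```

The point of the model: an attacker who overwrites the return address must also forge the spilled r11 value, which requires knowing the cookie.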
Github mirror, for easier review: https://github.com/openbsd/src/commit/e688c2b0648a80551cf735...
"Return oriented programming", which is kind of a dumb name, is the idea of harvesting gadgets from the text of a program and then using them as primitives for a new program. Gadgets are stitched together by the "return" instruction (hence the name ROP). When used by attackers this way, "ret" isn't really "returning" so much as it's being used as an arbitrary indirect jump mechanism.
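As a toy illustration of that idea (the gadget table and addresses here are entirely made up, not real machine code): a ROP "program" is just a list of gadget addresses on the stack, and each gadget's trailing ret pops the next one:

```python
# Toy model of a ROP chain: each "gadget" performs one small operation, and
# the trailing ret is modeled by popping the next gadget address off the stack.
def run_chain(gadgets, stack, state):
    while stack:
        addr = stack.pop(0)   # "ret": jump to whatever address sits on the stack
        gadgets[addr](state)  # run the borrowed code fragment
    return state

# Hypothetical gadget table, keyed by fake addresses.
gadgets = {
    0x1000: lambda s: s.__setitem__("rax", 7),           # mov rax, 7 ; ret
    0x2000: lambda s: s.__setitem__("rax", s["rax"] * 6) # imul rax, 6 ; ret
}

state = run_chain(gadgets, [0x1000, 0x2000], {"rax": 0})
```

None of the "gadgets" were written by the attacker; the attacker only supplies the sequence of addresses, which is exactly why controlling ret matters.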
RETGUARD is more than just an improved stack protector: as explained in the commit message, it protects function epilogues that sit close to return instructions.
(I started writing a more detailed reply based on the commit description, but there was too much speculation without seeing their source code).
RETGUARD, however, is per-function.
1. if they can't find a register to load the cookie into, they'll silently skip instrumentation (i'm not sure how that would happen in practice but the silent treatment when omitting a security feature is a non-starter).
2. if they can find such a register then it'll be spilled to the stack and restored in the epilogue, so a normal buffer overflow can control both the xor'd retaddr and the retaddr itself and the only thing standing in the way of exploitation is the secret cookie value - not unlike with Stackguard/SSP.
3. one would think that a per-function cookie is an improvement but... they're shared among threads (in userland) or everything (in the kernel) so infoleaks are just as catastrophic as before (it'd certainly help if someone described a proper threat model for this defense). at least the kernel side should use a per-syscall cookie to make it somewhat resemble an actual defense mechanism (and there's some more described in my presentation).
4. the int3 stuffing before retn must be someone's joke 'cos it sure as hell won't prevent abusing the retn as a gadget. it does introduce a mispredicted branch for every single function return however.
1. We don't silently skip instrumentation. If we can't find a free register then we will force the frame setup code to the front of the function so we can get one. See the diff in PEI::calculateSaveRestoreBlocks().
2. We do spill the calculated value to the stack. This is unavoidable in many cases (non-leaf functions). It would be an optimization to not do this in leaf functions, but this would also mean finding a register that is unused throughout the function. This turns out to be a small number of functions, so we didn't pursue it for the initial implementation.
3. I'm not sure what you mean by the cookies being shared. Do you just mean that they are all in the openbsd.randomdata section? They have to live somewhere. Being able to read arbitrary memory in the openbsd.randomdata section would leak them, yes, though this doesn't seem to have been a problem for the existing stack canary, which lives in the same section. I see that RAP keeps the cookie in a register, which sounds like a neat idea. I'd be curious to see how you manage to rotate the cookie arbitrarily.
4. I'm glad you like the int3 stuffing. :-) We could always make the int3 sled longer if it turns out these rets are still accessible in gadgets that terminate on the return instruction. Have you found any?
Anyway, I'm happy to see your commentary on this. You guys do some nice work! If you have other suggestions for improvement I'd be happy to hear them. You can email me at mortimer@.
2. sure but then this means that RETGUARD is not an improvement over Stackguard/SSP which is not how it's marketed...
3. shared means that entities of a class (threads in a process in userland, every single process/thread in the kernel) see the exact same cookies so leaking a cookie from one entity can allow exploitation by another. this is especially detrimental to the kernel side protection. frequent enough cookie rerandomization can help narrow this channel (RAP has a per-thread cookie in the kernel that is updated on each syscall, and there's some more to reduce infoleaks across kernel stacks, it's all in the presentation).
4. any normal path leading up to the ret is a gadget and int3 stuffing does nothing to prevent that (the underlying logic here is that if one can retarget a return to arbitrary addresses then he has already leaked enough information so bypassing the cookie check is a no-brainer too). not only that but in the bsd.mp kernel i just checked, of the 32199 ret (0xc3) bytes only 20236 are actual retn insns, the rest are inside insns. so this int3 stuffing leaves many other instances available. Red Hat tried similar gadget elimination a while ago but no one's using the gcc feature as far as i can tell.
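The raw byte count above is easy to reproduce; a quick sketch (distinguishing real retn instructions from embedded 0xc3 bytes requires an actual disassembler, so this only counts raw bytes):

```python
def count_ret_bytes(path):
    """Count raw 0xc3 (retn) bytes in a binary. Only a subset of these sit
    at instruction boundaries as real returns; the rest are embedded inside
    longer instructions (immediates, displacements, ModRM bytes) and can
    still serve as unintended gadget terminators."""
    with open(path, "rb") as f:
        return f.read().count(0xc3)
```

Running this over a kernel image and comparing against a disassembler's instruction-boundary count gives the kind of split quoted above.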
Just leaving these here...
When the BDFL of one's kernel says something like that, combined with how radioactive the community interaction seems to have been in the past, the notion that one might get sufficient support or have positive interactions with the wider community while using the grsec kernel fork is dubious at best.
Either way: I'm not saying Retguard is based on RAP --- my question was, "what's the relationship between the techniques". But if there is a relationship, OpenBSD should be explicit about it, so that we can keep track of the evolution of memory corruption countermeasures.
I do not care whether you think people should or shouldn't run grsecurity.
I too am curious what the technical basis for the patch is, but your assertion that drama is irrelevant is dangerous when applied to community projects that exist because of the goodwill of their members. grsec should be called out, when mentioned, because the encumbrance carried by its origin has demonstrably made the code they do produce less than useful.
I'm asking a research question, not a user question.
Linus has always been a total blowhard when it comes to... everything, but in particular when it comes to security. I wouldn't take his opinions too seriously on the matter.
The fact is that grsec still maintains the state of the art for memory safety mitigations.
CFI still looks more promising to me though. Protecting the CALL part. But this is better than the old gcc/clang stack cookie of course, protecting RET.
Dumb question (potentially): will this make code that calls non-inlined functions many times run a lot slower?
I found that the runtime overhead was about 2% on average, but there are many factors that contribute.
Meta-CVS has an import feature (mcvs grab) which detects renamed files. It fixes up symlinks pointing to moved files too and such.
Meta-CVS didn't catch on widely because by the time I had it stable, CVS itself was being side tracked by newly emerging projects like SVN.
Plus people were afraid of it being written in Common Lisp.
If you're still using CVS in 2018 and not Meta-CVS, you're ridiculously backwards though. :)
On top of yours, the OpenCM and Aegis programs attempted to meet some of these requirements. Most ignore them. Big blind spot in software security.
That's really where it's coming from, wasn't really trying to be snarky or anything.
OpenBSD could easily use Git if they wanted to, and they'd never have to touch MS code, or an MS Web property.
Git is used by a comparatively small captive audience; most git users are invested in GitHub. MS is well positioned to run their "embrace, extend, extinguish" play if they wanted to.
Note that there is a mirror on github at https://github.com/openbsd and to my understanding, developers who prefer git use that, but the official source tree is in CVS.
The underlying security of the operating system and user applications running in it has very different risks and benefits versus the integrity of source code commits and who gets to make them.
The latter is something they're equipped to deal with without changing tools. They've decided that the costs of making that technology change aren't worth the benefits that it provides and I mostly agree.
Have you considered that CVS may have been finished for over a decade? OpenBSD has been using it for a long time, and it clearly meets their needs, or they'd choose from one of the many other options.
In my own experience, it's nice to use finished software, step off of the upgrade treadmill, and get to the end of the learning curve.
Feature bloat, totally agree. But even for bug fixes and performance improvements, I have a hard time believing this is truly finished.
Based on the timeline of CVS, I doubt there are that many large performance issues that can be fixed without significant risk of breakage. In my experience, CVS was primarily limited by network and disk I/O, both of which are generally much improved since the time of active CVS development.
Keep in mind that the effective scope of CVS is also shrinking as many users move on to other software; that means any issues are less likely to surface.
It's not especially active, but you can see the last change sets were in the past year, so "not maintained for more than a decade" doesn't apply to what they're using.
The CVS that OpenBSD uses is still GNU CVS - https://cvsweb.openbsd.org/cgi-bin/cvsweb/src/gnu/usr.bin/cv...
I thought I remembered -current moving to opencvs some time ago, but either I mis-remembered or they moved back.
It would certainly jibe with John Gilmore's story on how the NSA worked through the standards bodies to keep IPSEC easily exploitable by making the design too difficult to implement properly:
Their behavior around Simon & Speck and how they refused to reveal details on how exploitable they could be also seems to be similar to their previous tactics.
This is why it's worrisome that Google intends to implement Speck in Android and have pushed it to the Linux kernel, too.
More details on how the NSA has been sabotaging open source projects here:
I believe OpenBSD conducted an audit of their tree when rumours of an IPSec backdoor started and didn't find anything alarming.
It appears that there is a continuous audit of source code. So, even if a malicious hole was planted, it ought to be discovered in the years of repeated auditing. Cheers to OpenBSD!