
Code-Pointer Integrity - jcr
http://dslab.epfl.ch/proj/cpi/
======
vezzy-fnord
Ostensibly, "control flow hijack attacks" is a recently coined umbrella term
for the various standard memory corruption vulnerabilities in non-managed
languages, like buffer and heap overflows and uncontrolled format strings.

The main target for so-called CPI then appears to be LLVM's SafeStack:
[http://clang.llvm.org/docs/SafeStack.html](http://clang.llvm.org/docs/SafeStack.html)
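
For reference, SafeStack ships in Clang as a sanitizer-style flag (per the linked docs); a minimal invocation might look like this (assuming a source file `demo.c`):

```shell
# Compile with SafeStack: stack objects whose accesses can't be proven
# safe are moved to a separate "unsafe stack", away from return addresses.
clang -fsanitize=safe-stack -o demo demo.c
```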

~~~
nickpsecurity
At one point, most of the research focused either on the memory safety
property as a whole or on many tactics for protecting memory/pointers in
narrow ways. The first had too much of a performance hit or required different
hardware. The second wasn't good enough because, like you said, a whole
umbrella of attacks existed that achieved the same goal. Eventually, this
paper [1] coined "Control Flow Integrity" as the property of protecting
control flow rather than all memory issues. It cites lots of other work that
specifically tries to protect control flow. The general idea seems to be that
stopping code injection is the highest priority, and methods that focus just
on that might have a smaller performance hit than full memory safety. Plus, if
you focus on preserving a property in all situations, then the individual ways
to exploit a lack of that property no longer matter. Unless they're things
that destroy your whole model. ;)

At least, that's the impression I received from reading most of the papers. A
few were clever enough to use segments like the old high-assurance A1 systems
did. CPI enforces a similar property on pointers with an interesting design,
using segments in its strongest form. The resulting performance penalty is
negligible despite the large number of attacks it stops. This field is rife
with failures, so it wouldn't surprise me if more issues are found. Yet,
combined with the bug bounty, the weaker SFI model used in Chrome has worked
well enough. The stronger CPI might deliver after enough analysis and fixes.

Note: While looking that up, I found a recent CFI scheme [2] by Criswell et al
that builds on their SVA-OS scheme. It supports my assertion about why CFI
exists with this claim:

"Where comparable numbers are available, the overheads of KCoFI are far lower
than heavyweight memory-safety techniques."

[1] [http://research.microsoft.com/pubs/64250/ccs05.pdf](http://research.microsoft.com/pubs/64250/ccs05.pdf)

[2] [http://sva.cs.illinois.edu/pubs/KCoFI-Oakland-2014.pdf](http://sva.cs.illinois.edu/pubs/KCoFI-Oakland-2014.pdf)

------
nickpsecurity
This is pretty neat research that replaces prior SFI work with a mechanism
that enforces a stronger, verified security claim. I especially endorsed the
segment method, as it was less likely to have an easy bypass. I posted it on
Schneier's blog and elsewhere as an example of good research that tries to
solve a root problem (i.e. enforcing an invariant) rather than tactically
countering every manifestation of it. I'd like to see more review of and
enhancements to it by the types of people that broke prior SFI schemes.

Remember that this method can be combined with other techniques to reduce
overall risk. For instance, this combined with interface protection for input
and a separation kernel approach for small TCB would give the attackers less
to work with. Critical algorithms on data pointers might be proven with
Astree. And so on.

~~~
cvwright
Very cool work.

> I'd like to see more review of and enhancements to it by the types of people
> that broke prior, SFI schemes.

I've only glanced at the abstract so far, but it looks like they don't protect
against non-control-data attacks [1]. For example, the adversary might still
be able to whack the int that stores the userid and change it to some more-
privileged value.

[1] [https://www.usenix.org/conference/14th-usenix-security-sympo...](https://www.usenix.org/conference/14th-usenix-security-symposium/non-control-data-attacks-are-realistic-threats)

~~~
nickpsecurity
I agree. It's a major risk area. It's why most of my work and evangelism is on
stronger stuff, esp in hardware. Any of these CFI-type things are taking on
risk to try to make stuff fast enough to be adopted on legacy systems. Thanks
for the paper, as I'm sure it will come in handy in a future discussion on
this stuff.

Here's a more thorough solution with the kind of performance hit I talk about:

[http://www.cis.upenn.edu/acg/papers/pldi09_softbound.pdf](http://www.cis.upenn.edu/acg/papers/pldi09_softbound.pdf)

Better than the predecessors I read about, though. On the hardware side, the
SAFE team (crash-safe.org) have enforced several policies simultaneously on
their modified processor. It was quite a performance hit, but it was
unoptimized and carried many protections. The Burroughs B5000 (below) had a
bit for pointer protection, a bit for code vs data separation, type checks in
hardware for procedure calls, array bounds-checking in hardware, and much of
this applied by the compiler from HLL to binary. I'm sure most of that could
be implemented with simple hardware on modern processors. Finally, I've seen a
taint-checking approach that only took two bits, had under a 10% performance
hit, and was Linux-compatible IIRC. So, there are methods with high promise if
you ditch the legacy system, but people don't want to do that...

[http://www.smecc.org/The%20Architecture%20%20of%20the%20Burr...](http://www.smecc.org/The%20Architecture%20%20of%20the%20Burroughs%20B-5000.htm)

