Hacker News

FWIW, the stack vulnerabilities here aren't just a C problem. Most languages, including every language relying on LLVM and GCC until the most recent versions, failed to perform stack probing.

I hesitate to call stack probing "hardening". IMO it's better understood as a failure by compilers to emit proper code in the first place, and it's been a glaringly obvious deficiency for years if not decades.




These are alloca overflows. In the first one, the code tries to put the entire (attacker-controlled) command line on its stack. Compilers can't detect or prevent that, though with ABI support they may be able to detect it before failure.


alloca isn't part of C, it's a compiler intrinsic with analogs in other languages. In LLVM it even has its own instruction. C99 and Fortran also provide dynamically-sized array constructs which historically used alloca internally, and which in any event have similar issues.[1] An alloca overflow is from a language perspective little different than a recursive function blowing the stack.

The proper job of the compiler is to make sure that the generated code doesn't blow the stack in a way that overwrites random memory, whether because of alloca, a large stack-allocated object (of static or dynamic size), or recursion. It's true that blowing the stack in C is undefined whereas in a language like Rust it's supposed to terminate the program.[2] But that's beside the point because Rust didn't actually implement stack probing either and therefore was just as susceptible to these vulnerabilities.

The only sane, acceptable behavior for the compiler is to generate stack probes for any stack allocations that may exceed the page size; not only for alloca, but even for regular, non-array objects which happen to be large. Both programmers and compilers for complex languages like C++, Rust, and Swift aggressively attempt to stack allocate as much as possible, which makes the issue even more acute for those languages. As others have hinted, both alloca and dynamic arrays have been frowned upon in C for a long time (C99 added dynamic arrays principally for the Fortran crowd, who typically consume trusted data, anyhow). The fact that most of the stack smash and stack clash exploits you see are for C is a consequence of most popular software libraries being written in C, and researchers being most familiar with developing PoCs for C-based codebases.[3]

[1] Crucially, C99 dynamic arrays have block-scoped lifetimes whereas alloca allocations have function-scope lifetimes. Meaning calling alloca in a loop is doubly crazy. Early C99 implementations that simply reused the pre-existing alloca intrinsic were buggy. C99 compound literals were also buggy for several years for somewhat similar reasons.

[2] In truth it's de facto undefined in most languages, because most language designers and compiler authors have historically been content to ignore the issue.

[3] To be clear, I'm not saying that C only seems error prone because it's popular. I'm only speaking to the particular issue of stack allocation overflow.


In sane languages, being unable to stack allocate the required memory triggers an exception; it does not corrupt memory.

You can start with Ada83 as an example of this feature.


Sure. And the function of stack probes is to 1) ensure the stack is properly extended, if possible, or 2) interrupt the program with SIGSEGV (or SIGBUS), if not possible.

Without evidence to the contrary, I would assume that a GCC- or LLVM-based Ada implementation would also be susceptible to stack overflow in the same way--pathological allocation patterns that silently bypass the system's "this could never happen in the real world" assumptions. And just like with C, the fault would lie with the compiler, not the language.

Again, I realize the behavior in C is technically undefined, but it's undefined precisely for the reason of permitting the implementation to do the most sane thing for the environment, such as sharing mechanism and semantics with sister languages like Ada or Rust.


Undefined behavior is not for permitting an implementation to do the most sane thing for the environment, that is implementation defined behavior.


I shouldn't have been so quick to agree that stack overflow is undefined behavior. (Not sure who I was agreeing with, anyhow.) Behavior on stack overflow isn't undefined, rather it's simply not considered by the standard at all.


It's all good. :)


Such an implementation wouldn't be standards-compliant, as it wouldn't respect the language's required semantics.

Not every language community is so full of UB love as the C and C++ ones, especially those where safety trumps performance in language design.

Naturally if the code segment is writeable and one uses Assembly rewriting as attack vector, then anything goes.


Yes, it would be non-compliant, which leads back to my original point: this particular stack overflow issue isn't about language specification, it's about implementation correctness. Unless GNAT adds stack probes at the front-end or actively subverts even basic optimizations further down the GCC pipeline, AFAIU it's entirely possible for GNAT to compile simple, vanilla Ada code in a way that oversteps the guard page. And this could be so even if GCC didn't aggressively optimize; in fact, it could happen purely as an accident of how objects are ordered on the stack in the absence of optimization, even if all the compiler did was order them as declared.

Indeed, I presume stack probing took so long partly because, short of memset'ing the entire stack frame on entry, ensuring contiguous initialization is non-trivial. But no matter how difficult, I'd bet it's less difficult than proving the generated code is safe without explicit stack probing.[1]

[1] I'm reminded of the infamously brilliant design of Soft Updates for FFS, where the order of operations was meticulously rearranged in the file system implementation and formally proven to result in a stream of atomically consistent disk writes without having to change the on-disk layout. Modifying softdep filesystem code is notoriously tricky. By contrast, a journal is both easier to write and hack on.

EDIT: Perhaps you meant that triggering SIGSEGV would be non-compliant? Stack probes don't necessarily need to touch a guard page. AFAIU on Windows you can just query the TCB for the stack size, but it's substantially faster and in some respects easier to simply trap SIGSEGV (pretty sure Java does this), and the runtime is still free to rethrow SIGSEGV. If you mean stack overflow in Ada is supposed to throw a language-level exception, that's a rather trivial detail that can be accomplished equally well whether probe failures occur inline or asynchronously. In any event, I think my larger point about how to frame the issue and where culpability and responsibility reside still stands.


GNAT is one implementation, among at least 7 existing Ada implementations.

AdaCore, Green Hills, PTC (owns the former IBM and Aonix compiler divisions), DDC-I, RR Software, OC Systems.

If the implementation is not able to validate stack size correctness on function entry and throw a stack allocation exception on failure, then it is a compiler bug.

In Ada this is a required runtime check, unless explicitly disabled.

The only way the stack layout would be corrupted, in a bug-free Ada compiler, is to explicitly disable such a check and make use of unchecked pointers in unsafe code.


Please point to the line of the spec requiring stack probing.


The program in this blog post https://ldpreload.com/blog/stack-smashes-you corrupts memory despite being well-defined C. If you think that program should output what it actually does, I think you need to point to the line of the spec that allows it to behave that way.

Stack probing isn't the only option. Other options include fixed maximum sized stacks, architectures with separate stack and heap address spaces, etc.


Hmm… Stack overflow is certainly meant to be undefined behavior, because even if modern systems really ought to be able to catch it, there are many existing systems and implementations that do not or cannot – and the C standard is intended to be maximally compatible with old implementations, to the point of allowing compilers to only treat the first 63 characters of an identifier as significant, or set a limit of 4095 characters per source line, to name two of the more egregious entries under “environmental limits”.

However, you seem to be right that the program you linked is technically well-defined C, because the C11 spec doesn’t explicitly address stack usage. Not only does it not set a minimum requirement for the limits of local variable usage or function recursion, as far as I can tell, it doesn’t even acknowledge that such limits could exist! But if the program is well-defined, it ought to be able to execute to completion. Aborting the process, even cleanly, is no more acceptable than corrupting memory. Thus, a compliant implementation would have to have an infinite amount of memory. Since that’s a bit unreasonable to ask… it’s probably better to treat stack overflow as implicitly UB. The allowable level of stack usage could then be treated as implementation-defined.

Which is not to say that compilers shouldn’t try to handle stack overflow sanely. Stack probing was long overdue, and I’d love to see better support in mainstream compilers for static max-stack-usage analysis, among other things. It’s just that the C standard is probably not the right place to mandate such things, considering how conservative and compatibility-oriented it tends to be.


Which spec? Stack probing isn't the law, it's just a good idea.



