> Why not?

I believe you are proposing language-based security (langsec), which seemed very promising at first, but the current consensus is that it still has to be accompanied by other measures. One big reason is that virtually no practical language implementation is fully specified.

As an example, let's say that we only have fixed-size integer variables and simple functions with no other control constructs. Integers wrap around and division by zero yields zero, so no integer operation can trap. So it should be easy to check for infinite recursion and declare that the program would never trap otherwise, right? No! A large enough number of nested but otherwise distinct function calls would eventually overflow the stack and cause a trap, or worse. But this notion of "stack" is highly specific to the implementation, so provable safety essentially requires that you have formalized all such implementation-specific notions in advance. Possible, but extremely difficult in practice.
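In Rust terms, the trap-free integer semantics described above might look like this (a sketch; the function names are mine, and "division by zero yields zero" is modelled with `checked_div(...).unwrap_or(0)`):

```rust
// Sketch of trap-free integer semantics: overflow wraps, division
// by zero yields zero, so no integer operation can ever trap.
fn wrap_add(a: i32, b: i32) -> i32 {
    a.wrapping_add(b) // wraps around instead of trapping on overflow
}

fn safe_div(a: i32, b: i32) -> i32 {
    // checked_div returns None for b == 0 (and for i32::MIN / -1);
    // mapping both to 0 means no division can trap either
    a.checked_div(b).unwrap_or(0)
}

fn main() {
    assert_eq!(wrap_add(i32::MAX, 1), i32::MIN); // wraps, no trap
    assert_eq!(safe_div(10, 0), 0);              // zero, no trap
    println!("no integer operation trapped");
}
```

None of this helps with the stack, though: a deep enough chain of calls to these perfectly "safe" functions can still overflow it.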

The "verifier and runtime sandbox" mentioned here is one solution to get around this difficulty. Instead of understanding the full language, the verifier only understands a very reduced subset, and the compiler is expected (but not guaranteed) to emit something that passes the verifier. A complex enough verifier could guarantee that the output is safe to execute even without a sandbox, but a simple verifier combined with a runtime sandbox is much more practical.
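A toy illustration of that split (the opcode set and limits are hypothetical, not any real verifier): the verifier statically bounds what it can, and anything it can't bound is deferred to the runtime sandbox.

```rust
// Hypothetical bytecode: the verifier understands only these ops.
#[derive(Clone, Copy)]
enum Op {
    Push(i64),
    Add,         // pops two stack slots, pushes their (wrapping) sum
    Call(usize), // call into function `n` of the same module
}

const MAX_STACK: usize = 16;

// Statically bound the operand stack; reject anything else.
// (A real verifier also checks types, call targets, and so on.)
fn verify(code: &[Op]) -> Result<(), String> {
    let mut depth: usize = 0;
    for op in code {
        match op {
            Op::Push(_) => {
                depth += 1;
                if depth > MAX_STACK {
                    return Err("operand stack too deep".into());
                }
            }
            Op::Add => {
                if depth < 2 {
                    return Err("stack underflow".into());
                }
                depth -= 1;
            }
            Op::Call(_) => {
                // Call depth can't be bounded here; defer to the
                // runtime sandbox (e.g. a call-depth counter).
            }
        }
    }
    Ok(())
}

fn main() {
    use Op::*;
    assert!(verify(&[Push(1), Push(2), Add]).is_ok());
    assert!(verify(&[Add]).is_err()); // underflow rejected statically
}
```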


> As an example, let's say that we only have fixed-size integer variables and simple functions with no other control constructs. Integers wrap around and division by zero yields zero, so no integer operation can trap. So it should be easy to check for infinite recursion and declare that the program would never trap otherwise, right? No! A large enough number of nested but otherwise distinct function calls would eventually overflow the stack and cause a trap, or worse.

So? Panics or traps from stack overflows don't allow third-party code to write to arbitrary files on my filesystem. Nor does integer overflow.

Maybe there's some clever layered attack that could pull off something like that. But, fine! Right now the state is "anyone in any crate can trivially do anything to my computer". Limiting untrusted code to, at worst, panics, infinite loops, integer overflows, and stack overflows sounds like a big win to me!

If people do figure out ways to turn a stack overflow in safe Rust into RCE, well, that was already a soundness hole in the language. Let's fix it.


Ah, I should have clarified that. Yes, if stack overflow results in a trap you are mostly okay, given that the caller is eventually able to recover from the trap. But imagine that the trap doesn't happen because no guard page was configured, as in some kernel contexts. Now your program ventures into some unintended memory region, yikes!

But that was about language-based security in general, and you are correct that this particular case wouldn't matter much for Cargo. I only used this example to show that fully verifying language-based security is very hard in general. Even Coq, a well-known proof verifier with a solid type theory and implementation, has suffered from bugs that allowed `false` to be proved [1]. It's just that hard; not really feasible in practice.

[1] https://github.com/clarus/falso


Yes, fine. Again, my north star is `unsafe`. Rust doesn't require that all code be safe, but it lets us mark the unsafe regions. I think that would get us pretty far.

If you want to prevent stack overflows, the compiler can calculate the maximum stack space needed by any call tree. (Well, so long as the code in question isn't recursive - but again, that could be enforced at compile time.)

That seems like something that could be checked statically. Alternatively, the kernel could dynamically allocate exactly the right amount of stack space for its own threads.
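A minimal sketch of that static check (hypothetical frame sizes and call graph, not the actual compiler analysis): walk the call tree, reject any cycle as recursion, and sum the deepest chain of frames.

```rust
use std::collections::HashMap;

// Worst-case stack usage for `f`, given per-function frame sizes
// (bytes) and a call graph. Returns None if recursion is detected,
// since no static bound exists in that case.
fn max_stack(
    f: &str,
    frames: &HashMap<&str, usize>,
    calls: &HashMap<&str, Vec<&str>>,
    path: &mut Vec<String>, // current call chain, for cycle detection
) -> Option<usize> {
    if path.iter().any(|p| p == f) {
        return None; // recursion: reject
    }
    path.push(f.to_string());
    let mut deepest = 0;
    for callee in calls.get(f).map(|v| v.as_slice()).unwrap_or(&[]) {
        deepest = deepest.max(max_stack(callee, frames, calls, path)?);
    }
    path.pop();
    Some(*frames.get(f)? + deepest)
}

fn main() {
    let frames = HashMap::from([("main", 64), ("a", 32), ("b", 16)]);
    let calls = HashMap::from([("main", vec!["a", "b"]), ("a", vec!["b"])]);
    // Deepest chain is main -> a -> b: 64 + 32 + 16 bytes.
    assert_eq!(max_stack("main", &frames, &calls, &mut Vec::new()), Some(112));
}
```

With a bound like this in hand, the kernel could indeed allocate exactly that much stack per thread.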


I think what you’re saying is that, in fully safe code, control flow can’t have any surprises other than panics and/or signals/exceptions. I think this is true. And I would love to use a language that limited side effects like this at the language level — even ignoring security, it makes reasoning about code easier.

The issue of build-time security is somewhat separate, and it actually seems more tractable. There have been proposals floated around to compile proc macros to wasm and run them in a sandbox, and IMO Rust should absolutely move in this direction.


> And I would love to use a language that limited side effects like this at the language level — even ignoring security, it makes reasoning about code easier.

This is one of the value propositions of Roc.
