1. overflow is a "program error"
2. If `debug_assertions` are enabled, then overflow must panic
3. If overflow does not panic, then it must two's complement wrap
This leaves the door open to always requiring a panic someday in the future, if performance gets there.
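As a small sketch of how the three rules play out: plain `+` on an overflowing value panics in a debug build and wraps in a release build, while the explicit stdlib methods pin down one behavior in both modes.

```rust
fn main() {
    let x: u8 = 255;

    // In a debug build, `x + 1` would panic; in a release build it would
    // wrap to 0. The explicit methods below behave identically in both modes:
    assert_eq!(x.checked_add(1), None);          // overflow reported as None
    assert_eq!(x.wrapping_add(1), 0);            // two's complement wrap
    assert_eq!(x.overflowing_add(1), (0, true)); // wrapped value plus a flag
    assert_eq!(x.saturating_add(1), 255);        // clamp at the maximum
}
```

Code that *intends* wrapping arithmetic can say so with these methods, which also keeps it correct if a future Rust ever turns panics on everywhere.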
Can you expand on what that means to someone who hasn't used Rust yet? Does it mean a third thing distinct from (2) and (3)? Or is it that (2) and (3) are the things that might happen as a result of (1)?
This is similar. "program error" means "You're not supposed to do this. But the compiler will not assume that you've not done this; it has your back. If you do this your program will exhibit defined behavior. Perhaps undesired behavior (panicking, or segfaulting), but the behavior will be defined."
Mostly, it means "this is wrong but it is not undefined behavior".
> Or is it that (2) and (3) are the things that might happen as a result of (1)?
It is very risky to have different behavior on such things in test and production.
Maybe Rust++ will fix this, someday :p
(Or, once people have the tolerance for the performance degradation and we can turn it on in rustc, exactly why we specified it the way we did.)
(Or, you can tweak it to include this behavior in release mode; always an option)
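For reference, the tweak being alluded to is a one-line Cargo profile setting; with it, release builds also panic on overflow:

```toml
# Cargo.toml — opt release builds into overflow panics
[profile.release]
overflow-checks = true
```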
Few things are harder to debug than things that pass the tests but fail in production, and anything that inserts behavior like that should be avoided like the proverbial plague.
If you're already of a mindset to have a Rust++, then you are missing the point. Rust quite possibly has a window of opportunity to displace C, but for that to work at a level where it succeeds rather than as an 'also-ran', you will need to religiously avoid repeating the past.
Rust is not a religious language, regardless of what some people may think. It's even in the name, which evokes something practical, well used, and a bit worn.
> If you're already of a mindset to have a Rust++
Language design is about trade-offs. Do I think Rust is generally an excellent language? Yes. Does that mean that I believe we have solved programming languages, that there will never be a language better than Rust? No. Someday, Rust will be the old incumbent, and a new language will overtake it. That's how progress works.
In the end, as I said, this was a very tough call. We decided to be hardline about one thing, and one thing only: memory safety. Does Rust care about program correctness? Absolutely! Does it care about it as much as memory safety? It does not. There are several PLT features that could help improve program correctness that are not in Rust. They're not in it because it's a balance. Including them would harm several of our other objectives for the language.
This RFC was one of the most discussed at its time. 160 comments! https://github.com/rust-lang/rfcs/pull/560
We would have loved to say that it's always on, but that's just life. Nothing is ever perfect. Rust certainly is not.
1: note that I said "hardline" and not "religious" here even; even Rust's most sacred principle, memory safety, has a keyword built into the language that lets you subvert it!
The reason C has the bad reputation it does is because it makes performance-over-correctness trade-offs that we have come to realize are not just far from ideal, but fundamentally wrong.
And now Rust, the supposed replacement for C, is going to make different trade-offs, some of which are rooted in exactly the same performance-over-correctness decisions that gave C its bad name.
I completely get why that RFC had as much input as it did; it's akin to the Python 'whitespace' decision: it's a fundamental thing, and getting it wrong will turn off a lot of people from what you are building.
On another note, integer overflow has been the cause of the same kind of issues that unsafe use of memory is associated with:
That makes it a problem in the same class, and frankly I'm quite surprised that Rust would take performance over safety in this matter; in my opinion, good slow code is always better than faster but possibly incorrect code.
Another reason integer overflow can turn into a vulnerability is because it's Undefined Behavior, and when encountering Undefined Behavior the compiler can do anything, including eliding bounds checks. Rust (and C with -fno-strict-overflow) prevents that by making integer overflow have a defined behavior.
That bug, like nearly all other security bugs relating to integer overflow, relies on the lack of bounds checking in C. In a language with bounds checks, that bug would not have been dangerous.
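A sketch of that point in safe Rust: even if a length computation wraps, the bounds check turns the bad index into a `None` (or a panic with `[]`) rather than an out-of-bounds read.

```rust
fn main() {
    let buf = vec![0u8; 10];

    // An 8-bit computation that wrapped: 200 + 100 = 300 wraps to 44.
    let wrapped: u8 = 200u8.wrapping_add(100);
    assert_eq!(wrapped, 44);

    // The wrapped value is wrong, but the bounds check catches it:
    // `get` returns None instead of reading outside the buffer,
    // and `buf[wrapped as usize]` would panic rather than misbehave.
    assert_eq!(buf.get(wrapped as usize), None);
}
```

The overflow still produces a wrong value, so it can be a correctness bug; the bounds check is what stops it from escalating into a memory safety bug.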
This is what I mean by tradeoffs: if Rust had significantly worse integer performance, it would also turn off a lot of people from what we are building.
> On another note, integer overflow has been the cause of the same kind of issues that unsafe use of memory is associated with:
From a quick read of this CVE, this requires memory unsafety too.
If you could manage to produce a situation where overflow causes a memory safety issue using only safe code, then we'd switch.
An integer that has wrapped gets passed into a piece of unsafe Rust code that was otherwise bullet proof, exposing a vulnerability where otherwise the program would have abended much earlier when the overflow happened.
The very best spot to trap an error is where it is first initiated; any cycles after that point are being run in what is essentially an undefined state, which will sooner or later - hopefully sooner, but sometimes much later - result in incorrect behavior, a security issue, or, in the most benign cases, a crash. To willfully postpone the discovery of the error introduces the risk that the error will never be caught at all: the program will continue to run and will produce bogus output, spill your state secrets, or worse.
First make it work correctly, then make it fast. If you're going to worry about speed before you have it working you are falling headlong into the premature optimization trap, a trap that C programmers the world over unfortunately have extensive experience with and that I thought - perhaps mistakenly so - the Rust crowd was trying to address.
Btw, Swift seems to get this right, I wonder what their secret sauce is.
In that case, that piece of unsafe code would have a bug, which would be a bug regardless of whether overflow happened. The contract of unsafe code is that it must not expose undefined behavior.
For example, vector indexing is implemented with unsafe code, but the unsafe code performs bounds checks, so it doesn't matter whether an overflowed integer was passed in as the index.
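A minimal sketch of that contract (the `first_byte` helper is hypothetical, not the actual stdlib implementation): the safe wrapper validates the index before the unchecked read, so a wrapped integer can produce a wrong answer or a `None`, but not memory unsafety.

```rust
/// Safe wrapper around an unchecked read. The unsafe block's correctness
/// depends only on the check directly above it, not on how the caller
/// computed `index` — wrapped or not.
fn first_byte(data: &[u8], index: usize) -> Option<u8> {
    if index < data.len() {
        // SAFETY: we just verified `index < data.len()`, so the unchecked
        // access cannot touch memory outside `data`.
        Some(unsafe { *data.get_unchecked(index) })
    } else {
        None
    }
}

fn main() {
    let data = [10u8, 20, 30];
    let wrapped: u8 = 200u8.wrapping_add(100); // wraps to 44
    assert_eq!(first_byte(&data, wrapped as usize), None); // caught, not UB
    assert_eq!(first_byte(&data, 1), Some(20));
}
```

If the wrapper skipped the check, the bug would be in the wrapper's unsafe block, which is exactly the point being made here.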
> Btw, Swift seems to get this right, I wonder what their secret sauce is.
Their "secret sauce" is not having the same performance goals (which is not a criticism of Swift).
Yes. That still requires unsafe code. All bets are off there. You should be validating everything with regards to unsafe. There are tons of ways unsafe can go wrong; this scenario is a drop in the bucket. The bug is fundamentally in that unsafe code, not in the overflowed integer, as unsafe code is not supposed to expose memory unsafety; you could have passed a zero or a -128 or whatever manually, and it would still have caused this.
> The very best spot to trap an error is where it is first initiated,
I agree completely!
> I thought - perhaps mistakenly so - the Rust crowd was trying to address.
If you believed that Rust was about program correctness above all else, then yes, you were mistaken. As I said above, our priorities are memory safety above all else. Correctness is certainly up there, but when the rubber hits the road, hard choices have to be made.
Btw, and on the same note, I always felt that it should be possible to generate a fault on an unexpected carry, so I see this as much a CPU issue as a programming language issue.
Ideally you would run your tests in both debug and release modes, though. :)