The problem is that C compilers don't bother proving anything when it comes to taking advantage of what is, and isn't, defined.
For instance, if x is signed, a C compiler can assume that x + 1 produces a value which is greater than x, and optimize accordingly. It's up to the programmer (and, failing that, the user) to ensure that x is less than INT_MAX prior to the execution of that x + 1. In fact there is a proof involved, but it is trivial. The compiler assumes that x + 1 > x because, by logical inference, the only way this can be false is if x == INT_MAX. But in that case, the behavior of x + 1 is undefined, and so, who cares about that; it is the programmer's responsibility that this is not the case. Q.E.D.
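To make this concrete, here is a minimal sketch (hypothetical function name; the folding behavior is typical of optimizing compilers such as GCC and Clang at -O2, not guaranteed):

    /* Hypothetical example.  With default C semantics, an optimizing
       compiler is entitled to fold this whole body to "return 1":
       the only x for which x + 1 > x could be false is INT_MAX, and
       there x + 1 is undefined, so that case need not be considered. */
    int next_is_greater(int x)
    {
        return x + 1 > x;   /* commonly compiled as if it were: return 1; */
    }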
Rather than require compilers to implement complicated proofs, which essentially amounts to a whole lot of work whose only end result is to curtail optimizations, it's far simpler to have ways in the language to tweak the semantics.
If there is a safe mode in which x + 1 is well-defined, even if x is INT_MAX, then the compiler can no longer assume that x + 1 > x. That assumption is simply off the table. (Now the difficult work of proof kicks in if you still want the same optimization: the compiler can still optimize on the assumption that x + 1 > x, but that assumption can be false, yet x + 1 can be well-defined! If an assumption can be false in correct code, then you have hard work to do: you must prove the assumption true before relying on it.)
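GCC and Clang already offer an approximation of such a mode: -fwrapv, which defines signed overflow as two's-complement wrapping. Under that dialect the fold above is off the table, and a test like this one is meaningful (a sketch, assuming wrapping semantics are in effect; hypothetical function name):

    /* Under -fwrapv, x + 1 is well-defined for every x: at INT_MAX it
       wraps to INT_MIN.  The compiler therefore cannot assume the
       comparison is always true, and this function can return 1. */
    int increment_wraps(int x)
    {
        return !(x + 1 > x);   /* 1 exactly when x == INT_MAX */
    }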
This safe mode is not linked to optimization level: unless the programmer turns it off, it is in effect no matter how hard the code is optimized.
Right now we have a situation in which compiler optimization or "code generation" options are used for controlling safety. Programmers know that, de facto, these options affect the actual semantics of C code, and so they get used that way. We know that if we tell GCC "-fno-strict-aliasing", we are not simply defeating some optimizations, but are in effect requesting a C dialect in which certain cases of aliasing are well defined.
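The textbook case is type punning through incompatible pointer types. Something like the following breaks the standard's aliasing rules, yet it is routinely written with the expectation that -fno-strict-aliasing will make it behave as intended (a sketch; it also assumes float and uint32_t have the same size, which is common but not guaranteed):

    #include <stdint.h>

    /* Under strict aliasing, the compiler may assume a store through a
       uint32_t pointer cannot touch the float object, so the read of *f
       may be reordered or satisfied from a stale cached value.  Under
       the -fno-strict-aliasing dialect, programmers expect the store to
       be visible to the read that follows it. */
    float clear_float_bits(float *f)
    {
        *(uint32_t *)f = 0;
        return *f;
    }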
This is a poor situation. Why? Because it's nonportable (the next compiler you use after GCC might not have this option). But, more importantly, semantics and optimization must be decoupled from each other. We pin down the semantics we want from the language (we pin down what is well-defined), and then optimization preserves that semantics: whatever is well-defined behaves the same way regardless of how hard we optimize.
Semantics and optimization have to be independently controlled.
That is how you can eat your cake and have it too. If you think some code benefits from an optimization which assumes that x + 1 > x, you just compile that under suitable semantics. Then if you don't ensure x < INT_MAX, it's really your fault: you explicitly chose a mode in which things break if you don't ensure that, and then you didn't ensure that.
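For a sketch of the kind of code that benefits: with the fast semantics, a compiler may assume the induction variable below never wraps, and hence that the loop runs exactly n + 1 times and terminates. Your side of the bargain is to guarantee n < INT_MAX (hypothetical function; the optimization claim is typical, not guaranteed):

    /* With undefined signed overflow, the compiler may assume i + 1 > i,
       so i <= n eventually becomes false and the trip count is n + 1.
       If the caller passes n == INT_MAX, i can never exceed n, i++
       overflows, and the behavior is undefined: that is the contract
       the caller accepted by choosing these semantics. */
    void fill_inclusive(float *a, int n, float v)
    {
        for (int i = 0; i <= n; i++)
            a[i] = v;
    }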