There are a lot of arithmetic conditions for which C could generate special code. The div_t-related functions already cover the other direction. I, for one, would like a good way to obtain, via some Standard C coding pattern, a fast "carry" for multiple-precision integer arithmetic.
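The usual portable idiom is to derive the carry from an unsigned comparison after the add; whether a given compiler recognizes it and emits an actual add-with-carry instruction is another matter. A minimal sketch (the function name and the 64-bit limb size are my own choices, not anything from the Standard):

    #include <stddef.h>
    #include <stdint.h>

    /* r = a + b over n 64-bit limbs; unsigned wrap-around is well defined,
     * so the "result < operand" test recovers the carry bit portably. */
    void add_multiword(uint64_t *r, const uint64_t *a, const uint64_t *b, size_t n)
    {
        uint64_t carry = 0;
        for (size_t i = 0; i < n; i++) {
            uint64_t s = a[i] + carry;
            uint64_t c1 = (s < carry);       /* carry out of the first add */
            r[i] = s + b[i];
            carry = c1 + (r[i] < s);         /* carry out of the second add */
        }
    }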
In several places in support functions, I have coded things unusually to avoid wrap-around and the like. I bet you could devise something similar for (unsigned) multiplication.
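Something along those lines does exist for unsigned multiplication, since unsigned wrap-around is defined: you can either pre-check against the limit or widen the operands. A rough sketch (the names and widths here are just for illustration):

    #include <limits.h>
    #include <stdbool.h>
    #include <stdint.h>

    /* Pre-check: true if a * b would wrap around; no wrap is performed. */
    bool mul_would_wrap(unsigned a, unsigned b)
    {
        return b != 0 && a > UINT_MAX / b;
    }

    /* Or widen so the full product is representable, then test it. */
    uint32_t mul32_checked(uint32_t a, uint32_t b, bool *wrapped)
    {
        uint64_t wide = (uint64_t)a * b;
        *wrapped = wide > UINT32_MAX;
        return (uint32_t)wide;
    }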
A horrifying case was multiplication in an x86 emulator. The opcode handler needed to multiply a pair of unsigned 16-bit values, then return a 64-bit result.
The uint16_t values got promoted to int for the multiplication, so the signed multiply overflowed (undefined behavior) whenever the product exceeded INT_MAX. (If I remember right, the result was assigned to a uint64_t as well, making the intent clear.) The compiler then assumed that the 32-bit intermediate couldn't possibly have the sign bit set, so it wouldn't matter whether the promotion to a 64-bit value used sign extension or zero extension. Depending on the optimization level, the compiler would do one or the other.
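A hypothetical reconstruction of that pattern (not the emulator's actual code):

    #include <stdint.h>

    uint64_t mulu16(uint16_t a, uint16_t b)
    {
        /* a and b are promoted to (signed) int before the multiply, so for
         * products above INT_MAX (e.g. 0xFFFF * 0xFFFF) the signed multiply
         * overflows: undefined behavior.  The widening to 64 bits may then
         * be done with sign extension or zero extension, depending on what
         * the optimizer decides to assume. */
        return a * b;
    }

    /* Forcing an unsigned multiply avoids the promotion trap. */
    uint64_t mulu16_fixed(uint16_t a, uint16_t b)
    {
        return (uint64_t)a * b;
    }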
This is truly awful behavior. It should not be permitted.
I can't really blame gcc for that one, since the most straightforward way of using signed integer arithmetic would yield a negative value whenever the result is bigger than INT_MAX, and it would be very weird for a program to expect and rely upon that behavior.
On the other hand, even the function "unsigned mul_mod_65536(unsigned short x, unsigned short y) { return (x * y) & 0xFFFF; }", which the authors of the Standard would have expected commonplace implementations to process in a consistent fashion for all possible values of "x" and "y" (the Rationale describes their expectations), will sometimes cause gcc to jump the rails if the arithmetical value of the product exceeds INT_MAX, despite the fact that the sign bit of the computation is ignored. If, for example, the product would exceed INT_MAX on the second iteration of a loop that should run a variable number of iterations, gcc will replace the loop with code that only handles the first iteration.
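Here is an illustrative loop of the kind described; the exact shape is made up, but the constants are chosen so the signed product first exceeds INT_MAX on the second iteration (assuming 32-bit int and an optimizing gcc):

    unsigned mul_mod_65536(unsigned short x, unsigned short y)
    {
        return (x * y) & 0xFFFF;    /* x and y still promote to signed int */
    }

    /* With y fixed at 0xFFFF, the product 0x8000 * 0xFFFF fits in int, but
     * 0x8001 * 0xFFFF does not.  An optimizer that assumes the overflow
     * cannot happen may conclude that n can never exceed 0x8001 and emit
     * code that only handles a single pass through the loop. */
    unsigned sum(unsigned short n)
    {
        unsigned total = 0;
        for (unsigned short x = 0x8000; x < n; x++)
            total += mul_mod_65536(x, 0xFFFF);
        return total;
    }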
See post above. There is no good way for compilers to handle that case, but gcc gets "creative" even in cases where the authors of C89 made their intentions clear.
Numeric overflows in things like buffer-size calculations can lead to vulnerabilities.
Signed overflow is UB, and thanks to integer promotion, signedness creeps into unexpected places.
Due to the UB rules, it's not trivial to check whether an overflow happened. A naive check can make things even worse by "proving" to the optimizer that the overflow can't occur, at which point the check itself gets deleted (see the sketch below).
And all of that is to read one bit that CPUs have readily available.
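For reference, here is the difference between the naive check mentioned above and versions with defined behavior; the allocation wrapper at the end uses __builtin_mul_overflow, a GCC/Clang extension (not Standard C) that typically compiles straight down to that hardware flag:

    #include <limits.h>
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdlib.h>

    /* BROKEN: tests for signed overflow after the fact.  Since signed
     * overflow is UB, the optimizer may "prove" that a + b < a is
     * impossible for positive b and delete the test entirely. */
    bool add_overflows_naive(int a, int b)
    {
        return b > 0 && a + b < a;
    }

    /* Defined behavior: compare against the limits before adding. */
    bool add_overflows(int a, int b)
    {
        return (b > 0 && a > INT_MAX - b) || (b < 0 && a < INT_MIN - b);
    }

    /* A buffer-size calculation of the kind mentioned above. */
    void *alloc_array(size_t count, size_t elem_size)
    {
        size_t bytes;
        if (__builtin_mul_overflow(count, elem_size, &bytes))
            return NULL;             /* size would have wrapped around */
        return malloc(bytes);
    }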