Is this a real concern, beyond 'experts panel' esoteric discussion? Do folks really put a number into an int, that is sometimes going to need to be exactly TYPE_MAX but no larger?
I've gone a lifetime programming, and this kind of stuff never, ever matters one iota.
Yes, people really do care about overflow. Because it gets used in security checks, and if they don't understand the behavior then their security checks don't do what they expected.
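A minimal sketch of the kind of security check meant here (the function name and shape are illustrative, not from any particular codebase): a size check before allocation, written with unsigned arithmetic so the test is well-defined rather than relying on overflow behavior.

```c
#include <stdlib.h>

/* Hypothetical example: guard an allocation against size wraparound.
 * size_t is unsigned, so the division-based test below is fully
 * defined by the standard -- no overflow ever occurs in the check. */
void *alloc_array(size_t count, size_t elem_size)
{
    /* If count * elem_size would wrap, refuse the allocation. */
    if (elem_size != 0 && count > (size_t)-1 / elem_size)
        return NULL;
    return malloc(count * elem_size);
}
```

A check like this survives optimization because it never depends on what happens *after* an overflow; it proves the multiply cannot wrap before performing it.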
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=30475 shows someone going hyperbolic over the issue. The technical arguments favor the GCC maintainers. However I prefer the position of the person going hyperbolic.
That example wasn't 'overflow', it was 'off by one', wasn't it? That seems uninteresting, outside of the security issue you mention, where somebody might take advantage of it.
That example absolutely was overflow. The bug is, "assert(int+100 > int) optimized away".
GCC has the behavior that overflowing a signed integer gives you a negative one. But an if statement that TESTS for that is optimized away!
The reason is that signed overflow is undefined behavior, and therefore the compiler is within its rights to do anything it wants. So it overflows in whatever way is fastest, and optimizes code on the assumption that overflow can't happen.
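To make the pattern concrete, here is a sketch of both versions (function names are illustrative): the UB-reliant test from the bug report, which GCC may delete outright, and a portable rewrite that compares against INT_MAX before adding, so no overflow ever happens.

```c
#include <limits.h>

/* UB version -- the shape from the bug report:
 *     if (x + 100 > x) { ... }
 * If x + 100 overflows, behavior is undefined, so the compiler may
 * assume the condition is always true and remove the branch.
 *
 * Defined version: check the headroom first; the addition is never
 * performed unless it is known to fit. */
int can_add_100(int x)
{
    return x <= INT_MAX - 100;
}
```

Usage: `can_add_100(5)` is true, `can_add_100(INT_MAX)` is false, and no optimization level can change that, because every operation in the check is well-defined.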
The fact that almost no programmers have a mental model of the language that reconciles these two facts is an excellent reason to say that very few programmers should write in C. Because the compiler really is out to get you.
The very few times I've ever put in a check like that, I always do something like i < INT_MAX - 5 just to be sure, because I'm never confident that I intuitively understand off-by-one errors.
Same here. But I instead run a loop over a range around INT_MAX (or wherever the issue is) and print the result, so I know I'm doing what I think I'm doing. Exhaustive testing is quick, with a computer!
This isn't a good idea either: if you're dealing with undefined behavior, the way the compiler translates your code can change from version to version, so you could end up with code that works with the current version of GCC but doesn't work on the next. Personally I don't agree with the way GCC and other compilers deal with UB, but that would be off topic.
If a compiler processes a function like:

unsigned mul(unsigned short x, unsigned short y)
{ return x*y; }

in a way that causes calling code to behave in meaningless fashion if x would exceed INT_MAX/y [something gcc will sometimes actually do, by the way, with that exact function!], the hardware isn't going to have any say in that.
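The trap in that function is integer promotion: both unsigned short operands promote to (signed) int, so x*y is a signed multiply that can overflow. A common fix, sketched here (assuming 32-bit int/unsigned), is to force one operand to unsigned so the whole multiply is unsigned and wraps with defined behavior:

```c
/* Promotion-safe variant of the mul() above.  Casting one operand to
 * unsigned makes the multiply unsigned, which is defined to wrap
 * modulo 2^32 on a 32-bit-unsigned platform -- no UB even when the
 * mathematical product exceeds INT_MAX. */
unsigned mul_safe(unsigned short x, unsigned short y)
{
    return (unsigned)x * y;
}
```

With 16-bit shorts, mul_safe(65535, 65535) yields 4294836225, a value the original x*y could only reach by way of signed overflow.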