I'm not certain that "the program should do roughly what the author expects in the face of arbitrary program incorrectness" is a thing that works in general.
What should your program do if you overflow a 32-bit signed integer? What if you overflow a 64-bit signed integer? Should it do the "expected" thing and wrap according to two's complement? How do you intend to do that efficiently in a portable manner?
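For concreteness, a minimal sketch of the status quo (wraps_to_negative is a made-up name, and the exact output depends on compiler and flags):

    #include <cstdint>
    #include <iostream>
    #include <limits>

    // Signed overflow is UB, so the optimizer may assume x + 1 never wraps.
    // GCC and Clang at -O2 commonly compile this function to `return false;`.
    bool wraps_to_negative(int32_t x) {
        return x + 1 < x;  // only true if the addition overflowed -- already UB
    }

    int main() {
        int32_t max = std::numeric_limits<int32_t>::max();
        // May print 0 under optimization, even though the hardware add wraps.
        std::cout << wraps_to_negative(max) << '\n';
    }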
This rabbit hole can be chased forever, until you are left with every C++ implementation being a full-blown interpreter.
While I agree with your statement in general, I think this is not a good example. Integer arithmetic is two's complement on every platform of any interest from the last 30+ years (C++20 even mandates a two's-complement representation for signed integers), and unsigned integer overflow is already well defined in the standard. Keeping signed integer overflow UB is just an arcane decision preserved as a performance hack.
This is especially true given that (a) there is no standard-compliant way of checking whether a signed operation overflowed after the fact, and (b) the most pedantically correct way of looping over arrays and such uses an unsigned type (size_t), negating the usual claims about optimization opportunities.
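To make (a) concrete, a sketch of the portable workaround (checked_add is a made-up name): you have to test *before* adding, because the overflowing addition itself is already UB.

    #include <cstdint>
    #include <limits>

    // Portable pre-check: reject the addition if it would overflow.
    bool checked_add(int32_t a, int32_t b, int32_t& out) {
        if (b > 0 && a > std::numeric_limits<int32_t>::max() - b) return false;
        if (b < 0 && a < std::numeric_limits<int32_t>::min() - b) return false;
        out = a + b;  // now provably in range
        return true;
    }

GCC and Clang do offer __builtin_add_overflow, which compiles down to a hardware flag check, but that is a compiler extension, not standard C++.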
Unsigned integer overflow is well defined. And because of this, you see worse performance when working with unsigned integers in many cases: the compiler has to insert instructions to implement the wraparound at the declared width. It cannot just use the hardware's native arithmetic, because a 32-bit unsigned integer has to wrap at 32 bits whether it lives in a 32-bit or a 64-bit register.
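A sketch of the kind of loop people point to here (sum_unsigned/sum_signed are made-up names; the exact codegen depends on compiler and target):

    #include <cstdint>

    // With a 32-bit unsigned index, i * 4 must wrap mod 2^32, so the compiler
    // has to compute it in 32 bits and zero-extend on every iteration; it
    // cannot strength-reduce the indexing to a simple 64-bit pointer bump.
    double sum_unsigned(const double* a, uint32_t count) {
        double s = 0.0;
        for (uint32_t i = 0; i < count; ++i)
            s += a[i * 4];  // well-defined wraparound blocks the optimization
        return s;
    }

    // With a signed index, overflow in i * 4 is UB, so the compiler may assume
    // it never happens and just advance a pointer by 32 bytes per iteration.
    double sum_signed(const double* a, int32_t count) {
        double s = 0.0;
        for (int32_t i = 0; i < count; ++i)
            s += a[i * 4];
        return s;
    }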
Thus the word "efficiently" in my comment. By requiring weird and almost certainly buggy behavior to be well defined, you force the program to behave more like an interpreted one, with the associated performance costs.