Is it unreasonable for me to want a C/++ standard that's just "What's mostly standard by all major compilers" so we can stop nitpicking about implementation details most people never encounter?
At the time the first version of the C standard was being prepared, the practice of writing C that would only work with the ‘major compiler[s]’ had the nickname “All the world's a VAX”. Now consider the last time you used a VAX.
The VAX is a little-endian machine with 8-bit bytes, a 32-bit datapath, and two's-complement arithmetic.
Now consider the last time you used a machine that was big-endian, had non-8-bit bytes, or used something other than two's-complement integer representation...
At the time of the C standard, architectures were far more diverse; for better or worse, most of the more esoteric ones seem to have died off, leaving us with the ones that do happen to share many characteristics with a VAX.
I used a 6-bit variable-wordlength decimal arithmetic machine on Saturday. (See my post on mining Bitcoin on the IBM 1401.) And someone is writing a C compiler for this machine...
I can't think of any general-purpose architectures in common use which are really odd, but some DSPs have a 24-bit "byte". The microcontroller world is a bit more diverse too, especially with many of them being Harvard architectures (e.g. C compiles to 8051 with 24-bit big-endian pointers, and the various PICs have odd instruction-word sizes like 14 bits).
However, all of those still use two's-complement integers. According to Wikipedia, a few mainframe lines used ones' complement (CDC 3000/6000, UNIVAC 1100/2200), and descendants of the latter survive today.
I know the standard allows different bit representations, but practically speaking, today you're almost never going to find anything other than two's-complement integers and IEEE floating point. And if you do happen to be working on one of those rare and unusual machines, the integer representation will be the least of your worries... even Linus says he doesn't care about Linux running on non-two's-complement or odd-byte-size machines.