Before you start reading everything, note that the author assumed when writing the article that (provided sizeof( int ) > sizeof( char )):
char c = (char)128;
printf( "%d\n", c );
should always print -128, whereas his commenter Mans (March 4, 2011 at 4:15 am) points out that this conversion is implementation-defined according to the standard; that is, compiler authors are free to decide what the result of such a conversion should be.
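For reference, here is a self-contained version of that snippet (a sketch, assuming plain char is signed and 8 bits wide; two's-complement machines typically give -128, but the standard only promises an implementation-defined result):

#include <stdio.h>
#include <limits.h>

int main(void)
{
    /* 128 is out of range for a signed 8-bit char, so this conversion is
       implementation-defined (C99 6.3.1.3p3).  If plain char is unsigned,
       128 fits and c is simply 128. */
    char c = (char)128;
    printf("CHAR_MIN=%d CHAR_MAX=%d c=%d\n", CHAR_MIN, CHAR_MAX, c);
    return 0;
}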
While that's how it is generally understood, there is nothing stopping the implementation from defining the behaviour as:
conversion is rounded up to the nearest multiple of 20 on odd lines.
conversion is rounded down to the nearest multiple of 17 on even lines.
Older versions of gcc actually had this kind of (fully standard-compliant) behaviour for #pragma: encountering one could launch rogue or nethack, among other things -- but later versions gave in and implemented useful pragmas.
That depends on what you mean by "should". Is it required by the standard? AFAIK, no. Would it be a braindead decision on the part of the compiler writers to do otherwise? Almost certainly. In practice, the answer to "what will I actually get?" is -128 on every mainstream implementation.
It's true, but it doesn't affect the analysis. When x is CHAR_MAX and char is signed, the result of ++x is an implementation-defined value with type char - but it doesn't matter what this implementation-defined value is, it must always be less than or equal to CHAR_MAX, so ++x > CHAR_MAX is false no matter what the resolution of the implementation-defined behaviour is.
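To make that concrete, here is a minimal sketch of the kind of expression under discussion; the function name is illustrative, not the article's. Assuming char is signed, ++x yields an implementation-defined char value no greater than CHAR_MAX, so a conforming compiler is entitled to fold the whole function to "return 0;".

#include <limits.h>

/* Illustrative only -- not the article's exact test function. */
int increment_exceeds_char_max(void)
{
    char x = CHAR_MAX;
    return ++x > CHAR_MAX;   /* ++x has type char, so this is never true */
}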
If the ++x is performed using int, there has to be a downcast to store the result in the char x, if you even need to store it (and in the given function you don't); but nowhere do I see the standard insisting that that very int can't be used in the comparison, so 128 > 127 is OK. Note that the right-hand side of the comparison can be an int from the start, even if its value fits in a char. The only thing that was wrong was the older clang.
So the whole big article can be shortened to just: "clang 2.7 had a bug, newer versions don't, and all other compilers are OK."
The standard specifies that ++x is equivalent to x+=1, and that the value of an assignment expression like x+=1 is "the value of the left operand after the assignment" (and that "the type of an assignment expression is the type of the left operand").
So, since x is of type char, the expression ++x has type char and its value is the value of x after the assignment, so it must be in the range CHAR_MIN to CHAR_MAX.
You miss that ++x is not an "x"; it is an expression, about which the OP also writes "The usual arithmetic conversions ensure that two operands to a '+' operator of type signed char are both promoted to signed int before the addition is performed", and I believe that can be found in the text of the standard too. The underlying idea is to allow most of the run-time calculations to be performed with the "native", most efficient type, which is int. I still claim it is implementation-dependent and allowable for this to evaluate to "128 > 127". I won't discuss this further; I accept that you have a different opinion.
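Both sides agree on the promotion step itself; here is a minimal sketch of what it does in isolation, assuming an 8-bit signed char:

#include <stdio.h>

int main(void)
{
    signed char a = 100, b = 100;

    /* Both operands of '+' are promoted to int, so the sum 200 is computed
       exactly even though it does not fit in an 8-bit signed char.  Narrowing
       only happens if the result is stored back into a char-typed object. */
    printf("%d\n", a + b);   /* prints 200 */
    return 0;
}

The disagreement is only about what happens to that int result afterwards.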
The usual arithmetic conversions do cause the x and the 1 in the implicit x+=1 to be promoted to int before the addition, and the addition is performed in an int. However, since the lvalue that is the left-hand operand of the assignment has type char, the result is converted back to char (which, if char is signed and the result is not in the range of char, gives an implementation-defined result) and stored in the object designated by the lvalue. The result of the expression is this converted char value.
The important point here is that the result of an assignment expression (including pre- and post- increment and decrement) is the value that was stored in the object that was assigned to.
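Here is a minimal sketch of that distinction, using signed char explicitly so it does not depend on whether plain char is signed (the out-of-range conversion typically wraps to -128, but is implementation-defined):

#include <stdio.h>
#include <limits.h>

int main(void)
{
    signed char x = SCHAR_MAX;

    /* The addition itself is performed in int, so the mathematical sum
       (128 when signed char is 8 bits) is representable... */
    int promoted_sum = x + 1;

    /* ...but the value of the assignment expression is the value stored back
       into x: an implementation-defined signed char, necessarily <= SCHAR_MAX. */
    signed char assigned = (x += 1);

    printf("promoted sum: %d, value of x += 1: %d\n", promoted_sum, assigned);
    return 0;
}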
You can readily check this by examining the value of sizeof ++x (where x has type char).
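For example (a sketch; the operand of sizeof is not evaluated here, but its type is still determined, and that type is char, not int):

#include <stdio.h>

int main(void)
{
    char x = 0;

    /* ++x is equivalent to x += 1, and an assignment expression has the type
       of its left operand, so sizeof ++x equals sizeof(char), i.e. 1. */
    printf("%zu %zu\n", sizeof ++x, sizeof(char));
    return 0;
}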