I don't know. I can admire the analysis, but I don't understand the motive. Do people really write code that relies on this sort of behavior? Or is it just trivia for trivia's sake?
Oh, and speaking of chars: Objective-C's BOOL is really just a char. Yes, it's signed, and yes it gets int-promoted a lot. I dread to think how many bugs are lurking out there in Objective-C code because of that. I wonder if you could catch some of those by comparing the code generated by compiling with the usual BOOL = char typedef, and the same code but with BOOL typedef'd to _Bool (a real boolean type).
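For what it's worth, here's a minimal sketch of the kind of bug that comparison trick would flush out. It's plain C rather than Objective-C, and contains_byte() is a made-up helper, but returning a count through a BOOL is the classic shape:

    #include <stdio.h>

    typedef signed char BOOL;   /* the traditional Objective-C definition */
    #define YES ((BOOL)1)
    #define NO  ((BOOL)0)

    /* Hypothetical helper: callers treat any nonzero result as "found". */
    static BOOL contains_byte(const char *buf, int len, char target) {
        int count = 0;
        for (int i = 0; i < len; i++)
            if (buf[i] == target)
                count++;
        return count;   /* 256 converts to 0 on typical targets, i.e. NO */
    }

    int main(void) {
        char buf[256];
        for (int i = 0; i < 256; i++)
            buf[i] = 'x';
        /* 256 matches, yet with BOOL = signed char this prints "not found".
           Typedef BOOL to _Bool instead and the return value converts to 1,
           so the two builds generate observably different code here. */
        printf("%s\n", contains_byte(buf, 256, 'x') ? "found" : "not found");
        return 0;
    }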
He's not just doing it for fun; he's a professor at the University of Utah, and he's researching this area, looking for bugs in compilers. In fact, he's developed a tool for this: http://embed.cs.utah.edu/csmith/
These tiny bits of strange code are condensed versions of what you might see in the wild, especially after preprocessing.
Nobody's writing ++x > y literally, but they do write something that looks reasonable, like foo(x) > bar(x), where foo() and bar() return chars.
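Something like this sketch, say (foo() and bar() are made-up names): both char return values are promoted to int before the comparison, so the outcome turns on the signedness of char.

    #include <stdio.h>

    /* Hypothetical helpers that hand back "bytes" as plain char. */
    static char foo(int x) { return (char)(x * 3); }
    static char bar(int x) { return (char)(x + 5); }

    int main(void) {
        int x = 48;
        /* foo(48) has bit pattern 0x90, bar(48) is 0x35 (53).
           Both operands are promoted to int before the comparison:
           if char is signed, 0x90 becomes -112 on a typical target
           and the test is false; if char is unsigned, it becomes 144
           and the test is true. */
        printf("%d\n", foo(x) > bar(x));
        return 0;
    }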
I might write something like "++x > y"; preincrement followed by comparison is a common operation.
(IMHO) For obscure cases there's ideally a clearer way to express the same behaviour that could be recommended to the developer - either helping them find a potential error, or steering them toward a less ambiguous, easier-to-understand notation.
Much more likely is code that relies on it without the author realizing it, producing "random" bugs for some input values.
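Right - here's a small sketch of that failure mode using the ++x > y shape from this thread (assuming a typical 8-bit signed char): it behaves as expected for most starting values and then quietly flips once the char wraps.

    #include <stdio.h>

    int main(void) {
        for (int start = 125; start <= 127; start++) {
            signed char x = (signed char)start;
            signed char y = 100;
            /* ++x is computed in int, but the result is stored back into
               the signed char. For start = 125 or 126 this prints 1 as
               expected; at 127 the stored value wraps (typically to -128)
               and the comparison silently becomes false. */
            printf("start=%d  ++x > y: %d\n", start, ++x > y);
        }
        return 0;
    }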