You know, not long ago I took somebody's "how well do you know C?" quiz, and it was full of this sort of question -- what C does with overflows and underflows in various circumstances. And I must admit, I felt like I had been asked what happens, exactly and specifically, to a particular piece of memory after dividing by zero. "I don't know, I try to avoid doing that!"
I don't know. I can admire the analysis, but I don't understand the motive. Do people really write code that relies on this sort of behavior? Or is it just trivia for trivia's sake?
C's type system is a minefield (and by extension, C++'s and Objective-C's). I've been bitten so many times by implicit signed/unsigned conversions and integer promotion that I'm now practically paranoid about the whole thing. Chars are promoted to int practically anywhere you use them for anything other than plain copying, so if the compilers produce faulty code in those situations, there's no way you can win.
Oh, and speaking of chars: Objective-C's BOOL is really just a char. Yes, it's signed, and yes, it gets int-promoted a lot. I dread to think how many bugs are lurking out there in Objective-C code because of that. I wonder if you could catch some of those by comparing the code generated by compiling with the usual BOOL = signed char typedef, and the same code but with BOOL typedef'd to _Bool (a real boolean type).
If it was a week or two ago, the quiz was probably from the same person as this article.
He's not just doing it for fun; he's a professor at the University of Utah, and he's researching this area, looking for bugs in compilers. In fact, he's developed a tool for this: http://embed.cs.utah.edu/csmith/
These tiny bits of strange code are condensed versions of what you might see in the wild, especially after preprocessing.
Nobody's doing ++x > y, but they do something that looks reasonable like foo(x) > bar(x), where foo() and bar() return chars.
It's interesting when you're trying to do automated code analysis, and C is an important target for that, since it's used everywhere. But computer programs have a hard time knowing when type limits matter and when they don't (or when the developer didn't anticipate them at all), so they have to cover every case.
For obscure cases, ideally there's some clearer way to express the same behaviour that the tool could recommend to the developer (IMHO) -- either helping them find a potential error, or nudging them toward a less ambiguous, easier-to-understand notation.
I do similar deep digging for some topics. It may seem useless, and a lot of times it is. But sometimes digging into a problem helps me understand the bigger picture better, and then it seems worthwhile. Lately I've become more picky about what I'll dig into though.