I'm not a fan of this argument. It all hinges on the assumption that information density is what a language should optimize for. The problem is that ease of disambiguation is also an extremely important feature of human languages, and one that tends to oppose the goal of information density. Taken to its extreme, information density would suggest reducing all characters to binary or some other compressed encoding, but because easy disambiguation is a necessity, no human language could reach such an extreme state and remain widespread.
One observed example of the importance of disambiguation is the phenomenon of extremely common words becoming irregular. In virtually every language that has verb conjugations, the conjugations of the most common verbs (to be, to have, to do, etc.) are irregular. As languages change and acquire verb conjugation, there is actually a documented process of irregularization for the most common words! The same is true of phonetic irregularities (e.g. "you" in English or the は particle in Japanese).
Language is read far more often than it is written. Especially in this age of electronic input, stroke reduction is a trivial concern compared to recognition of very dissimilar words such as "nothing" vs "sky" (無 vs 天) / (无 vs 天) or "noodle" vs "face" (麵 vs 面) / (面 vs 面). I think Chinese character simplification is a fine shorthand tool, but it's not great for literacy. Japanese simplifications, on the other hand, were a lot less extreme. They simplified some of the most ridiculous traditional characters, but generally left the semantic information in the characters intact.
I agree: Chinese simplification was extreme, whereas Japanese simplification tried to retain the idea of the original character. Possibly, an additional political motivation for the simplification was to make it harder to read old literature. This was the same regime that launched the Cultural Revolution.
> with a whopping differential of 21 strokes, 廳→厅 (hall) is the most drastic change in the entire script. As 丁’s phonetic clue is [not very useful], several users of traditional react with horror to this change
Perhaps because that character is often used together with the 餐 (dining) character, which was never simplified, to make 餐厅 (restaurant), which looks jarring when the two are used together. Some good news is that 餐 is often unofficially simplified to its top-left component only, i.e. 歺, reducing 16 strokes down to 5. I often see 歺厅 on signs near where I live (Wuhan), which looks much cleaner. Perhaps another round of character simplification by the PRC is in order!
Edit: Just noticed the picture in the linked-to source showing the same thing. I must have chosen a very common example.
I used to write quite a bit about this topic: http://toshuo.com/2009/japanese-character-simplification-via...